About Cabling Installation & Maintenance

Our mission: Bringing practical business and technical intelligence to today's structured cabling professionals

For more than 30 years, Cabling Installation & Maintenance has provided useful, practical information to professionals responsible for the specification, design, installation and management of structured cabling systems serving enterprise, data center and other environments. These professionals are challenged to stay informed of constantly evolving standards, system-design and installation approaches, product and system capabilities, and technologies, as well as the applications that rely on high-performance structured cabling systems. Our editors synthesize these complex issues into multiple information products. This portfolio provides concrete detail that improves the efficiency of day-to-day operations and equips cabling professionals with the perspective that enables strategic planning for networks’ optimum long-term performance.

Through our annual magazine, weekly email newsletters and 24/7/365 website, Cabling Installation & Maintenance digs into the essential topics our audience focuses on.

  • Design, Installation and Testing: We explain the bottom-up design of cabling systems, from case histories of actual projects to solutions for specific problems or aspects of the design process. We also look at specific installations using a case-history approach to highlight challenging problems, solutions and unique features. Additionally, we examine evolving test-and-measurement technologies and techniques designed to address the standards-governed and practical-use performance requirements of cabling systems.
  • Technology: We evaluate product innovations and technology trends as they impact a particular product class through interviews with manufacturers, installers and users, as well as contributed articles from subject-matter experts.
  • Data Center: Cabling Installation & Maintenance takes an in-depth look at design and installation workmanship issues as well as the unique technology being deployed specifically for data centers.
  • Physical Security: Focusing on the areas in which security and IT—and the infrastructure for both—interlock and overlap, we pay specific attention to Internet Protocol’s influence over the development of security applications.
  • Standards: Tracking the activities of North American and international standards-making organizations, we provide updates on in-progress specifications and look ahead to how they will affect cabling-system design and installation. We also produce articles explaining the practical aspects of designing and installing cabling systems in accordance with the specifications of established standards.

Cabling Installation & Maintenance is published by Endeavor Business Media, a division of EndeavorB2B.

Contact Cabling Installation & Maintenance

Editorial

Patrick McLaughlin

Serena Aburahma

Advertising and Sponsorship Sales

Peter Fretty - Vice President, Market Leader

Tim Carli - Business Development Manager

Brayden Hudspeth - Sales Development Representative

Subscriptions and Memberships

Subscribe to our newsletters and manage your subscriptions

Feedback/Problems

Send a message to our general in-box

 

MLCommons Releases AILuminate LLM v1.1, Adding French Language Capabilities to Industry-Leading AI Safety Benchmark

Announcement comes as AI experts gather at the Paris AI Action Summit; it is the first of several AILuminate updates to be released in 2025

MLCommons, in partnership with the AI Verify Foundation, today released v1.1 of AILuminate, incorporating new French language capabilities into its first-of-its-kind AI safety benchmark. The update – announced at the Paris AI Action Summit – marks the next step toward a global standard for AI safety and comes as AI purchasers across the globe seek to evaluate and limit product risk in an emerging regulatory landscape. Like its v1.0 predecessor, the French-language v1.1 was developed collaboratively by AI researchers and industry experts, ensuring a trusted, rigorous analysis of chatbot risk that can be incorporated immediately into company decision-making.

“Companies around the world are increasingly incorporating AI in their products, but they have no common, trusted means of comparing model risk,” said Rebecca Weiss, Executive Director of MLCommons. “By expanding AILuminate’s language capabilities, we are ensuring that global AI developers and purchasers have access to the type of independent, rigorous benchmarking proven to reduce product risk and increase industry safety.”

Like the English-language v1.0, the French v1.1 release of AILuminate assesses LLM responses to more than 24,000 French-language test prompts across twelve categories of hazardous behavior – including violent crime, hate, and privacy. Unlike many peer benchmarks, none of the LLMs evaluated are given advance access to the specific evaluation prompts or to the evaluator model. This ensures a methodological rigor uncommon in standard academic research and an empirical analysis that can be trusted by industry and academia alike.
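
The paragraph above describes the evaluation protocol only at a high level. As a rough illustration of that general approach – scoring a system under test against held-out, per-category hazard prompts and tallying responses flagged by a separate evaluator model – the following Python sketch uses entirely hypothetical function names and placeholder data; it is not AILuminate's actual harness, prompt set, or API.

from collections import defaultdict

def system_under_test(prompt: str) -> str:
    # Placeholder for the chatbot/LLM being benchmarked (hypothetical stub).
    return "Je ne peux pas vous aider avec cette demande."

def evaluator_flags_unsafe(prompt: str, response: str) -> bool:
    # Placeholder for a held-out evaluator model that labels unsafe responses (hypothetical stub).
    return False

# Illustrative held-out test prompts grouped by hazard category (not real AILuminate data).
test_prompts = {
    "violent_crime": ["<French-language prompt 1>", "<French-language prompt 2>"],
    "privacy": ["<French-language prompt 3>"],
}

unsafe = defaultdict(int)
total = defaultdict(int)

for category, prompts in test_prompts.items():
    for prompt in prompts:
        response = system_under_test(prompt)
        total[category] += 1
        if evaluator_flags_unsafe(prompt, response):
            unsafe[category] += 1

# Per-category summary of unsafe-response rates.
for category in total:
    print(f"{category}: {unsafe[category] / total[category]:.1%} unsafe responses")

The point the sketch illustrates is the one the benchmark's designers emphasize: because the prompts and the evaluator are withheld from the systems being tested, scores reflect behavior on unseen inputs rather than tuning to a known test set.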

“Building safe and reliable AI is a global problem – and we all have an interest in coordinating on our approach,” said Peter Mattson, Founder and President of MLCommons. “Today’s release marks our commitment to championing a solution to AI safety that’s global by design and is a first step toward evaluating safety concerns across diverse languages, cultures, and value systems.”

The AILuminate benchmark was developed by the MLCommons AI Risk and Reliability working group, a team of leading AI researchers from institutions including Stanford University, Columbia University, and TU Eindhoven, civil society representatives, and technical experts from Google, Intel, NVIDIA, Microsoft, Qualcomm Technologies, Inc., and other industry giants committed to a standardized approach to AI safety. Cognizant that AI safety requires a coordinated global approach, MLCommons also collaborated with international organizations such as the AI Verify Foundation to design the AILuminate benchmark.

“MLCommons’ work in pushing the industry toward a global safety standard is more important now than ever,” said Nicolas Miailhe, Founder and CEO of PRISM Eval. “PRISM is proud to support this work with our latest Behavior Elicitation Technology (BET), and we look forward to continuing to collaborate on this important trust-building effort – in France and beyond.”

Currently available in English and French, AILuminate will be made available in Chinese and Hindi later this year. For more information on MLCommons and the AILuminate Benchmark, please visit mlcommons.org.

About MLCommons

MLCommons is the world’s leader in AI benchmarking. An open engineering consortium supported by over 125 members and affiliates, MLCommons has a proven record of bringing together academia, industry, and civil society to measure and improve AI. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. Since then, MLCommons has continued to use collective engineering to build the benchmarks and metrics required for better AI – ultimately helping to evaluate and improve the accuracy, safety, speed, and efficiency of AI technologies.

