About Cabling Installation & Maintenance

Our mission: Bringing practical business and technical intelligence to today's structured cabling professionals

For more than 30 years, Cabling Installation & Maintenance has provided useful, practical information to professionals responsible for the specification, design, installation and management of structured cabling systems serving enterprise, data center and other environments. These professionals are challenged to stay informed of constantly evolving standards, system-design and installation approaches, product and system capabilities, technologies, and applications that rely on high-performance structured cabling systems. Our editors synthesize these complex issues into a portfolio of information products that provides concrete detail to improve the efficiency of day-to-day operations and equips cabling professionals with the perspective to plan strategically for their networks’ optimum long-term performance.

Through our annual magazine, weekly email newsletters and 24/7/365 website, Cabling Installation & Maintenance digs into the essential topics our audience focuses on.

  • Design, Installation and Testing: We explain the bottom-up design of cabling systems, from case histories of actual projects to solutions for specific problems or aspects of the design process. We also look at specific installations using a case-history approach to highlight challenging problems, solutions and unique features. Additionally, we examine evolving test-and-measurement technologies and techniques designed to address the standards-governed and practical-use performance requirements of cabling systems.
  • Technology: We evaluate product innovations and technology trends as they impact a particular product class through interviews with manufacturers, installers and users, as well as contributed articles from subject-matter experts.
  • Data Center: Cabling Installation & Maintenance takes an in-depth look at design and installation workmanship issues as well as the unique technology being deployed specifically for data centers.
  • Physical Security: Focusing on the areas in which security and IT—and the infrastructure for both—interlock and overlap, we pay specific attention to Internet Protocol’s influence over the development of security applications.
  • Standards: Tracking the activities of North American and international standards-making organizations, we provide updates on in-progress specifications and look ahead to how they will affect cabling-system design and installation. We also produce articles explaining the practical aspects of designing and installing cabling systems in accordance with the specifications of established standards.

Cabling Installation & Maintenance is published by Endeavor Business Media, a division of EndeavorB2B.

Contact Cabling Installation & Maintenance

Editorial

Patrick McLaughlin

Serena Aburahma

Advertising and Sponsorship Sales

Peter Fretty - Vice President, Market Leader

Tim Carli - Business Development Manager

Brayden Hudspeth - Sales Development Representative

Subscriptions and Memberships

Subscribe to our newsletters and manage your subscriptions

Feedback/Problems

Send a message to our general inbox


New MLPerf Inference Benchmark Results Highlight The Rapid Growth of Generative AI Models

With 70 billion parameters, Llama 2 70B is the largest model added to the MLPerf Inference benchmark suite

Today, MLCommons® announced new results from our industry-standard MLPerf® Inference v4.0 benchmark suite, which measures machine learning (ML) system performance in an architecture-neutral, representative, and reproducible manner.

MLPerf Inference v4.0

The MLPerf Inference benchmark suite, which encompasses both data center and edge systems, is designed to measure how quickly hardware systems can run AI and ML models in a variety of deployment scenarios. To keep pace with today’s ever-changing generative AI landscape, the working group created a new task force to determine which generative AI models should be added to the v4.0 version of the benchmark suite. The task force weighed factors including model licenses, ease of use and deployment, and accuracy.
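
For context on how these measurements are orchestrated (a detail not covered in the announcement): MLPerf Inference runs are driven by MLCommons’ open-source LoadGen library, which issues queries to the system under test with the timing pattern of the chosen scenario, such as Offline or Server. Below is a minimal Python sketch of that flow, assuming the mlperf_loadgen bindings are installed; the predict() function is a hypothetical stand-in for a real model, and exact API details can vary by LoadGen version.

```python
# Minimal sketch of a LoadGen-driven run; predict() is a hypothetical
# stand-in for whatever model the system under test actually serves.
import mlperf_loadgen as lg

def predict(sample_index):
    return b"result"  # placeholder inference

def issue_queries(query_samples):
    # LoadGen calls this with queries timed according to the scenario.
    responses = []
    for qs in query_samples:
        predict(qs.index)
        responses.append(lg.QuerySampleResponse(qs.id, 0, 0))  # no payload
    lg.QuerySamplesComplete(responses)

def flush_queries():
    pass  # nothing buffered in this sketch

settings = lg.TestSettings()
settings.scenario = lg.TestScenario.Offline  # Server, SingleStream, MultiStream also exist
settings.mode = lg.TestMode.PerformanceOnly

sut = lg.ConstructSUT(issue_queries, flush_queries)
qsl = lg.ConstructQSL(1024, 1024, lambda idxs: None, lambda idxs: None)
lg.StartTest(sut, qsl, settings)
lg.DestroyQSL(qsl)
lg.DestroySUT(sut)
```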

After careful consideration, we opted to add two new benchmarks to the suite. The Llama 2 70B model, with its 70 billion parameters, was chosen to represent “larger” LLMs, while Stable Diffusion XL was selected to represent text-to-image generative AI models.

Llama 2 70B is an order of magnitude larger than the GPT-J model introduced in MLPerf Inference v3.1, and correspondingly more accurate. One of the reasons it was selected for inclusion in the MLPerf Inference v4.0 release is that a model of this size requires a different class of hardware than smaller LLMs, making it a great benchmark for higher-end systems. We are thrilled to collaborate with Meta to bring Llama 2 70B to the MLPerf Inference v4.0 benchmark suite. You can learn more about the selection of Llama 2 in our deep-dive post.
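
To make the hardware-class point concrete, here is a back-of-the-envelope calculation of our own (not from the announcement): at 16-bit precision, weights occupy roughly two bytes per parameter, so GPT-J’s roughly 6 billion parameters fit in about 12 GB, while Llama 2 70B’s weights alone need about 140 GB, more than a single commodity accelerator typically offers.

```python
def weight_memory_gb(params_billions: float, bytes_per_param: float = 2) -> float:
    """Memory for model weights alone, assuming 16-bit (2-byte) parameters."""
    return params_billions * bytes_per_param  # 1e9 params * bytes / 1e9 bytes per GB

for name, params_b in [("GPT-J (~6B)", 6), ("Llama 2 70B", 70)]:
    print(f"{name}: ~{weight_memory_gb(params_b):.0f} GB of weights at FP16")
# GPT-J (~6B): ~12 GB of weights at FP16
# Llama 2 70B: ~140 GB of weights at FP16
```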

"Generative AI use-cases are front and center in our v4.0 submission round,” said Mitchelle Rasquinha, co-chair of the MLPerf Inference working group. “In terms of model parameters, Llama 2 is a dramatic increase to the models in the inference suite. Dedicated task-forces worked around the clock to set up the benchmarks and both models received competitive submissions. Congratulations to all!"

For the second new generative AI benchmark, the task force chose Stability AI’s Stable Diffusion XL, with 2.6 billion parameters. This popular model creates compelling images from text-based prompts. By generating a large number of images, the benchmark can calculate metrics such as latency and throughput to characterize overall performance.
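
The announcement does not spell out how those metrics are computed, but the general pattern is straightforward to illustrate. In this sketch, generate_image() is a hypothetical stand-in for a Stable Diffusion XL pipeline; the timing and percentile logic is the point.

```python
import statistics
import time

def generate_image(prompt: str) -> bytes:
    time.sleep(0.01)  # stand-in for actual diffusion sampling
    return b"image-bytes"

prompts = ["a technician terminating fiber in a data center"] * 50
latencies = []
start = time.perf_counter()
for prompt in prompts:
    t0 = time.perf_counter()
    generate_image(prompt)
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"throughput: {len(prompts) / elapsed:.2f} images/s")
print(f"median latency: {1000 * statistics.median(latencies):.1f} ms")
print(f"p99 latency: {1000 * statistics.quantiles(latencies, n=100)[98]:.1f} ms")
```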

“The v4.0 release of MLPerf Inference represents a full embrace of generative AI within the benchmark suite,” said Miro Hodak, MLPerf Inference co-chair. “A full third of the benchmarks are generative AI workloads, including a small and a large LLM and a text-to-image generator, ensuring that the MLPerf Inference benchmark suite captures the current state of the art.”

MLPerf Inference v4.0 includes over 8,500 performance results and 900 power results from 23 submitting organizations: ASUSTeK, Azure, Broadcom, Cisco, CTuning, Dell, Fujitsu, Giga Computing, Google, Hewlett Packard Enterprise, Intel, Intel Habana Labs, Juniper Networks, Krai, Lenovo, NVIDIA, Oracle, Qualcomm Technologies, Inc., Quanta Cloud Technology, Red Hat, Supermicro, SiMa, and Wiwynn.

Four firms (Dell, Fujitsu, NVIDIA, and Qualcomm Technologies, Inc.) submitted data-center-focused power numbers for MLPerf Inference v4.0. The power tests require power-consumption measurements to be captured while the MLPerf Inference benchmarks run, and the resulting figures indicate the power-efficient performance of the systems tested. The latest submissions demonstrate continued progress in efficient AI acceleration.
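
As an illustration of what “power-efficient performance” works out to (the numbers below are invented, not taken from the v4.0 results): each power submission pairs a benchmark run with synchronized system-power measurement, and efficiency reduces to performance per watt.

```python
def perf_per_watt(queries_per_second: float, avg_system_watts: float) -> float:
    """Power-efficient performance: benchmark throughput per watt drawn."""
    return queries_per_second / avg_system_watts

# Hypothetical system: 10,000 queries/s at an average draw of 2,500 W.
print(f"{perf_per_watt(10_000, 2_500):.1f} queries/s per watt")  # 4.0
```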

“Submitting to MLPerf is quite challenging and a real accomplishment,” said David Kanter, executive director of MLCommons. “Due to the complex nature of machine-learning workloads, each submitter must ensure that both their hardware and software stacks are capable, stable, and performant for running these types of ML workloads. This is a considerable undertaking. In that spirit, we celebrate the hard work and dedication of Juniper Networks, Red Hat, and Wiwynn, who are all first-time submitters for the MLPerf Inference benchmark.”

View the Results

To view the results for MLPerf Inference v4.0, visit the Datacenter and Edge results pages.

About MLCommons

MLCommons is the world leader in building benchmarks for AI. It is an open engineering consortium with a mission to make AI better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmarks in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 125+ members (global technology providers, academics, and researchers), MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a member or affiliate, please visit MLCommons.org or contact participation@mlcommons.org.

"Generative AI use-cases are front and center in our v4.0 submission round”

Contacts
