About Cabling Installation & Maintenance

Our mission: Bringing practical business and technical intelligence to today's structured cabling professionals

For more than 30 years, Cabling Installation & Maintenance has provided useful, practical information to professionals responsible for the specification, design, installation and management of structured cabling systems serving enterprise, data center and other environments. These professionals are challenged to stay informed of constantly evolving standards, system-design and installation approaches, product and system capabilities, and the technologies and applications that rely on high-performance structured cabling systems. Our editors synthesize these complex issues into a portfolio of information products that provides concrete detail to improve the efficiency of day-to-day operations and equips cabling professionals with the perspective they need to plan for their networks’ optimum long-term performance.

Through our annual magazine, weekly email newsletters and 24/7/365 website, Cabling Installation & Maintenance digs into the essential topics our audience focuses on.

  • Design, Installation and Testing: We explain the bottom-up design of cabling systems, from case histories of actual projects to solutions for specific problems or aspects of the design process. We also use a case-history approach to examine specific installations, highlighting challenging problems, solutions and unique features, and we track evolving test-and-measurement technologies and techniques designed to address the standards-governed and practical-use performance requirements of cabling systems.
  • Technology: We evaluate product innovations and technology trends as they impact a particular product class through interviews with manufacturers, installers and users, as well as contributed articles from subject-matter experts.
  • Data Center: Cabling Installation & Maintenance takes an in-depth look at design and installation workmanship issues as well as the unique technology being deployed specifically for data centers.
  • Physical Security: Focusing on the areas in which security and IT—and the infrastructure for both—interlock and overlap, we pay specific attention to Internet Protocol’s influence over the development of security applications.
  • Standards: Tracking the activities of North American and international standards-making organizations, we provide updates on specifications that are in progress and look ahead to how they will affect cabling-system design and installation. We also publish articles explaining the practical aspects of designing and installing cabling systems in accordance with established standards.

Cabling Installation & Maintenance is published by Endeavor Business Media, a division of EndeavorB2B.

Contact Cabling Installation & Maintenance

Editorial

Patrick McLaughlin

Serena Aburahma

Advertising and Sponsorship Sales

Peter Fretty - Vice President, Market Leader

Tim Carli - Business Development Manager

Brayden Hudspeth - Sales Development Representative

Subscriptions and Memberships

Subscribe to our newsletters and manage your subscriptions

Feedback/Problems

Send a message to our general in-box


MLPerf Results Show Rapid AI Performance Gains

Latest benchmarks highlight progress in training advanced neural networks and deploying AI models on the edge.

Today, MLCommons®, an open engineering consortium, announced new results from two industry-standard MLPerf™ benchmark suites: Training v3.0, which measures the performance of training machine learning models, and Tiny v1.1, which measures how quickly a trained neural network can process new data on extremely low-power devices in the smallest form factors.

Faster training paves the way for more capable intelligent systems

Training models faster empowers researchers to unlock new capabilities, such as the latest advances in generative AI. The latest MLPerf Training round demonstrates broad industry participation and highlights performance gains of up to 1.54x compared to just six months ago and 33-49x over the first round, reflecting the tremendous rate of innovation in systems for machine learning.
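
For context, a speedup figure such as 1.54x is simply the ratio of the earlier time-to-train to the newer one. The short Python sketch below illustrates the arithmetic; the two times are hypothetical values chosen only to reproduce the headline figure, not actual submission data.

```python
# Hypothetical times-to-train, in minutes; the actual per-benchmark
# results are published on the MLCommons results pages.
previous_round_minutes = 10.0
current_round_minutes = 6.5

# Speedup is the ratio of the older time to the newer time.
speedup = previous_round_minutes / current_round_minutes
print(f"speedup: {speedup:.2f}x")  # prints "speedup: 1.54x"
```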

The MLPerf Training benchmark suite comprises full system tests that stress machine learning models, software, and hardware for a broad range of applications. The open-source and peer-reviewed benchmark suite provides a level playing field for competition that drives innovation, performance, and energy-efficiency for the entire industry.
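
Concretely, MLPerf Training scores a system on the wall-clock time it takes to train a model to a predefined quality target. The Python sketch below is a minimal illustration of that time-to-train idea, not the official MLPerf harness; the simulated training loop and the 0.90 accuracy target are hypothetical stand-ins.

```python
import random
import time

def train_one_epoch(state):
    # Hypothetical stand-in for a real training epoch: nudge the
    # measured "accuracy" upward to emulate convergence.
    state["accuracy"] += random.uniform(0.05, 0.15)
    return state

def time_to_train(target_accuracy=0.90, max_epochs=100):
    """Measure wall-clock time until the model reaches the quality
    target, which is the core scoring idea behind MLPerf Training."""
    state = {"accuracy": 0.0}
    start = time.perf_counter()
    for epoch in range(1, max_epochs + 1):
        state = train_one_epoch(state)
        if state["accuracy"] >= target_accuracy:
            return epoch, time.perf_counter() - start
    raise RuntimeError("quality target not reached")

if __name__ == "__main__":
    epochs, seconds = time_to_train()
    print(f"reached target in {epochs} epochs, {seconds:.3f} s wall clock")
```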

In this round, MLPerf Training added two new benchmarks to the suite. The first is a large language model (LLM) using the GPT-3 reference model that reflects the rapid adoption of generative AI. The second is an updated recommender, modified to be more representative of industry practices, using the DLRM-DCNv2 reference model. These new tests help advance AI by ensuring that industry-standard benchmarks are representative of the latest trends in adoption and can help guide customers, vendors, and researchers alike.

“I’m excited to see the debut of GPT-3 and DLRM-DCNv2, which were built based on extensive feedback from the community and leading customers and demonstrate our commitment to keep the MLPerf benchmarks representative of modern machine learning,” said David Kanter, executive director of MLCommons.

The MLPerf Training v3.0 round includes over 250 performance results, an increase of 62% over the last round, from 16 different submitters: ASUSTek, Azure, Dell, Fujitsu, GIGABYTE, H3C, IEI, Intel & Habana Labs, Krai, Lenovo, NVIDIA, NVIDIA + CoreWeave, Quanta Cloud Technology, Supermicro, and xFusion. In particular, MLCommons would like to congratulate first-time MLPerf Training submitters CoreWeave, IEI, and Quanta Cloud Technology.

“It is truly remarkable to witness system engineers continuously pushing the boundaries of performance on workloads that hold utmost value for users via MLPerf,” said Ritika Borkar, co-chair of the MLPerf Training Working Group. “We are particularly thrilled to incorporate an LLM benchmark in this round, as it will inspire system innovation for a workload that has the potential of revolutionizing countless applications.”

MLPerf Tiny Results Reflect the Rapid Pace of Embedded-Device Innovation

Tiny compute devices are a pervasive part of everyday life, from the tire sensors in your vehicle to your appliances and even your fitness tracker. Tiny devices bring intelligence to life at very little cost.

ML inference on the edge is increasingly attractive because it improves the energy efficiency, privacy, responsiveness, and autonomy of edge devices. Tiny ML breaks the traditional paradigm of energy- and compute-hungry ML by eliminating networking overhead, allowing for greater overall efficiency and security relative to a cloud-centric approach. The MLPerf Tiny benchmark suite captures a variety of inference use cases that involve "tiny" neural networks, typically 100 kB and below, that process sensor data, such as audio and vision, to provide endpoint intelligence for low-power devices in the smallest form factors. MLPerf Tiny tests these capabilities in a fair and reproducible manner, in addition to offering optional power measurement.
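
To make the workload concrete, the sketch below times single-sample inference through a toy fully connected network in NumPy and reports its weight footprint and median latency. It is an illustrative stand-in, not the MLPerf Tiny harness: the layer sizes are arbitrary, and a real submission runs a reference model (for example, keyword spotting or image classification) on an embedded runtime, optionally with power measurement.

```python
import time
import numpy as np

rng = np.random.default_rng(0)

# A toy "tiny" model: two dense layers, well under 100 kB of weights.
W1 = rng.standard_normal((64, 32)).astype(np.float32)  # input dim 64 -> 32
W2 = rng.standard_normal((32, 10)).astype(np.float32)  # 32 -> 10 classes

def infer(x: np.ndarray) -> np.ndarray:
    """Single-sample forward pass: dense -> ReLU -> dense."""
    h = np.maximum(x @ W1, 0.0)
    return h @ W2

# Time many single-sample inferences and report the median latency,
# the kind of per-inference figure tiny-ML benchmarks focus on.
sample = rng.standard_normal(64).astype(np.float32)
latencies = []
for _ in range(1000):
    t0 = time.perf_counter()
    infer(sample)
    latencies.append(time.perf_counter() - t0)

weights_kb = (W1.nbytes + W2.nbytes) / 1024
print(f"model weights: {weights_kb:.1f} kB")
print(f"median latency: {np.median(latencies) * 1e6:.1f} µs")
```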

In this round, the MLPerf Tiny v1.1 benchmarks include 10 submissions from academic institutions, industry organizations, and national labs, producing 159 peer-reviewed results. Submitters include: Bosch, cTuning, fpgaConvNet, Kai Jiang, Krai, Nuvoton, Plumerai, Skymizer, STMicroelectronics, and Syntiant. This round also includes 41 power measurements. MLCommons congratulates Bosch, cTuning, fpgaConvNet, Kai Jiang, Krai, Nuvoton, and Skymizer on their first submissions to MLPerf Tiny.

“I’m particularly excited to see so many companies embrace the Tiny ML benchmark suite,” said David Kanter, executive director of MLCommons. “We had 7 new submitters this round, which demonstrates the value and importance of a standard benchmark in enabling device makers and researchers to choose the best solution for their use case.”

“With so many new companies adopting the benchmark suite, it’s really extended the range of hardware solutions and innovative software frameworks covered. The v1.1 release includes submissions ranging from tiny and inexpensive microcontrollers to larger FPGAs, showing a large variety of design choices,” said Dr. Csaba Kiraly, co-chair of the MLPerf Tiny Working Group. “And the combined effect of software and hardware performance improvements is 1000-fold in some areas compared to our initial reference benchmark results, which shows the pace at which innovation is happening in the field.”

View the Results

To view the results for MLPerf Training v3.0 and MLPerf Tiny v1.1, and to find additional information about the benchmarks, please visit the Training v3.0 and Tiny v1.1 results pages.

About MLCommons

MLCommons is an open engineering consortium with a mission to make machine learning better for everyone through benchmarks and data. The foundation for MLCommons began with the MLPerf benchmark in 2018, which rapidly scaled as a set of industry metrics to measure machine learning performance and promote transparency of machine learning techniques. In collaboration with its 50+ members (global technology providers, academics, and researchers), MLCommons is focused on collaborative engineering work that builds tools for the entire machine learning industry through benchmarks and metrics, public datasets, and best practices.

For additional information on MLCommons and details on becoming a Member or Affiliate, please visit http://mlcommons.org/ and contact participation@mlcommons.org.
