Is Meta's Pushback on NVIDIA With In-House Chips Good for Shares?

A new report about Magnificent Seven stock Meta Platforms (NASDAQ: META) has recently come out. Reuters, owned by Thomson Reuters (NYSE: TRI), reported that Meta is testing its own semiconductors for training AI models.

This is a direct attempt to reduce reliance on graphics processing unit (GPU) maker NVIDIA (NASDAQ: NVDA).

So, what are Meta’s in-house chips, and why is the company making this move?

How can this help the company limit AI costs, and could this development potentially benefit the stock in the long term?

Meta’s In-House Chips: What They Are and Why Meta Is Pushing Back on NVIDIA

Meta is now testing its first in-house-made chip specifically for AI training. AI training refers to the generally one-time and upfront cost of teaching an AI model how to think and make predictions. Meta has reportedly already been using in-house-made chips for inference. Inference refers to the process of an AI model actually responding to a unique question or circumstance. This is what happens when someone asks ChatGPT a question, for example. Companies must train models before they can perform inference.

Both of these tasks require extensive computational power and energy costs but take place at different times. Training requires much more upfront computing and energy costs. Once a firm trains a model, the costs for each individual inference are low. However, since inference is ongoing, those small costs add up, potentially creating a larger cost over time.
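The trade-off above is easy to see with a quick back-of-the-envelope sketch. The dollar figures below are purely hypothetical, chosen only to illustrate how a large one-time training bill compares with a tiny per-query inference cost that accrues over billions of queries:

```python
# Hypothetical figures for illustration only -- not Meta's actual costs.
TRAINING_COST = 100_000_000   # one-time, upfront training spend ($)
COST_PER_INFERENCE = 0.001    # cost per individual query ($)

def total_cost(num_inferences: int) -> float:
    """Cumulative spend: fixed training cost plus per-query inference cost."""
    return TRAINING_COST + COST_PER_INFERENCE * num_inferences

# Early on, training dominates the total:
print(total_cost(1_000_000))         # 100,001,000 -- almost all training
# At billions of queries, inference spend dwarfs the training bill:
print(total_cost(500_000_000_000))   # 600,000,000 -- inference is 5x training
```

Under these assumed numbers, inference overtakes training once cumulative queries pass 100 billion, which is why cutting per-query cost matters so much at Meta's user scale.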

Overall, Meta is looking at both ends of the spectrum, trying to cut costs on both training and inference. The company wants to create better models over time through training, and it wants billions of users interacting with those models through inference, so reducing costs on both fronts is essential.

The inclination to do this by creating its own in-house chips makes sense, considering the massive cost of NVIDIA’s GPUs. NVIDIA's gross margin was nearly 74% last quarter, a testament to its massive pricing power. NVIDIA chips are used for both training and inference. Building in-house chips introduces more competition for NVIDIA, meaning companies like Meta do not have to simply accept whatever price NVIDIA charges.
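To see what a roughly 74% gross margin implies about pricing power, consider the standard gross-margin formula. The dollar amounts below are illustrative, not NVIDIA's actual unit economics:

```python
def gross_margin(revenue: float, cogs: float) -> float:
    """Gross margin as a fraction of revenue: (revenue - COGS) / revenue."""
    return (revenue - cogs) / revenue

# Invert the formula: at a 74% margin, revenue = COGS / (1 - margin),
# so every $1.00 of production cost supports roughly $3.85 of revenue.
cogs = 1.00
revenue = cogs / (1 - 0.74)
print(round(revenue, 2))                      # 3.85
print(round(gross_margin(revenue, cogs), 2))  # 0.74
```

In other words, buyers like Meta are paying nearly four times NVIDIA's cost of goods, which is the price gap an in-house chip program aims to close.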

Energy Plays a Massive Role in Meta’s In-House-Chip Advancement

Custom AI-dedicated chips like the one Meta has started using can also offer both faster performance and lower power consumption compared to GPUs. This is true for specialized inference tasks, though GPUs still lead in general-purpose GenAI performance. Amazon (NASDAQ: AMZN) says its custom chip, Trainium2, offers 30% to 40% better price-to-performance than NVIDIA’s H100 GPUs.
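"Better price-to-performance" is usually read as more performance per dollar. A minimal sketch with normalized, hypothetical numbers shows what a claim in the 30% to 40% range would mean in practice (using 35% as a midpoint):

```python
def price_performance(performance: float, price: float) -> float:
    """Performance delivered per dollar spent."""
    return performance / price

# Hypothetical baseline: a GPU delivering 1.0 performance units per $1.00.
gpu = price_performance(1.0, 1.0)

# A chip with "35% better price-performance" delivers 1.35 units per dollar.
custom = gpu * 1.35
print(custom)                    # 1.35 units per dollar

# Equivalently, the same amount of work costs about 26% less.
print(round(1 - 1 / 1.35, 2))    # 0.26
```

So a 30% to 40% price-performance advantage translates to roughly a 23% to 29% lower bill for the same workload, under these assumptions.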

The MIT Sloan School of Management expects energy use from data centers to skyrocket. Data centers currently account for 1% to 2% of global energy demand, and MIT says that figure could reach 21% by 2030 once AI-related costs are factored in. Reducing the energy its chips consume is therefore massively important for Meta, as it could lower what the company pays under the energy agreements that power its massive data centers.

Meta is at different stages of using its in-house chips, depending on the task. The company is currently using its inference chip, known as Artemis, at scale to deliver recommendations. This includes delivering advertisements and short-form videos like Reels, and recommending other content on Facebook and Instagram. However, Meta has not yet adopted Artemis for GenAI purposes.

As reported by Reuters, the company is just beginning testing of its training chip. If all goes well, it hopes to start using the chip for training by 2026. The company will first examine how the chip can train recommender systems, and then how it could do so for GenAI.

Meta’s In-House Chips Can Help Reduce Costs, Potentially Benefiting Shares Long-Term

Meta’s push to build its own in-house chips could be a significant tailwind for the stock. The prospect of greatly reduced costs could be additive to the company’s margins long-term. Still, stakeholders will have to wait and see whether the tests of its training chip succeed. Additionally, ramping up production of these chips would carry significant costs of its own, perhaps putting a near-to-medium-term drag on margins.

Further developments regarding the testing phase of the training chip and future implementation of both chips will be key to watch.
