Inception Raises $50M to Power Diffusion LLMs, Increasing LLM Speed and Efficiency by up to 10X and Unlocking Real-Time, Accessible AI Applications

  • New funding will scale the development of faster, more efficient AI models for text, voice, and code
  • Inception dLLMs have already demonstrated 10x speed and efficiency gains over traditional LLMs

Inception, the company pioneering diffusion large language models (dLLMs), today announced it has raised $50 million in funding. The round was led by Menlo Ventures, with participation from Mayfield, Innovation Endeavors, NVentures (NVIDIA’s venture capital arm), M12 (Microsoft’s venture capital fund), Snowflake Ventures, and Databricks Investment.

Today’s LLMs are painfully slow and expensive. They use a technique called autoregression to generate words sequentially. One. At. A. Time. This structural bottleneck prevents enterprises from deploying AI at scale and forces users into query-and-wait interactions.

Inception applies a fundamentally different approach. Its dLLMs leverage the technology behind image and video breakthroughs like DALL·E, Midjourney, and Sora to generate answers in parallel. This shift enables text generation that is 10x faster and more efficient while delivering best-in-class quality.
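To make the contrast concrete, the following is a toy, illustrative-only sketch in Python (not Inception’s implementation; the “models” are random stand-ins). An autoregressive loop needs one model call per generated token, while a diffusion-style loop refines every position of the sequence in parallel over a small, fixed number of denoising passes.

    # Illustrative-only: sequential autoregressive decoding vs. parallel,
    # diffusion-style refinement. The "model" functions are toy stand-ins.
    import random

    VOCAB = ["the", "market", "opened", "higher", "today"]
    MASK = "[MASK]"

    def toy_ar_step(prefix):
        # Autoregressive: one forward pass yields exactly one next token.
        return random.choice(VOCAB)

    def autoregressive_generate(n_tokens):
        tokens = []
        for _ in range(n_tokens):          # n_tokens sequential model calls
            tokens.append(toy_ar_step(tokens))
        return tokens

    def toy_denoise_step(tokens):
        # Diffusion-style: one pass updates all masked positions in parallel.
        return [random.choice(VOCAB) if t == MASK else t for t in tokens]

    def diffusion_generate(n_tokens, n_steps=4):
        tokens = [MASK] * n_tokens
        for _ in range(n_steps):           # a small, fixed number of passes
            tokens = toy_denoise_step(tokens)
        return tokens

    if __name__ == "__main__":
        print(autoregressive_generate(8))  # cost grows with sequence length
        print(diffusion_generate(8))       # cost grows with refinement steps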

Mercury, Inception’s first model and the only commercially available dLLM, is 5-10x faster than speed-optimized models from providers including OpenAI, Anthropic, and Google, while matching their accuracy. These gains make Inception’s models ideal for latency-sensitive applications like interactive voice agents, live code generation, and dynamic user interfaces. The efficiency gains also shrink the GPU footprint, allowing organizations to run larger models at the same latency and cost, or serve more users on the same infrastructure.

“The team at Inception has demonstrated that dLLMs aren’t just a research breakthrough; they’re a foundation for building scalable, high-performance language models that enterprises can deploy today,” said Tim Tully, Partner at Menlo Ventures. “With a track record of pioneering breakthroughs in diffusion models, Inception’s best-in-class founding team is turning deep technical insight into real-world speed, efficiency, and enterprise-ready AI.”

“Training and deploying large-scale AI models is becoming faster than ever, but as adoption scales, inefficient inference is becoming the primary barrier and cost driver to deployment,” said Inception CEO and co-founder Stefano Ermon. “We believe diffusion is the path forward for making frontier model performance practical at scale.”

The funds raised will enable Inception to accelerate product development, grow its research and engineering teams, and deepen work on diffusion systems that deliver real-time performance across text, voice, and coding applications.

Beyond speed and efficiency, diffusion models enable several other breakthroughs that Inception is building toward:

  • Built-in error correction to reduce hallucinations and improve response reliability
  • Unified multimodal processing to support seamless language, image, and code interactions
  • Precise output structuring for applications like function calling and structured data generation

The company was founded by professors from Stanford, UCLA, and Cornell, who led the development of core AI technologies, including diffusion, flash attention, decision transformers, and direct preference optimization. CEO Stefano Ermon is a co-inventor of the diffusion methods that underlie systems like Midjourney and OpenAI’s Sora. The engineering team brings experience from DeepMind, Microsoft, Meta, OpenAI, and HashiCorp.

Inception’s models are available via the Inception API, Amazon Bedrock, OpenRouter, and Poe – and serve as drop-in replacements for traditional autoregressive (AR) models. Early customers are already exploring use cases in real-time voice, natural language web interfaces, and code generation.
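As a rough illustration of what “drop-in replacement” can look like in practice, the sketch below calls a model through OpenRouter’s OpenAI-compatible endpoint. The model identifier and API key are placeholders (assumptions, not confirmed by this announcement); check OpenRouter’s catalog for Inception’s current listing.

    # Minimal sketch: using an OpenAI-compatible client against OpenRouter.
    # The model name below is a placeholder, not a confirmed identifier.
    from openai import OpenAI

    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",   # OpenAI-compatible gateway
        api_key="YOUR_OPENROUTER_API_KEY",
    )

    response = client.chat.completions.create(
        model="inception/mercury",                  # hypothetical identifier
        messages=[
            {"role": "user",
             "content": "Summarize diffusion LLMs in one sentence."}
        ],
    )

    print(response.choices[0].message.content)

Because the endpoint is OpenAI-compatible, switching from an autoregressive model is typically just a change of base URL and model name rather than a rewrite of application code.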

For more information, visit www.inceptionlabs.ai.

About Inception

Inception creates the world’s fastest, most efficient AI models. Today’s autoregressive LLMs generate tokens sequentially, which makes them painfully slow and expensive. Inception’s diffusion-based LLMs (dLLMs) generate answers in parallel. They are 10x faster and more efficient, making it possible for any business to create instant, in-the-flow AI solutions. Inception’s founders helped invent diffusion technology, which is the industry standard for image and video AI, and the company is the first to apply it to language. Based in Palo Alto, CA, Inception is backed by A-list venture capitalists, including Menlo Ventures, Mayfield, M12 (Microsoft’s venture fund), Snowflake Ventures, Databricks Investment, and Innovation Endeavors.

For more information, visit www.inceptionlabs.ai.
