Blog Archives
Axonis Emerges From Stealth With Federated AI Architecture That Brings AI to the Data
Incubated for DoD and Intelligence Community use cases, the startup announces commercial availability of its enterprise AI platform, delivering the fastest path to secure production AI
Axonis, the federated AI infrastructure platform that enables enterprises to run AI directly on distributed, sensitive, and real-time production data, today emerged from stealth with a production-ready architecture that brings AI to any data, wherever it lives. The company also announced the appointment of Todd Barr as Chief Executive Officer. Barr, formerly of Red Hat, Chainlink, and GitLab, will lead Axonis as it commercializes its DoD-hardened architecture for the enterprise market. Barr joins technical founders David Bauer, PhD, and Chris Yonclas, distributed systems and AI/ML experts who have supported the US Army, DARPA, and multiple US government cloud, intelligence, and digital transformation initiatives.
The Axonis platform introduces a new architectural model that brings AI to the data, eliminating the need for data migration or duplication and delivering secure, immediate, and scalable AI for training, fine-tuning, and deploying models across cloud, on-prem, and edge environments. Axonis complements and protects existing cloud and data lake investments by providing a parallel path that enables organizations to act on raw, real-time data immediately, without slowing down or rethinking their centralization strategy.
As Barr leads Axonis into its commercialization phase following two years of incubation within US defense contractor T2S Solutions (part of the Madison Dearborn portfolio), the company is now fully independent and positioned to address one of the biggest barriers to AI adoption: operationalizing AI on data that is too fragmented, regulated, or mission-critical to move.
“Enterprises are quickly realizing that the real barrier to AI isn’t modeling; it’s getting AI models into production,” said Matt Norton, Partner and Head of Technology & Government at Madison Dearborn. “Axonis solves this problem at the architectural level, and Todd has the go-to-market experience to bring that solution to market at scale.”
Data Centralization: Where AI Works in Theory, Not Practice
Enterprises continue to struggle with taking models from proof of concept to production because some of their most valuable data—transactions, customer data, logs, sensor streams, images, and other real-time signals—cannot be moved to a centralized environment. Regulatory constraints, cost, latency, and operational risk make traditional data centralization strategies slow, expensive, and often impossible to fully achieve. As a result, AI adoption stalls at the exact moment organizations attempt to deploy models into mission-critical workflows.
Axonis Architecture: Bring AI to the Data
Axonis solves this production bottleneck with a federated AI architecture that executes models directly where data is generated and governed. Instead of moving terabytes of sensitive data, Axonis moves lightweight models, enabling:
- Training, fine-tuning, and inference on live production data
- Streamlined ELT at runtime for both training and inference
- Real-time intelligence across distributed, centralized, and edge environments
- Secure execution without creating new data copies
- Model collaboration across organizations without sharing raw or sensitive data
This model-to-data approach dramatically reduces data movement, improves model freshness, and unlocks up to 12x faster time-to-AI value.
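As an illustration of the model-to-data approach described above, the sketch below shows a minimal federated-averaging loop in which each site trains on its own data locally and only model weights leave the site. This is a hedged, hypothetical example of the general technique; the function names are illustrative and are not Axonis APIs.

```python
# Minimal sketch of a model-to-data (federated averaging) loop.
# Hypothetical example only -- not the Axonis API. Each "site" keeps its
# raw data local; only model weights are exchanged and averaged.
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Train a logistic-regression model on one site's local data."""
    w = weights.copy()
    for _ in range(epochs):
        preds = 1.0 / (1.0 + np.exp(-X @ w))   # sigmoid predictions
        grad = X.T @ (preds - y) / len(y)      # gradient of the log loss
        w -= lr * grad
    return w                                   # only weights leave the site

def federated_round(global_weights, sites):
    """One round: push the model to every site, average the returned weights."""
    updates = [local_update(global_weights, X, y) for X, y in sites]
    return np.mean(updates, axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two simulated sites whose raw data never leaves local scope.
    sites = []
    for _ in range(2):
        X = rng.normal(size=(200, 3))
        y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(float)
        sites.append((X, y))
    w = np.zeros(3)
    for _ in range(10):
        w = federated_round(w, sites)
    print("learned weights:", w)
```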
“After the ChatGPT-style quick wins, enterprises are realizing that they have an architecture problem standing in the way of true business transformation with AI,” said Barr. “Axonis delivers a secure, AI-ready architecture for data and AI compute that will underpin and unlock the business transformation made possible by GenAI and agents, without having to run a big IT data centralization project to get there.”
Engineered for Defense, Built for Enterprise Scale
Axonis’ architecture originated inside T2S Solutions to support the U.S. Department of Defense and Intelligence Community, where systems must operate under extreme constraints:
- Intelligence must be close to the data
- Data security is survival
- Connectivity is limited or intermittent
- Data is chaotic and everywhere
- Every action must be governed and auditable
These requirements shaped Axonis into a high-assurance platform that meets and exceeds the security, sovereignty, and operational demands of industries such as healthcare, financial services, insurance, manufacturing, critical infrastructure, and the public sector.
“We’ve spent the better part of six years engineering the AI architecture in Axonis that will support environments where failure isn’t an option,” said David Bauer, Chief Technology Officer and technical co-founder of Axonis. “Those same capabilities—distributed execution, zero-trust security, and model-to-data design—are exactly what enterprises now need to safely and reliably run AI in production.”
Built to Integrate, Designed to Collaborate
Axonis is a cloud-native enterprise solution that fits seamlessly into existing enterprise data and AI ecosystems, including Snowflake, Databricks, MinIO, Iceberg, Jupyter, and leading AI frameworks. The platform also unlocks a new model of cross-organization collaboration: teams can share intelligence without sharing or pooling data, allowing each party to benefit from federated learning while keeping sensitive information fully protected. Because Axonis applies security at the data level, even agentic AI systems and chatbots cannot access or act on information they are not authorized to see, bringing a new level of control and data protection to enterprise AI deployments.
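To make the data-level security idea concrete, here is a hedged sketch of how per-row policy enforcement might sit between an agent and the data, so that a chatbot only ever sees records it is authorized to read. The class and function names are illustrative assumptions, not Axonis interfaces.

```python
# Illustrative sketch of data-level policy enforcement for an AI agent.
# Hypothetical names throughout -- not the Axonis API.
from dataclasses import dataclass

@dataclass
class Record:
    payload: dict
    classification: str   # e.g. "public", "restricted"

@dataclass
class Principal:
    name: str
    clearances: set       # classifications this caller may read

def authorized_view(records, principal):
    """Filter rows *before* they ever reach the model or agent."""
    return [r.payload for r in records if r.classification in principal.clearances]

records = [
    Record({"customer": "A", "balance": 100}, "restricted"),
    Record({"customer": "B", "region": "EU"}, "public"),
]
chatbot = Principal(name="support-bot", clearances={"public"})
# The agent only ever receives rows it is cleared for.
print(authorized_view(records, chatbot))   # -> [{'customer': 'B', 'region': 'EU'}]
```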
Retym Launches Out of Stealth with $180 Million to Drive AI Infrastructure Innovation
Spark Capital Leads Series D Round to Support Advancements in Coherent DSP Solutions
Retym (pronounced “Re-Time”), a leading semiconductor firm, has emerged from stealth with over $180 million raised across multiple rounds to drive AI infrastructure innovation. In the latest financing round, the company secured $75 million in Series D funding, led by Spark Capital.
Retym specializes in programmable coherent DSP (digital signal processing) solutions for cloud and AI infrastructure, a foundational technology that enables faster and more efficient transmission within and between AI data centers. With the investment, James Kuklinski, General Partner at Spark Capital, joins Retym’s Board of Directors. The Series D funding marks a significant step forward for the company as it charts a multi-generation product roadmap designed to address the growing demand for AI-driven network bandwidth.
Alongside Spark Capital, existing investors Kleiner Perkins, Mayfield, and Fidelity Investments all participated in the round, further demonstrating confidence in Retym’s vision and execution. The Series D funding will support scaling to production and continued product development advancements.
“As AI workloads continue to scale exponentially, they’re creating unprecedented demands on critical infrastructure,” said James Kuklinski, General Partner, Spark Capital. “Retym’s exceptional team is uniquely positioned to address these challenges, developing products that will enable significant advancements in performance and scale. We’re excited to partner with Retym as they execute on their vision to deliver solutions that will help unlock the next wave of AI innovation.”
“At Mayfield, we invest in exceptional innovators who create category-defining companies. Sachin Gandhi and the Retym team, along with board director Syed Ali, combine rare semiconductor expertise with entrepreneurial drive to redefine AI infrastructure,” said Navin Chaddha, Managing Partner at Mayfield. “We see tremendous opportunity in Retym’s pioneering work on the next generation AI-driven interconnect technologies that will power tomorrow’s data centers.”
“The quest for AI innovation is encountering constraints from current infrastructure technology, and the talented team at Retym is perfectly positioned to introduce new foundational technology to unleash its potential,” said Mamoon Hamid, Partner, Kleiner Perkins. “Retym’s visionary team is already tackling essential bottlenecks in AI infrastructure with a groundbreaking approach to coherent DSPs, and we have been proud partners with them from the start.”
Retym enters the market at a time when the boundaries between “inside-the-datacenter” and “datacenter interconnect” are blurring. According to Dell’Oro Group, global spending on datacenter compute and networking is expected to exceed $1 trillion annually within the next decade, making optical networking and DSP solutions essential for unlocking AI’s true potential. Retym’s innovative, high-performance DSPs are designed to lead the way in power and performance, addressing the complex demands of modern AI infrastructure, and fostering a vibrant, competitive ecosystem.
“Coherent optics and the DSPs that drive them are becoming increasingly crucial for AI-driven data centers as data volumes and performance requirements continue to rise,” said Vlad Kozlov, founder and CEO of the research firm LightCounting. “The ongoing evolution of this market highlights the need for innovative and efficient solutions. Retym is entering the market at an opportune time to potentially capitalize on that need.”
“As AI infrastructure demands intensify, Retym is well-positioned to lead in delivering cost-effective and power-efficient DSP innovation for the rapidly evolving landscape,” said Sachin Gandhi, Retym’s co-founder and CEO. “We’re excited to collaborate with customers and ecosystem partners to integrate our DSPs into high-speed transceiver designs. With groundbreaking product announcements ahead, this is only the beginning.”
Nebius announces oversubscribed strategic equity financing of USD 700 million to accelerate roll-out of full-stack AI infrastructure
Investment comes from a select group of institutional and accredited investors, including participation from Accel, NVIDIA, and certain accounts managed by Orbis Investments
Nebius Group N.V. (“Nebius Group”, “Nebius” or the “Company”; NASDAQ:NBIS), a leading AI infrastructure company, today announced that it has entered into definitive agreements for a USD 700 million private placement financing from a select group of institutional and accredited investors, including participation from Accel, NVIDIA, and certain accounts managed by Orbis Investments.
The financing supports Nebius’ previously announced plans to further build out its full-stack AI infrastructure – including large-scale GPU clusters, cloud platforms and tools and services for developers – for AI pioneers globally.
Arkady Volozh, founder and CEO of Nebius, said: “The foundation of our business is our expertise in building advanced technology infrastructure. We have demonstrated the scale of our ambitions, initiating an AI infrastructure build-out across two continents. This strategic financing gives us additional firepower to do it faster and on a larger scale. I’m grateful to our investors for the trust they have placed in us – our team is ready to deliver.”
Nebius’ full-stack AI infrastructure is being purpose-built to meet the demands of the global AI industry and leans on deep technical expertise across hardware and software, cloud engineering and machine learning (“ML”). Nebius’ core AI infrastructure business has around 400 engineers with decades of experience building world-class tech infrastructure, as well as an in-house large language model (“LLM”) R&D team.
The Company is pursuing an AI infrastructure build-out strategy which combines investments in build-to-suit data centers at greenfield sites with additional capacity deployments through colocations and the expansion of its existing facilities.
The AI-native Nebius GPU cloud is designed to manage the full ML lifecycle – from data processing and training through to fine-tuning and inference – all in one place. The recently launched Nebius AI Studio inference service expands the Company’s offering to app builders, with access to a range of state-of-the-art open-source models in a flexible, user-friendly environment at among the lowest price-per-token on the market.
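For app builders, an inference service of this kind is typically consumed through an OpenAI-compatible API. The sketch below shows what such a call might look like; the base URL, environment variable, and model name are assumptions for illustration only, so consult the Nebius AI Studio documentation for the actual values.

```python
# Hedged sketch of calling an OpenAI-compatible inference endpoint such as
# Nebius AI Studio. The base URL, env variable, and model name below are
# assumptions for illustration; check the provider's docs for real values.
import os
from openai import OpenAI

client = OpenAI(
    base_url="https://api.studio.nebius.ai/v1/",   # assumed endpoint
    api_key=os.environ["NEBIUS_API_KEY"],          # assumed environment variable
)

response = client.chat.completions.create(
    model="meta-llama/Meta-Llama-3.1-8B-Instruct",  # example open-source model
    messages=[{"role": "user", "content": "Summarize what a GPU cluster is."}],
)
print(response.choices[0].message.content)
```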
In the private placement, Nebius will issue 33,333,334 Class A shares at a price per share of USD 21.00, which represents an approximately 3% premium to the volume-weighted average price of the Class A shares since the resumption of trading on Nasdaq. The closing of the private placement is subject to customary closing conditions. Additional details regarding the private placement will be included in a Form 6-K to be filed by the Company with the Securities and Exchange Commission (the “SEC”).
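For reference, the headline figure follows directly from the issuance terms: 33,333,334 shares × USD 21.00 per share = USD 700,000,014, i.e. approximately USD 700 million.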
In connection with the private placement, the Board of Directors of the Company is delighted to grant observer rights to Matt Weigand, a Partner at Accel, and intends to nominate Mr. Weigand for election as a director at the 2025 Annual General Meeting of Shareholders.
In addition, having considered the strong trading dynamics and liquidity profile in the Company’s shares since the resumption of trading on Nasdaq on October 21, 2024, the Board has determined that a potential repurchase by the Company of its Class A shares is no longer warranted.
At the Company’s Annual General Meeting of Shareholders in August 2024, shareholders approved a general authorization for the Company to repurchase up to 81 million Class A shares within certain parameters, including a maximum repurchase price of $10.50 per share. This price represented the pro-rata amount of cash on the Company’s balance sheet following the final closing of the Company’s divestment of its Russian business, net of tax and transaction costs, and was not an indication of the value of the current business.
John Boynton, Chairman of the Board of Nebius, said: “The authorization to potentially repurchase shares was originally intended to provide legacy shareholders who wanted to exit our new business an opportunity to do so, especially in light of the prolonged suspension of trading on Nasdaq. Based on the strong level of investor engagement and technical dynamics which we have observed following the resumption of trading on Nasdaq, we believe that those shareholders who may have wanted to exit have had an opportunity to do so at a price higher than the maximum repurchase price authorized by shareholders. The Board has determined that the best way to maximize value for the Company’s shareholders is to invest our capital into our core AI infrastructure business, where the Company believes there is a substantial market opportunity.”
As a result of the combination of the strategic financing and the decision not to deploy any capital toward repurchasing Class A shares, the Company is in a position to narrow its previous guidance, and now expects to deliver an ARR by year-end 2025 of USD 750 million to USD 1.0 billion.
Goldman Sachs Bank Europe SE (“Goldman Sachs”) is acting as sole placement agent for the Company and no one else in connection with the private placement and will not be responsible to anyone other than the Company for providing the protections afforded to clients of Goldman Sachs nor for providing advice in connection with the private placement or any other matters referred to in this press release.
In addition (but except in connection with its role as placement agent on the private placement), Goldman Sachs is acting as financial advisor for the Company and no one else in connection with the Company’s review of strategic options and will not be responsible to anyone else for providing the protections afforded to clients of Goldman Sachs, or for giving advice in connection with this review or any other matter referred to in this press release.
The securities described above have not been registered under the Securities Act of 1933, as amended, and may not be offered or sold in the United States absent registration or an applicable exemption from registration requirements. The Company has agreed to file a resale registration statement with the SEC following the filing of its 2024 Annual Report on Form 20-F for purposes of registering the resale of the Class A shares described above.
Podcast: Stelia Launches DawnLink™, Bridging Classic Internet with AI Infrastructure
Stelia, a leading builder of foundational AI infrastructure, today announced the launch of DawnLink™, a pioneering digital bridge designed to seamlessly integrate classic Internet architecture with next-generation AI-driven networks.