As AI models scale, the bottlenecks are shifting away from raw compute. One of the biggest challenges to solve is the network fabric that moves data between accelerators. Upscale AI, a new company spun out from Celesta portfolio company Auradine, believes the next wave of AI progress hinges on a solution that is democratized, purpose-built, predictable, and high speed. We spoke with Upscale AI CEO Barun Kar about what’s broken in today’s AI networking space, why open standards matter, and how the company will take its ideas from invention to deployment.

At Upscale AI we’re redefining how large-scale AI systems connect and communicate. AI networking providers today are stuck adapting legacy data center networks. We are instead starting from scratch, building an open-standard, high‑performance networking architecture that’s purpose-built for AI workloads.
We recently announced our SkyHammer product architecture. It’s designed to remove bottlenecks in GPU clusters so data can move quickly enough to keep accelerators busy.
If I had to put it in an elevator pitch it would be this: we make massive AI compute accessible, predictable, and open to everyone.
A few things. First, we are open standards‑first. We’re contributing to and building around emerging standards in both scale‑up and scale‑out domains. This matters because it lets customers avoid vendor lock‑in and mix and match components. Most systems today are proprietary stacks from top to bottom, which creates both technical and economic complications: customers have no options and no leverage.
Second, we focus on true scale‑up. Outside of one major vendor, most offerings can’t connect hundreds of GPUs efficiently – the networking simply isn’t there – so companies are retrofitting cloud architectures to service AI. In contrast, we’re reimagining the network specifically for AI.
The third thing that I believe differentiates us is our team. Our leadership team has deep networking expertise across switching, routing, and security. This domain is in our DNA.
Networking has moved in waves. Ethernet ushered in the enterprise era dominated by companies like Cisco and Juniper. Then, pre‑COVID, the center of gravity shifted to cloud, where players like Arista and Broadcom thrived.
We believe we’re now at the start of the AI networking era. The biggest roadblock to scaling AI is no longer just compute; it’s the network. Today’s systems can’t move data fast enough between GPUs to match AI’s scale and speed. When you layer closed, proprietary technology on top of that challenge, you get inflexibility and cost pressure. Without rethinking the architecture, AI innovation hits a wall. That’s the opening for an AI‑focused networking company like Upscale.
Yes, we closed a seed round north of $100 million, which signals investors’ conviction about both the urgency and scale of the AI networking problem. We’re active in the open standards bodies and consortia I mentioned, contributing to near‑term product specs and interoperability efforts, with publications on the horizon.
On the execution side, we’re scaling the team and working with GPU vendors and hyperscalers to strengthen the ecosystem. The next phase is turning innovation into deployment. We recently launched our SkyHammer product at the OCP conference, and now that the product is in market, our laser focus is bringing open, AI‑native networking into production with our first wave of customers.
SkyHammer is a chip targeted at the scale‑up market, and beyond the chip we’ll deliver systems and rack‑level solutions. It’s a flexible, AI‑native architecture, designed from first principles to remove GPU‑to‑GPU communication bottlenecks, and it’s adaptable to the multiple open scale‑up standards that are coming to market.
The ethos is simple: if the compute is specialized for AI, the network should be too. Instead of bending a cloud switch to do AI’s job, SkyHammer treats data movement in AI clusters as the first‑class problem, aiming for deterministic performance and composability.
The goal is simple: make massive AI compute accessible, predictable, and open to everyone. - Barun Kar, CEO, Upscale AI
It’s about choice, interoperability, and longevity. With open standards like UEC, UALink, and SONiC, customers can build systems where components interoperate and can be swapped without ripping out the whole stack.
Open standards also democratize the ecosystem: multiple vendors can compete, which fosters innovation and creates pricing leverage for buyers. For AI, where the technology is evolving so quickly, locked‑down stacks slow you down and inflate costs. Open systems will keep the market and the technology moving.
The stars aligned for us with the market opportunity, the talent, and the investor support to bring the Upscale opportunity to life.
At Auradine we built a low‑power compute business that reached meaningful revenue quickly. As ChatGPT and large‑scale AI exploded, we saw a hole in the market adjacent to our core business: GPUs weren’t moving massive amounts of data among themselves efficiently. The answer, we believed, wasn’t a repurposed cloud network but a memory‑semantic, load/store‑oriented fabric designed specifically for AI. So a team began working on that problem.
Thankfully we already had a lot of networking expertise internally across our leadership team, with experience across switching, routing, and security at companies like Juniper, Palo Alto Networks, and leading low‑latency switch startup Innovium, later acquired by Marvell.
The AI networking opportunity was big enough that we opted to spin out the team as Upscale AI, and thankfully many fantastic investors shared our vision. We’re now executing with backing from investors including Mayfield, Maverick, Qualcomm Ventures, Stepstone, Celesta, and many others.
The team must come first. Build the core team early and hire the best of the best – people aligned on culture and mission.
Second, be in active conversations with customers well ahead of general availability; it’s the only way to ensure what you’re building is deployable and necessary.
Third, make execution relentless. The problems are hard and the cycles can be long. Having a clear North Star (in our case, scaling AI workloads) keeps you going when things get tough.
The best investors act as force multipliers. They open doors to large potential customers for early pilots and reference deployments. They help recruit top technical and leadership talent through their networks, which is critical in the first year. In our domain, introductions to silicon vendors and system integrators for co‑engineering are extremely valuable. Investors can also sponsor benchmarks and events that validate progress and create momentum.
In the near term, success means open AI‑native networking in production – customers deploying at scale with predictable performance and the ability to choose components.
Bigger picture and longer-term, it’s about democratization. Today, the largest AI clusters are concentrated with a few big tech players. With an open, purpose‑built network, we can broaden access, so researchers, startups, and enterprises can scale without sacrificing economics or flexibility.