hosted·ai, a GPU infrastructure software company, has raised a $19 million seed round led by Creandum – the European VC firm behind early investments in Spotify and Klarna. Repeat Ventures, People Ventures, and z21 Ventures also participated. The round funds what the company describes as a GPU virtualization platform for service providers: pooling, multi-tenancy, and overcommit for GPU infrastructure, sold to neoclouds, hosting companies, telcos, and CSPs. CEO Ditlev Bredahl announced the round on LinkedIn on March 19, 2026.

The pitch is direct: the GPU cloud market does not have a scarcity problem. It has a waste problem. Most GPU capacity sits idle – paid for but not doing useful work – because the software layer between hardware and workloads has not caught up to demand. hosted·ai’s platform virtualizes GPUs the way VMware virtualized CPUs two decades ago, enabling providers to run multi-tenant workloads on shared GPU hardware and push utilization rates significantly higher.

What makes this more than a pitch deck is the team behind it. All four co-founders – CEO Ditlev Bredahl, CTO Julian Chesterfield, Narendar Shankar, and James Withall – previously built OnApp, the cloud infrastructure platform that powered 6,000+ deployments across 93 countries before being acquired by Virtuozzo in 2021. Before OnApp, Bredahl ran UK2Group, one of the UK’s largest hosting companies, through its $77 million LDC buyout. This is the same team, solving the same category of problem – giving service providers the software to compete with hyperscalers – but for GPUs instead of CPUs.

What hosted·ai Actually Does

The core product is a software platform that turns bare-metal GPU servers into multi-tenant GPU cloud infrastructure. Traditional GPU clouds allocate entire physical GPUs to single tenants – the equivalent of running one workload per physical server in the pre-VMware era. hosted·ai creates virtual GPUs from pooled physical hardware, enabling multiple customer workloads to share the same GPU while remaining isolated.

The key capabilities:

  • GPU pooling – Workloads are distributed across GPU pools dynamically, maximizing utilization of each card. Customers are billed for GPU VRAM and compute (TFLOPs) consumed, not for entire physical GPUs sitting partially idle.
  • Multi-tenancy – Multiple customer workloads run on the same physical GPU with isolation. Users see virtual GPUs, unaware they are sharing hardware.
  • GPU overcommit – Configurable share ratios allow providers to oversell GPU capacity, the same way CPU overcommit works in traditional virtualization. hosted·ai claims this can increase margins by 5x or more compared to GPU passthrough; a toy sketch of the mechanism follows this list.
  • Bare-metal deployment – The platform deploys to bare metal in 24 hours. It can run as a full hyperconverged stack (VMs plus GPU-as-a-Service) or as a standalone GPU layer that integrates with existing infrastructure.
  • Billing integration – REST API integration with WHMCS and HostBill, the two dominant billing platforms in the hosting industry.
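
To make the pooling and overcommit mechanics concrete, here is a minimal, illustrative sketch of how a scheduler might carve VRAM-sliced virtual GPUs out of a physical pool under a configurable overcommit ratio. It is not hosted·ai's implementation; the class and method names are hypothetical, and the sketch only shows the accounting involved.

```python
# Illustrative toy allocator: VRAM-sliced virtual GPUs carved from a pooled
# set of physical cards, with a configurable overcommit ratio. The names
# (PhysicalGPU, GPUPool, allocate) are hypothetical, not hosted·ai's API.
from dataclasses import dataclass

@dataclass
class PhysicalGPU:
    name: str
    vram_gb: int            # physical VRAM on the card
    allocated_gb: int = 0   # VRAM promised to tenants so far

@dataclass
class GPUPool:
    gpus: list[PhysicalGPU]
    overcommit: float = 1.0  # 1.0 ~ passthrough; 2.0 = sell twice the physical VRAM

    def allocate(self, tenant: str, vram_gb: int) -> str | None:
        """Place a virtual GPU on the least-loaded card that still has headroom."""
        for gpu in sorted(self.gpus, key=lambda g: g.allocated_gb):
            sellable_gb = gpu.vram_gb * self.overcommit
            if gpu.allocated_gb + vram_gb <= sellable_gb:
                gpu.allocated_gb += vram_gb
                return f"{tenant}: {vram_gb} GB vGPU on {gpu.name}"
        return None  # pool exhausted at this overcommit ratio

pool = GPUPool([PhysicalGPU("h100-0", 80), PhysicalGPU("h100-1", 80)], overcommit=1.5)
print(pool.allocate("tenant-a", 40))  # fits within physical VRAM
print(pool.allocate("tenant-b", 40))
print(pool.allocate("tenant-c", 40))  # admitted only because of overcommit
```

With the ratio at 1.0 the third request would be refused; raising it is what lets a provider sell more virtual capacity than physically exists, on the bet that tenants rarely peak at the same time.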

The technology builds on hosted·ai’s acquisition of Sunlight.io, a company founded by Julian Chesterfield that developed an ultra-lightweight hypervisor (NexVisor) for edge and hyperconverged infrastructure. Chesterfield’s background is worth noting: he worked on the Xen hypervisor project at Cambridge and later served as Storage Architect at XenSource (acquired by Citrix) – the virtualization technology that underpinned the early days of AWS and much of the cloud industry.

The GPU Waste Problem

The framing of GPU infrastructure as a waste problem rather than a scarcity problem is hosted·ai’s central argument. According to the company, GPU cloud providers typically run passthrough configurations where each customer gets a dedicated physical GPU. If a customer’s workload uses 30% of the GPU’s capacity, the other 70% sits idle. The provider still pays for power, cooling, and depreciation on the entire card.

This is the same inefficiency that existed in the CPU server market before virtualization. In the early 2000s, physical servers routinely ran at 10–15% utilization. VMware’s virtualization layer allowed multiple workloads to share a single server, pushing utilization above 60–70% and fundamentally changing the economics of data center computing. That shift created the modern cloud industry.

The GPU market faces the same dynamic. NVIDIA H100 GPUs cost $25,000–$40,000 per card. H200s and B200s are more expensive still. At these price points, the difference between 30% utilization and 80% utilization is the difference between thin margins and strong margins. For a neocloud operator running thousands of GPUs, the financial impact of even a 20-percentage-point improvement in utilization is measured in millions of dollars annually.
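
A back-of-the-envelope calculation shows the order of magnitude. The fleet size, hourly rate, and utilization figures below are illustrative assumptions, not hosted·ai's or any provider's actual numbers:

```python
# Back-of-the-envelope fleet economics. Every input is an illustrative
# assumption; only the direction of the argument matters.
FLEET_SIZE = 2_000        # GPUs run by a hypothetical neocloud
RATE_PER_GPU_HOUR = 3.00  # assumed revenue per busy GPU-hour (USD)
HOURS_PER_YEAR = 8_760

def annual_revenue(utilization: float) -> float:
    """Fleet revenue at a given average utilization (0.0-1.0)."""
    return FLEET_SIZE * HOURS_PER_YEAR * utilization * RATE_PER_GPU_HOUR

low, high = annual_revenue(0.30), annual_revenue(0.50)
print(f"30% utilization: ${low:,.0f}/year")
print(f"50% utilization: ${high:,.0f}/year")
print(f"A 20-point gain is worth ${high - low:,.0f}/year on the same hardware")
```

Under these assumptions the 20-percentage-point improvement is worth roughly $10.5 million a year – the "millions of dollars annually" claim in concrete form.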

hosted·ai is not the only company that has identified this problem. NVIDIA acquired Run:ai – a Kubernetes-based GPU workload orchestration platform – for approximately $700 million in 2024, and plans to open-source it. But Run:ai was designed for enterprise IT departments managing internal GPU clusters, not for service providers selling GPU compute to external customers. hosted·ai’s differentiation is that it is purpose-built for the hosting and neocloud channel: multi-tenant by design, with billing integration, customer isolation, and the overcommit capabilities that service providers need to make GPU economics work.

The Team’s Track Record

The founding team’s history in the hosting industry is directly relevant to whether hosted·ai can execute.

Ditlev Bredahl became CEO of UK2Group in 2006, leading the UK hosting company through a series of acquisitions (Another.com, Stargate/Resell.biz, midPhase, WestHost) before a $77 million management buyout by Lloyds Development Capital in 2011. UK2Group was eventually absorbed into THG Ingenuity Cloud Services. After UK2, Bredahl founded OnApp – a cloud infrastructure platform that enabled service providers to build and sell IaaS, public cloud, and CDN services. OnApp grew to 6,000+ deployments in 93 countries before being acquired by Virtuozzo in July 2021.

Julian Chesterfield (CTO) holds a PhD from Cambridge, worked on the Xen hypervisor project, and served as Storage Architect at XenSource (the Xen-based company acquired by Citrix whose technology powered early AWS). He served as Chief Scientific Officer at OnApp and later founded Sunlight.io, which raised a $6 million Series A from OpenOcean and Bosch Venture Capital before being acquired by hosted·ai.

Narendar Shankar was President at OnApp, overseeing strategy and commercial operations. His background also includes roles at Expedia Group and Loadsmart. James Withall served as CTO at OnApp with a background in cloud platform engineering.

The pattern is clear: the team built software that helped hosting providers compete in the CPU virtualization era (OnApp), and is now building software to help them compete in the GPU era (hosted·ai). The distribution channel – service providers, hosting companies, colocation operators – is the same. The technology layer is different.

packet·ai and GPUaaS.com

hosted·ai runs two additional operations that serve both as revenue streams and as proof of concept for the platform:

packet·ai is hosted·ai’s own neocloud – a GPU cloud service built on its own platform. It offers NVIDIA B200 instances at $2.25/hour, H200 at $1.50/hour, and RTX 6000 Pro (96GB) at $0.66/hour, with pay-per-second billing, no contracts, and a claimed signup-to-SSH time under five minutes. The pricing is aggressive – hosted·ai claims 50%+ below market rates, citing comparisons to Hyperstack, Sesterce, and CoreWeave. packet·ai runs 500+ GPUs across multiple clusters and serves as a live demonstration that the platform works in production.
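
For a sense of what pay-per-second billing means at those published rates, here is the arithmetic for a short job (the job length is an arbitrary example; the hourly rates are the ones listed above):

```python
# Per-second proration at packet·ai's published hourly rates (cited above).
# The 37-minute job is an arbitrary example, not a benchmark.
RATES_PER_HOUR = {"B200": 2.25, "H200": 1.50, "RTX 6000 Pro": 0.66}

def job_cost(gpu: str, seconds: int) -> float:
    """Cost of a run billed by the second at the hourly list price."""
    return RATES_PER_HOUR[gpu] / 3600 * seconds

run_seconds = 37 * 60  # a 37-minute run
for gpu, rate in RATES_PER_HOUR.items():
    print(f"{gpu}: ${job_cost(gpu, run_seconds):.2f} (vs ${rate:.2f} if rounded up to an hour)")
```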

GPUaaS.com is a wholesale GPU matchmaking service that connects enterprise demand with GPU supply from providers running the hosted·ai platform. It operates as a marketplace: businesses specify their GPU requirements, GPUaaS.com matches them with vetted providers, and quotes are delivered within hours. The model benefits both hosted·ai’s provider customers (who get deal flow) and enterprises (who get competitive GPU pricing without navigating a fragmented market).

This three-layer approach – software platform for providers, its own neocloud as proof, and a marketplace connecting supply with demand – gives hosted·ai multiple paths to revenue and a feedback loop that improves all three.

Why This Matters for Hosting Companies

The GPU cloud market is currently dominated by purpose-built neoclouds (CoreWeave, Lambda, Nscale) and hyperscalers (AWS, Azure, Google Cloud). Traditional hosting companies – the thousands of operators who built businesses on CPU-based VPS, shared hosting, and dedicated servers – have largely been shut out of GPU compute because the capital requirements are enormous and the software stack is immature.

hosted·ai’s pitch to this market is that the software layer is now available. A hosting company or colocation provider that can source GPU hardware – whether purchased, leased, or financed – can use hosted·ai’s platform to offer multi-tenant GPU cloud services with the same kind of automation and billing integration they already use for CPU-based products. The WHMCS and HostBill integrations are not accidental; they signal that hosted·ai is specifically targeting the hosting industry’s existing operational workflows.
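
In practice, "billing integration" usually means a provisioning hook: when a customer pays for a GPU plan in WHMCS or HostBill, the billing panel calls the platform's API to create the virtual GPU. The sketch below is hypothetical; the endpoint, fields, and response shape are invented for illustration and are not hosted·ai's actual API.

```python
# Hypothetical provisioning hook. The endpoint and payload are invented for
# illustration; they are not hosted·ai's real API.
import requests

def provision_vgpu(api_base: str, token: str, customer_id: str,
                   vram_gb: int, tflops: int) -> dict:
    """Ask the GPU platform to create a virtual GPU for a newly paid order."""
    resp = requests.post(
        f"{api_base}/v1/vgpus",  # hypothetical endpoint
        headers={"Authorization": f"Bearer {token}"},
        json={"customer": customer_id, "vram_gb": vram_gb, "tflops": tflops},
        timeout=30,
    )
    resp.raise_for_status()
    return resp.json()

# A WHMCS or HostBill provisioning module would call something like this when
# an order is paid, and a matching teardown call on suspension or cancellation.
```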

Several dynamics make this relevant now:

  • GPU supply is loosening – The acute GPU shortage of 2023–2024 is easing as NVIDIA scales production and next-generation chips (B200, GB200) enter the market. More available hardware means more potential providers, which means more demand for the software that makes GPU hosting economically viable.
  • Neocloud consolidation is coming – Vultr has predicted that by 2027, a handful of GPU providers will control 80%+ of the market. If that consolidation plays out, the survivors will be those with the best economics – and better utilization rates are the most direct path to better economics.
  • NVIDIA’s Run:ai open-sourcing – When NVIDIA open-sources Run:ai, it will provide free GPU orchestration software to the market. But Run:ai is designed for enterprise Kubernetes environments, not for service providers selling to external customers. hosted·ai is betting that the hosting channel needs a different product – one built around multi-tenancy, billing, and provider economics rather than internal cluster management.
  • The margin problem is real – GPU hardware depreciates rapidly. An H100 purchased today will be worth significantly less in 18 months when the next generation is widely available. Providers running at 30% utilization on depreciating hardware are losing money. Providers running at 70%+ utilization on the same hardware are profitable. The difference is software; a simplified version of the arithmetic is sketched below.
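
A simplified per-card version of that arithmetic, with every input an illustrative assumption (a purchase price inside the range quoted earlier, a straight-line write-down, a flat rental rate), looks like this:

```python
# Simplified single-GPU margin model. Every input is an illustrative
# assumption; real cost structures (power pricing, networking, staff,
# financing) are more complicated.
PURCHASE_PRICE = 30_000   # assumed H100 price, inside the range quoted earlier (USD)
WRITE_DOWN_MONTHS = 36    # assumed straight-line depreciation horizon
OPEX_PER_MONTH = 300      # assumed power/cooling/space per card (USD)
RATE_PER_HOUR = 3.00      # assumed revenue per busy GPU-hour (USD)
HOURS_PER_MONTH = 730

def monthly_margin(utilization: float) -> float:
    """Revenue minus straight-line depreciation and operating cost, per card."""
    revenue = RATE_PER_HOUR * HOURS_PER_MONTH * utilization
    depreciation = PURCHASE_PRICE / WRITE_DOWN_MONTHS
    return revenue - depreciation - OPEX_PER_MONTH

for u in (0.30, 0.70):
    print(f"{u:.0%} utilization: {monthly_margin(u):+,.0f} USD/month per GPU")
```

Under these assumptions the same card loses money at 30% utilization and turns a modest profit at 70%; different inputs move the breakeven point, but the shape of the argument stays the same.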

The $19 Million Question

A $19 million seed round is substantial but not extraordinary in the current AI infrastructure landscape, where CoreWeave’s backlog exceeds $66 billion and Nscale has raised $4.7 billion. The comparison, however, is misleading. CoreWeave and Nscale are capital-intensive hardware businesses that need billions to buy GPUs. hosted·ai is a software company that sells to hardware operators. Its capital requirements are fundamentally different – closer to VMware’s early model than to a cloud provider’s.

Creandum’s involvement as lead investor is notable: the firm’s portfolio includes Spotify, Klarna, and Trade Republic, and it typically invests after a company has paying customers and a working product, not at the concept stage.

The risk is execution and timing. The OnApp team proved it can build and sell infrastructure software to service providers at scale. But the GPU market moves faster than the CPU market did, GPU architectures change with each NVIDIA generation, and the open-sourcing of Run:ai could commoditize parts of the GPU orchestration stack. hosted·ai’s advantage is its focus on the hosting and neocloud channel – a distribution network the founders have spent 25 years building. Whether that channel adopts GPU services fast enough, and whether hosted·ai’s software stays ahead of the open-source alternatives, will determine whether this becomes another OnApp-scale success or remains a niche player in an increasingly crowded market.