CoreWeave is a specialized cloud provider focused on GPU-accelerated computing, offering large-scale infrastructure optimized for AI/ML workloads, visual effects rendering, and high-performance computing. The company operates one of the largest fleets of NVIDIA GPUs in the cloud, providing on-demand access to compute resources through Kubernetes-based orchestration. CoreWeave went public on the Nasdaq in March 2025 and serves major AI companies, enterprises, and research institutions requiring massive parallel compute capacity.
Founded: 2017
Company size: 1,001-5,000 employees
Headquarters: Roseland, New Jersey, USA
Funding: IPO
On-demand access to latest NVIDIA GPUs like A100s and H100s on bare metal for AI training and inference.
Provides bare-metal GPU clusters with minimal virtualization overhead for maximum performance in AI workloads.
Managed Kubernetes with GPU support for deploying and scaling AI workloads using familiar container tools.
Unifies Kubernetes and SLURM for flexible scheduling, rapid spin-up, and visibility across AI workloads.
Exascale object storage optimized for AI with GPU-local caching and up to 7 GiB/s per GPU throughput.
Hyper-optimized file storage delivering up to 2 GB/s/GPU throughput with 99.9% uptime for AI models.
High-bandwidth, low-latency InfiniBand and RoCE interconnects for near-linear scaling in distributed training.
Sub-1 minute boot times for rapid experiment cycles and reduced provisioning delays.
Supports 100k+ GPU clusters for supercomputing-scale AI training and inference.
Storage with eleven nines (99.999999999%) of durability, ensuring reliability for large-scale AI data access.
Enables continuous model training without data egress fees using optimized storage.
Optimized infrastructure for HPC tasks like AI training, VFX rendering, and simulations.
Tailor VMs with specific CPU, RAM, and GPU combinations to match exact workload requirements.
Enterprise-grade security features integrated into the AI-optimized cloud platform.
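To make the managed-Kubernetes feature above concrete, here is a minimal sketch of how a GPU workload is typically requested on any Kubernetes cluster with NVIDIA's device plugin installed. This is illustrative rather than CoreWeave-specific: `nvidia.com/gpu` is the standard resource name exposed by the NVIDIA device plugin, and the pod name and container image below are placeholders.

```python
import json

def gpu_pod_manifest(name: str, image: str, gpus: int) -> dict:
    """Build a minimal Kubernetes Pod manifest requesting NVIDIA GPUs.

    Illustrative only: `nvidia.com/gpu` is the resource name exposed by
    the standard NVIDIA device plugin; the name and image are placeholders.
    """
    return {
        "apiVersion": "v1",
        "kind": "Pod",
        "metadata": {"name": name},
        "spec": {
            "restartPolicy": "Never",
            "containers": [
                {
                    "name": "trainer",
                    "image": image,
                    # The scheduler will only place this pod on a node with
                    # `gpus` unallocated GPUs; the device plugin attaches them.
                    "resources": {"limits": {"nvidia.com/gpu": gpus}},
                }
            ],
        },
    }

manifest = gpu_pod_manifest("llm-finetune", "nvcr.io/nvidia/pytorch:24.01-py3", 8)
print(json.dumps(manifest, indent=2))
```

Applying a manifest like this (e.g. via `kubectl apply`) is all that is needed to schedule a containerized training job onto GPU hardware; the same spec works with familiar container tooling on any conformant cluster.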
Common questions about CoreWeave features, pricing, and capabilities
CoreWeave offers a massive scale of NVIDIA GPUs, ranging from H100s and A100s for heavy training to L40S and RTX 6000 Ada Generation cards for inference and rendering. This variety allows users to match their specific workload requirements with the most cost-effective and performant hardware available.
CoreWeave is built on a bare-metal Kubernetes stack that eliminates the virtualization overhead found in legacy clouds. This architecture provides significantly faster networking and disk I/O, ensuring that your AI models train faster and inference requests are processed with lower latency.
CoreWeave's orchestration layer allows for the rapid provisioning of multi-node clusters in minutes rather than hours. Our high-speed InfiniBand interconnects ensure that these nodes communicate with the low latency required for distributed training across hundreds of GPUs.
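"Near-linear scaling" can be made measurable. The small helper below computes scaling efficiency, the fraction of ideal linear speedup actually achieved when going from one node to many; the throughput numbers in the example are hypothetical, purely for illustration.

```python
def scaling_efficiency(throughput_1: float, throughput_n: float, n: int) -> float:
    """Fraction of ideal linear speedup achieved when scaling from 1 to n nodes.

    1.0 means perfectly linear scaling; communication overhead and
    interconnect latency push real-world values below that.
    """
    return throughput_n / (n * throughput_1)

# Hypothetical numbers: one node sustains 1,000 samples/s,
# and a 64-node cluster sustains 60,000 samples/s.
eff = scaling_efficiency(1_000, 60_000, 64)
print(f"{eff:.1%}")  # 93.8% of ideal linear scaling
```

The closer this figure stays to 100% as node count grows, the better the interconnect is hiding communication cost during distributed training.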
Yes, we offer a variety of Cloud Templates and container images optimized for tasks like LLM fine-tuning, stable diffusion, and 3D rendering. These templates come pre-loaded with the necessary drivers and libraries, significantly reducing the time from setup to execution.
Yes, CoreWeave provides a dedicated Terraform provider and full Kubernetes API access, allowing DevOps teams to automate infrastructure deployment. This ensures that your GPU resources can be integrated into existing CI/CD pipelines and managed as code for better scalability.
CoreWeave is fully compatible with all major ML frameworks including PyTorch, TensorFlow, and JAX. Because we provide a standard Kubernetes environment, you can run any containerized workload or utilize pre-configured NVIDIA NGC containers to get started immediately.
We provide flexible billing options including transparent hourly on-demand rates for burstable workloads and reserved instance pricing for long-term projects. Reserved instances offer significant discounts for users who need guaranteed capacity for sustained AI training or production inference.
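The trade-off between on-demand and reserved pricing comes down to a break-even utilization point. The sketch below shows that arithmetic; all rates in it are hypothetical, since actual CoreWeave pricing is quote-based.

```python
def breakeven_hours(on_demand_rate: float, reserved_rate: float,
                    committed_hours: int) -> float:
    """Hours of actual use per period at which a reserved commitment
    becomes cheaper than paying on-demand.

    All rates are hypothetical illustrations, not real CoreWeave prices.
    """
    reserved_total = reserved_rate * committed_hours
    return reserved_total / on_demand_rate

# Hypothetical: $4.00/hr on-demand vs. $2.60/hr reserved over a 730-hour month.
hours = breakeven_hours(4.00, 2.60, 730)
print(f"Reserved wins if you use more than {hours:.0f} hours/month")
```

In other words, the steeper the reserved discount, the lower the utilization needed before guaranteed capacity pays for itself.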
Unlike many large cloud providers, CoreWeave offers a much more predictable cost model with zero or significantly reduced egress fees. This makes it highly economical for data-intensive machine learning projects that require moving large datasets or model weights frequently.
CoreWeave maintains SOC 2 Type II compliance and adheres to strict data privacy standards to ensure your proprietary models and datasets are protected. Our data centers feature enterprise-grade physical security and network isolation to prevent unauthorized access to your compute resources.
No, CoreWeave provides a private cloud environment where your data and model weights remain exclusively yours. We do not use customer data for any internal model training, ensuring that your intellectual property and competitive advantages are fully preserved.
Enterprise customers have access to dedicated solutions architects and 24/7 technical support from engineers who specialize in GPU infrastructure. We provide Slack-based support channels and rapid response times to minimize downtime for your production AI workloads.
Our comprehensive documentation portal includes step-by-step migration guides, API references, and best practices for transitioning workloads from legacy clouds. We also offer professional services to help large organizations architect their move to a GPU-native cloud environment.
4 GPUs, 186GB VRAM, 144 vCPUs, 960GB RAM: contact for pricing (on-demand, per hour)
8 GPUs, 180GB VRAM, 128 vCPUs, 2048GB RAM: contact for pricing (on-demand, per hour)
8 GPUs, 80GB VRAM, 128 vCPUs, 2048GB RAM: contact for pricing (on-demand, per hour)
96 vCPUs, 768GB RAM: contact for pricing (on-demand and spot, per hour)
High-performance object storage: contact for pricing (per GB per month)
Standard object storage: contact for pricing (per GB per month)
Public IP address allocation: starting at $4.00/month (monthly fee per IP)
Dedicated Direct Connect 10G: starting at $1,250.00/month
Enterprise Blackwell Infrastructure: contact sales for pricing