Cloud Extension Guide

How Nutanix Cloud Clusters extend an on-premises Nutanix environment into AWS without changing the platform

NC2 is the Nutanix Cloud Platform running on AWS bare-metal hosts. Same AOS, same AHV, same Prism Central, with native AWS networking integration. This guide treats NC2 as an architecture in its own right rather than as a VMware exit option (which it is not), and frames the four use cases that justify it: cloud disaster recovery, on-demand burst, data center exit, and regional expansion.

Where NC2 fits alongside Amazon VMware Cloud on AWS (VMC on AWS), AWS native services, and the on-premises AIM cluster is covered explicitly.


Key Takeaways

  • NC2 is not a VMware exit option - it runs Nutanix AHV on AWS bare-metal hosts and extends existing Nutanix environments rather than replacing vSphere workloads.
  • The four use cases that justify NC2 are cloud disaster recovery, on-demand burst capacity, data center exit without replatforming, and regional expansion without new infrastructure.
  • NC2 builds its cluster storage from local NVMe, not EBS, to preserve Nutanix DSF performance characteristics - this architectural choice maintains operational consistency with on-premises clusters.
  • AWS Direct Connect is the longest lead-time dependency in NC2 deployments, typically taking weeks to provision and validate before production replication can begin.
  • Most AIM environments use both NC2 and VMC on AWS together: NC2 for Nutanix workloads requiring cloud extension, VMC on AWS for VMware-dependent applications needing a bridge architecture.

The framing that costs evaluations time

NC2 is often misclassified during AIM and VMware exit evaluations. It is not a VMware exit destination. It is not Amazon VMware Cloud on AWS. It is not a traditional VM lifted onto AWS native services. Treating it as any of those produces an architecture that does not work or a procurement decision that does not make sense.

The four use cases that justify NC2

Most NC2 engagements involve one or two of these. Few involve all four. Naming the use case at the start of the conversation is the single most useful step in scoping the architecture, the network design, the replication topology, and the cost model.

Disaster Recovery to Cloud

The most common reason organizations adopt NC2. On-premises Nutanix workloads replicate to an NC2 cluster in an AWS region. In a DR event, workloads boot in the cloud cluster with the same network identities, the same storage, and the same management surface. No replatforming, no format conversion. Replication is native AOS-to-AOS via Nutanix Protection Domains or Nutanix Disaster Recovery, not a third-party tool. RPO and RTO are predictable because the platforms are identical end-to-end.

On-Demand Burst and Elastic Capacity

Workloads that require occasional capacity spikes (quarter-end batch processing, dev/test surges, seasonal demand) can extend into NC2 without permanent on-premises buildout. Workloads migrate to NC2 during the surge and return to on-premises afterward, or run permanently in NC2 if the economics favor it. Sizing is driven by peak capacity, with on-demand or Reserved Instance pricing aligned to the usage profile.
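Peak-driven sizing of this kind reduces to simple arithmetic. The sketch below is an illustrative sizing helper; the node specs (64 vCPUs, 512 GiB per bare-metal host) and the surge figures are placeholder assumptions, not NC2 instance quotes.

```python
# Hypothetical burst-capacity sizing sketch. Node specs and workload
# figures are illustrative placeholders, not NC2 hardware quotes.

import math

def nodes_needed(peak_vcpus: int, peak_ram_gib: int,
                 vcpus_per_node: int = 64, ram_per_node_gib: int = 512,
                 n_plus_one: bool = True) -> int:
    """Size the cluster to the peak of the burst window."""
    n = max(math.ceil(peak_vcpus / vcpus_per_node),
            math.ceil(peak_ram_gib / ram_per_node_gib))
    return n + 1 if n_plus_one else n  # spare node for resiliency

# Quarter-end batch surge: 400 vCPUs / 2.5 TiB RAM above steady state.
print(nodes_needed(400, 2560))  # -> 8 (7 for capacity + 1 spare)
```

Whether the result points at on-demand or Reserved Instance pricing then follows from how many hours per quarter the surge cluster actually runs.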

Data Center Exit Without Replatforming

Organizations exiting a colocation facility or shutting down a data center, where workloads are already on Nutanix, can move the entire estate to NC2 without changing the platform. The migration is a Nutanix-to-Nutanix replication, not a replatforming exercise. The operational team continues to manage the same Prism Central and the same VMs in the same way; the only thing that changes is the location.

Regional Expansion Without New Data Centers

Organizations expanding into new geographic regions where they have no data center presence can stand up NC2 in the relevant AWS region. The operational model is identical to the existing on-premises Nutanix environment. No new colocation contract, no new hardware procurement, no new operational learning curve for the team running it.

How IVI deploys NC2

NC2 deployment runs through the same engagement methodology as on-premises Nutanix work, with AWS-specific additions. The single longest lead-time item is Direct Connect; this is the dependency to start early.

Use Case and Architecture Definition (1 to 2 weeks)

Workload inventory and classification (cloud DR, extension, exit, regional). Target AWS region selection driven by latency, regulatory considerations, and cost. Network architecture design covering VPC layout, Direct Connect requirements, and Transit Gateway integration. Replication strategy: which workloads, what RPO, what RTO, and whether async, NearSync, or sync. Procurement path selection: Nutanix-billed (bare metal and software combined) or AWS-billed bare metal with Nutanix software licensed separately.
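The async/NearSync/sync choice falls out of the RPO target. The sketch below captures the commonly cited Nutanix ranges (synchronous for zero RPO, NearSync for sub-hour targets, async at hourly-or-looser granularity); the exact supported thresholds vary by AOS version and should be verified before they drive a design.

```python
# Illustrative RPO-to-replication-mode mapping. Thresholds reflect
# commonly cited Nutanix ranges; verify against the AOS version in
# use before relying on them.

def replication_mode(rpo_seconds: int) -> str:
    if rpo_seconds == 0:
        return "synchronous"  # zero data loss, metro-style
    if rpo_seconds < 3600:
        return "NearSync"     # sub-hour RPO targets
    return "async"            # hourly-or-looser snapshot schedules

print(replication_mode(0))      # -> synchronous
print(replication_mode(900))    # -> NearSync
print(replication_mode(14400))  # -> async
```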

AWS Foundation

VPC and subnet provisioning. Direct Connect provisioning and validation, which is the long-lead-time item; circuits typically take weeks to deliver. Transit Gateway and routing configuration. IAM roles and security groups. AWS Organizations and account structure for the NC2 environment.
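The VPC layout step is ordinary CIDR arithmetic. The sketch below carves a VPC range into per-AZ subnets with Python's stdlib ipaddress module; the 10.42.0.0/16 range, /20 subnet size, and AZ names are placeholder assumptions, and NC2 publishes its own subnet requirements that govern the real design.

```python
# Sketch of the VPC layout step: carve a placeholder VPC CIDR into
# per-AZ subnets. Ranges and AZ names are illustrative assumptions.

import ipaddress

vpc = ipaddress.ip_network("10.42.0.0/16")
# /20 subnets give 4,096 addresses each (AWS reserves 5 per subnet).
subnets = list(vpc.subnets(new_prefix=20))[:4]

for az, net in zip(["us-east-1a", "us-east-1b",
                    "us-east-1c", "us-east-1d"], subnets):
    print(az, net)
```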

NC2 Cluster Deployment

Cluster provisioning via the NC2 portal or Terraform integration. Prism Central registration of the cloud cluster so that on-premises and cloud Nutanix are managed together. Network configuration validation covering cluster connectivity to on-premises and to other AWS VPCs. Storage and data services configuration aligned to the workload profile.

Replication and Protection Configuration

Protection Domain or Nutanix Disaster Recovery configuration for in-scope workloads. Replication schedule and bandwidth allocation tuned to the Direct Connect capacity and the RPO target. DR runbook authoring and validation. Test failover execution to validate the architecture end-to-end before production protection turns on.
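Tuning the schedule against Direct Connect capacity is a back-of-the-envelope feasibility check: the snapshot delta must drain across the circuit before the next RPO window closes. The sketch below illustrates that check; the change rate, circuit size, and 60% usable-bandwidth fraction are assumed inputs, not measurements.

```python
# Back-of-the-envelope check that a Direct Connect circuit can carry
# the replication delta within the RPO window. All figures are
# assumed inputs, not measurements from a real environment.

def replication_fits(delta_gib_per_cycle: float, rpo_minutes: int,
                     link_gbps: float,
                     usable_fraction: float = 0.6) -> bool:
    """True if one cycle's delta drains before the next RPO window."""
    delta_bits = delta_gib_per_cycle * 8 * 1024**3
    usable_bps = link_gbps * 1e9 * usable_fraction
    transfer_minutes = delta_bits / usable_bps / 60
    return transfer_minutes <= rpo_minutes

# 50 GiB of change per hourly cycle over a 1 Gbps circuit, 60% usable:
print(replication_fits(50, 60, 1.0))  # -> True
```

The same arithmetic run in reverse sizes the circuit: fix the RPO and the change rate, and solve for the bandwidth the Direct Connect order needs to carry.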

Aegis coverage across on-premises and NC2

For AIM clients running NC2, the Aegis service set extends naturally across both environments. The operational consistency is what justifies NC2 over alternative cloud DR approaches; the same co-managed service covers both locations.

Aegis PM

Cluster health monitoring across on-premises and NC2. Replication status and RPO breach detection. Cluster capacity and performance trending. AWS-side metrics integration covering Direct Connect health, Transit Gateway, and bare-metal host status from AWS.
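RPO breach detection of the kind described above reduces to comparing each protected VM's last successful replication time against its RPO target. The sketch below is a minimal illustration; the data shape, field names, and VM names are hypothetical, not the Aegis PM or LogicMonitor data model.

```python
# Minimal RPO-breach check. Data shape and names are hypothetical,
# not the actual monitoring data model.

from datetime import datetime, timedelta, timezone

def rpo_breaches(vms: list[dict], now: datetime) -> list[str]:
    """Return names of VMs whose replication age exceeds their RPO."""
    return [vm["name"] for vm in vms
            if now - vm["last_replicated"]
            > timedelta(minutes=vm["rpo_minutes"])]

now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
vms = [
    {"name": "erp-db", "rpo_minutes": 15,
     "last_replicated": now - timedelta(minutes=40)},
    {"name": "web-01", "rpo_minutes": 60,
     "last_replicated": now - timedelta(minutes=10)},
]
print(rpo_breaches(vms, now))  # -> ['erp-db']
```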

Aegis IR

First-call response for NC2 cluster issues. Coordinated escalation to Nutanix TAC for software defects. Coordinated escalation to AWS support for AWS infrastructure issues. Single point of contact regardless of which vendor's layer is experiencing the problem.

Aegis LM

AOS and AHV lifecycle management identical to on-premises. NC2-specific lifecycle considerations including bare-metal generation transitions and AWS region considerations. Coordinated upgrade scheduling between on-premises and cloud clusters so the estate stays aligned.

Where each workload class actually belongs

NC2 is one of four credible destinations for a given workload (the fourth being the on-premises cluster itself), and it is rarely the right answer for every workload in an estate. The architectural decision is workload-by-workload, anchored in the business driver. NC2 has a clear role; it is not the universal answer.

Nutanix Cloud Clusters (NC2)

Nutanix Cloud Platform running on AWS bare-metal hosts. Same AOS, AHV, and Prism Central as on-premises. Local NVMe storage forms the Nutanix DSF storage pool; AWS native services are used for adjacent purposes (backup target, S3 cold tier, cross-region replication) but not for cluster storage.

Best fit: On-premises platform is, or is becoming, Nutanix. The cloud workload should run AHV, not vSphere. The use case is DR, burst, data center exit, or regional expansion of a Nutanix environment. Long-term cloud architecture is the destination, not a short-term bridge.

Tradeoffs: Requires Nutanix on-premises (or commitment to it) for the operational consistency value to land. Direct Connect is the long-lead-time dependency. Steady-state, high-utilization workloads with no DR or extension driver are usually cheaper on-premises.

Amazon VMware Cloud on AWS (VMC on AWS)

Customer's vSphere stack running on AWS-provided bare-metal infrastructure. Preserves NSX-T, SRM, and ISV vSphere certifications. Used as a bridge for workloads that cannot immediately move to AHV.

Best fit: VMware-dependent workloads that cannot immediately move to AHV. Applications requiring NSX-T, SRM, or specific ISV vSphere certification. Bridge architecture during application upgrade, refactor, or replacement. Customer retains Broadcom licensing under BYOL for the bridge period.

Tradeoffs: Bridge architecture, not destination. Broadcom licensing economics still apply for the duration. Operationally and architecturally separate from a Nutanix environment, so it does not deliver the cross-cluster operational consistency that NC2 does.

AWS Native Services

EC2, ECS, EKS, Lambda, RDS, MSK, and similar AWS managed services. The application is built or refactored to consume cloud-native primitives directly rather than running as a VM on a Nutanix or VMware platform.

Best fit: Cloud-native applications designed for autoscaling on EC2, ECS, EKS, or Lambda. Workloads moving toward managed services (RDS for databases, MSK for Kafka). Stateless web tiers where elastic horizontal scaling is the value driver.

Tradeoffs: Application architecture work is required; running a traditional VM workload on EC2 with no architectural change loses the operational consistency benefit and gains the cloud bill. Operational tooling is AWS-native, not Nutanix-native.

What makes NC2 architecturally different from generic cloud DR

A handful of architectural choices distinguish NC2 from third-party cloud DR overlays and from running a Nutanix VM on a generic cloud hypervisor. These are the points worth understanding before evaluating NC2 against alternative cloud DR approaches.

Same hypervisor end-to-end

NC2 hosts run AHV directly on AWS bare-metal EC2 instances. There is no AWS hypervisor in between. The cluster behaves like an on-premises Nutanix cluster because it is one; the only difference is the location of the bare-metal hardware.

Why this matters: VM constructs, snapshots, replication semantics, and operational tooling are identical across on-premises and cloud. Workloads do not change format or behavior when they move. Operations teams do not relearn the platform.

Local NVMe storage, not EBS

The Nutanix Distributed Storage Fabric is built from the local NVMe on each bare-metal host. AWS native storage services (S3, EBS, FSx) are not the primary cluster storage; they are used for adjacent purposes such as backup targets, archival cold tier, and cross-region replication.

Architectural implication: NC2 cluster I/O performance follows on-premises DSF characteristics, not EBS characteristics. This is the technical foundation for treating NC2 as architecturally consistent with on-premises Nutanix rather than as a different platform that happens to run Nutanix software.

Single management plane via Prism Central

Prism Central manages NC2 clusters alongside on-premises Nutanix clusters. A single instance can manage on-premises and cloud Nutanix environments together, with the same dashboard, the same VM constructs, and the same protection policies.

What changes operationally: The on-premises versus cloud distinction becomes a deployment-time decision, not an operational one. For Aegis-managed environments, Aegis PM extends across both locations via LogicMonitor; the operational view is unified.

Who NC2 is for

NC2 has a clear architectural role and a set of environments where it does not belong. The list below names both, so the fit conversation can be specific.

NC2 fits organizations that:

  • Run Nutanix on-premises, or are committed to it, and need cloud DR, burst capacity, a data center exit, or regional expansion.
  • Want one operational model - the same AOS, AHV, and Prism Central - across on-premises and cloud.

NC2 is the wrong fit when:

  • There is no on-premises Nutanix footprint and no commitment to one.
  • The workload depends on vSphere features (NSX-T, SRM, ISV certifications) - that is VMC on AWS territory.
  • The workload is steady-state and highly utilized with no DR or extension driver; it is usually cheaper on-premises.
  • The application is cloud-native and belongs on AWS managed services.

Frequently Asked Questions

Is NC2 a VMware exit option?

No. NC2 runs Nutanix AHV, not vSphere. The VMware exit destinations in the AIM portfolio are AHV on UCS for the on-premises core and Amazon VMware Cloud on AWS (VMC on AWS) for VMware-dependent workloads that need a bridge. NC2 is the cloud extension of a Nutanix environment, not the path off VMware. This framing is the most common source of evaluation confusion and is worth getting right early.

How do we choose between NC2 and Amazon VMware Cloud on AWS?

Choose by workload class. NC2 is the right cloud target when the workload is or is becoming Nutanix on-premises. VMC on AWS is the right cloud target when the workload depends on vSphere features (NSX-T, SRM, specific ISV certifications) and cannot move to AHV in the relevant timeframe. Most AIM environments end up using both: NC2 for the bulk of Nutanix workloads, VMC on AWS for the smaller VMware-dependent set.
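The workload-class logic above can be sketched as a toy decision helper. The inputs and categories below are simplifications of this guide's guidance, not an official placement tool.

```python
# Toy decision helper mirroring the workload-class guidance in this
# guide. A simplification for illustration, not a placement tool.

def cloud_target(on_nutanix: bool, needs_vsphere: bool,
                 cloud_native: bool) -> str:
    if cloud_native:
        return "AWS native services"   # refactored for managed services
    if needs_vsphere:
        return "VMC on AWS (bridge)"   # NSX-T / SRM / ISV dependency
    if on_nutanix:
        return "NC2"                   # extend the Nutanix environment
    return "re-evaluate: no clear fit"

print(cloud_target(True, False, False))   # -> NC2
print(cloud_target(False, True, False))   # -> VMC on AWS (bridge)
```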

Can we use NC2 without on-premises Nutanix?

Technically yes, but architecturally it is rarely the right answer. NC2 in isolation gives up the operational consistency value that justifies it. For greenfield cloud-only Nutanix deployments, regional expansion is a credible use case; for everything else, on-premises Nutanix should come first or alongside.

Why does NC2 use local NVMe instead of EBS?

To preserve Nutanix DSF performance characteristics. The Distributed Storage Fabric is built from the local NVMe on each bare-metal host, the same way it is built from local NVMe in an on-premises Nutanix cluster. Running DSF on EBS would change the I/O profile and break the architectural consistency that makes NC2 valuable. AWS native storage services are used for adjacent purposes, not for primary cluster storage.

What is the long-lead-time item in NC2 deployment?

AWS Direct Connect. Circuits typically take weeks to deliver and are the gating dependency for production NC2 deployments. VPN is acceptable as an interim or for smaller deployments, but Direct Connect is the recommended path for production replication traffic. Starting the Direct Connect conversation early in the engagement, ideally in parallel with use case definition, prevents it from becoming the schedule constraint.

Ready to explore NC2 for your environment?

IVI's cloud architecture team works with organizations to evaluate NC2 fit, design the network architecture, and deploy NC2 clusters as part of a unified AIM engagement. Start with a use case conversation to determine if NC2 aligns with your cloud extension requirements.
