The AI Infrastructure Stack: Jensen Huang’s “5-Layer Cake” as a Framework for Enterprise Transformation

EXECUTIVE SUMMARY

The AI market is currently dominated by discussions around models and applications, but the largest operational bottlenecks are emerging several layers lower in the stack. Jensen Huang’s “5-layer cake” framework identifies the five interdependent layers required for enterprise AI at scale: energy, accelerated computing, infrastructure, models, and applications. Enterprises that modernize only the application layer will encounter scaling failures long before achieving meaningful ROI. The organizations that win will be the ones that treat AI as infrastructure — not software.

FIGURE 01 · THE 5-LAYER CAKE
Jensen Huang’s framework: AI as a vertically integrated infrastructure stack
  BUSINESS VALUE · VISIBLE TO LEADERSHIP
  Layer 5 — Applications: copilots · workflow automation · predictive analytics · ticket routing
  Layer 4 — Models: operational + physical AI · digital twins · cybersecurity automation
  Layer 3 — Infrastructure (AI Factory): storage · fabrics · orchestration · observability · security telemetry
  Layer 2 — Accelerated Computing: GPU clusters · HBM · RDMA fabrics · distributed inference systems
  Layer 1 — Energy: power density · thermal architecture · cooling · facility redundancy
  PHYSICAL FOUNDATION · WHERE FAILURES ORIGINATE
Each layer depends on the integrity of the layers beneath it · Source: WUC Technologies engagement archive, mapped to NVIDIA framing

Why Jensen Huang’s “5-Layer Cake” Changes Enterprise IT Strategy

In his recent GTC keynote, NVIDIA CEO Jensen Huang described artificial intelligence as a “5-layer cake” composed of energy, chips (accelerated computing), infrastructure, models, and applications. The framing matters because it reframes AI from a software conversation into an infrastructure conversation.

Most organizations still evaluate AI primarily at the application layer:

  • copilots
  • chat interfaces
  • workflow automation
  • analytics platforms

But enterprise AI failures rarely originate there. The real constraints appear lower in the stack:

  • storage throughput collapse under inference workloads
  • east-west network saturation
  • GPU cluster underutilization
  • telemetry blind spots
  • data pipeline fragmentation
  • security governance gaps between cloud and on-prem environments

The organizations successfully operationalizing AI are not merely deploying models. They are redesigning infrastructure around sustained high-density compute, low-latency data movement, and observability at scale.

For enterprise operators, Huang’s “5-layer cake” is less a metaphor and more a systems architecture model for the next decade of infrastructure engineering.

For organizations working with WUC Technologies, the implication is straightforward: AI readiness is now directly tied to infrastructure maturity.

Layer 1 — Energy: The Physical Constraint Most AI Strategies Ignore

Enterprise AI begins with power density.

That sounds obvious until organizations begin deploying inference clusters at scale and discover that existing facilities were designed for conventional virtualization workloads — not sustained GPU utilization across high-density racks.

The modern AI data center introduces operational challenges that traditional enterprise facilities rarely encountered:

  • thermal concentration
  • cooling inefficiency
  • rack power imbalance
  • UPS capacity exhaustion
  • increased east-west traffic heat generation
  • facility-level redundancy constraints

Hyperscalers already understand this. Enterprise environments are now catching up. The economics are changing quickly:

  • larger AI models require exponentially more compute
  • inference traffic is becoming persistent rather than burst-oriented
  • token generation introduces continuous utilization patterns
  • AI-assisted operations create always-on workloads

The result is that energy is no longer a facilities discussion isolated from IT operations. It is becoming a direct infrastructure scalability constraint.

The numbers reflect the shift. Conventional enterprise racks operate at 4–8 kW; modern GPU racks routinely exceed 50 kW, and NVIDIA’s GB200 NVL72 reference design pushes 132 kW per rack — roughly a 16–30× increase. Air cooling reliably tops out near 30 kW; everything beyond that requires direct-liquid or immersion. PUE targets are tightening from the conventional 1.5–1.8 range toward 1.1–1.2 for liquid-cooled AI builds. Training-cluster power footprints are now measured in tens to hundreds of megawatts: a 100,000-GPU H100 cluster draws roughly 150 MW, and announced gigawatt-scale builds are on the near horizon.
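The aggregate figures above can be sanity-checked with simple arithmetic. The sketch below is a planning heuristic, not a vendor specification: the per-GPU draw, the node/fabric/storage overhead factor, and the PUE are all stated assumptions.

```python
# Back-of-envelope facility power model for an AI build-out.
# All inputs are planning assumptions, not measured vendor data.

def facility_power_mw(num_gpus: int, gpu_watts: float,
                      overhead: float, pue: float) -> float:
    """Total facility draw in MW: GPU load, plus node/fabric/storage
    overhead as a fraction of GPU load, scaled by PUE."""
    it_load_w = num_gpus * gpu_watts * (1 + overhead)
    return it_load_w * pue / 1e6

# ~100k H100-class GPUs at ~700 W each, an assumed 85% non-GPU overhead,
# and a liquid-cooled PUE of 1.15 land near the ~150 MW figure cited above.
print(f"{facility_power_mw(100_000, 700, 0.85, 1.15):.0f} MW")
```

The same function answers the procurement-planning questions that follow: halving PUE headroom or doubling overhead moves the answer by tens of megawatts, which is why these inputs become strategic rather than facilities-only details.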

In practice, this changes procurement planning: rack density planning matters earlier, cooling architecture matters earlier, power distribution becomes strategic, and workload placement decisions become financially material.

The infrastructure conversation is now partially an energy conversation.

Notable operators in this layer

  • NextEra Energy · Power utility
  • Constellation · Nuclear / Power
  • Vistra · Power generation
  • GE Vernova · Grid / Turbines
  • Siemens Energy · Power systems
  • Schneider Electric · Power / Cooling
  • Eaton · UPS / PDU
  • Vertiv · DC cooling / UPS
  • Cummins · Backup generators

Layer 2 — Accelerated Computing: Why GPUs Changed the Economics of Enterprise Compute

Traditional enterprise infrastructure evolved around CPU-centric architectures optimized for transactional workloads and general-purpose virtualization. AI workloads behave differently.

Training and inference require massively parallel operations across enormous data sets. GPUs transformed AI because they dramatically improved parallel compute efficiency compared to conventional CPU architectures. This shift is now restructuring enterprise compute design itself.

The hardware specifics drive the architecture. A single NVIDIA H100 carries 80 GB of HBM3 at 3.35 TB/s; the H200 raises that to 141 GB of HBM3e at 4.8 TB/s; the Blackwell B200 roughly doubles capacity and bandwidth again at approximately 1 kW TDP per GPU. Cluster topology depends on NVLink 5 (1.8 TB/s GPU-to-GPU within a node) and InfiniBand NDR or XDR (400 or 800 Gb/s) for inter-node fabric. Below those bandwidth floors, distributed training and large-context inference degrade non-linearly — a fabric that looked sufficient for virtualized workloads will not look sufficient under a 256-GPU all-reduce.
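The non-linear degradation is easy to see with the standard ring all-reduce estimate, in which each GPU moves roughly 2·(N−1)/N of the gradient buffer over its slowest link. The model size and link bandwidths below are illustrative assumptions chosen to match the figures cited above.

```python
# Rough ring all-reduce time for one gradient sync step.
# Sizes and bandwidths are illustrative assumptions, not measurements.

def allreduce_seconds(param_bytes: float, n_gpus: int,
                      bus_bw_bytes_s: float) -> float:
    """Ring all-reduce moves ~2*(N-1)/N of the buffer per GPU,
    bottlenecked by the slowest link in the ring."""
    volume = 2 * (n_gpus - 1) / n_gpus * param_bytes
    return volume / bus_bw_bytes_s

grad_bytes = 70e9 * 2   # 70B parameters, bf16 gradients (~140 GB)
ndr_400g = 50e9         # 400 Gb/s InfiniBand NDR ≈ 50 GB/s per link
nvlink5 = 900e9         # NVLink 5 ≈ 900 GB/s per direction

print(f"inter-node (256 GPUs, 400G IB): {allreduce_seconds(grad_bytes, 256, ndr_400g):.1f} s")
print(f"intra-node (8 GPUs, NVLink 5):  {allreduce_seconds(grad_bytes, 8, nvlink5):.2f} s")
```

The order-of-magnitude gap between the two results is the point: a fabric sized for virtualized east-west traffic puts every sync step on the slow path, and that cost is paid on every iteration.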

The modern AI stack increasingly depends on:

  • GPU clusters
  • high-bandwidth memory architectures
  • low-latency interconnects
  • RDMA-capable fabrics
  • distributed inference systems
  • high-throughput storage pipelines

This creates architectural pressure throughout the environment. A GPU cluster operating at scale immediately exposes weaknesses elsewhere:

  • storage latency spikes
  • oversubscribed network fabrics
  • insufficient telemetry granularity
  • queue depth imbalance
  • bottlenecked east-west traffic paths

In other words, accelerated computing amplifies infrastructure weaknesses that conventional workloads often tolerated quietly. This is one reason many organizations underestimate AI adoption complexity. The visible application layer appears manageable. The underlying infrastructure dependencies are not.

Notable operators in this layer

  • NVIDIA · GPU silicon / CUDA
  • AMD · Instinct GPU / EPYC
  • Intel · Xeon / Gaudi
  • TSMC · Advanced foundry
  • Broadcom · Custom AI ASIC
  • Marvell · Networking silicon
  • Cerebras · Wafer-scale engine
  • Groq · Inference LPU
  • SambaNova · RDU systems

FIGURE 02 · AMPLIFICATION EFFECT
GPU clusters expose latent infrastructure weaknesses
  Weakness                 Conventional workload   AI workload at scale
  Storage latency          tolerable               inference collapse
  Oversubscribed fabric    absorbed                training stalls
  Telemetry gaps           rarely noticed          root cause invisible
  Queue imbalance          not visible             cluster underutilization
Latent weaknesses become operational failures under sustained AI workload

Layer 3 — Infrastructure: The Emergence of the AI Factory

One of Huang’s most important concepts is the idea of the “AI factory.”

Traditional data centers process business operations: ERP, email, virtualization, storage, transactional systems. AI factories generate intelligence itself. Their output is:

  • predictions
  • inference
  • automation
  • reasoning
  • optimization
  • synthetic generation
  • operational recommendations

That distinction changes infrastructure priorities significantly. The AI factory depends on synchronized performance across storage systems, compute fabrics, telemetry systems, networking, orchestration platforms, observability tooling, and security instrumentation.

This is where infrastructure modernization becomes operationally critical. Many enterprise environments still contain:

  • fragmented monitoring systems
  • siloed storage telemetry
  • aging Fibre Channel fabrics
  • inconsistent cloud integration
  • legacy network segmentation models
  • limited east-west visibility

Those limitations become materially more dangerous under AI workloads because AI amplifies throughput sensitivity. A latency condition that produces minimal impact in a conventional VM environment may severely degrade inference performance inside distributed AI systems.
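One way to see why distributed systems amplify small latency problems is the fan-out tail effect: a per-shard slow tail that looks harmless in isolation dominates request latency once every inference request touches many shards in parallel. The percentages and fan-out below are illustrative, not measured values.

```python
# "Tail at scale" sketch: how per-shard p99 latency compounds under fan-out.
# The 1% slow-tail probability and the 64-way fan-out are illustrative.

def p_all_fast(p_single_fast: float, fanout: int) -> float:
    """Probability that every one of `fanout` independent parallel calls
    beats the latency target (assumes independence between shards)."""
    return p_single_fast ** fanout

# A shard that is slow only 1% of the time looks fine on its own dashboard.
# Fan one request across 64 shards and most requests wait on a slow shard:
print(f"{1 - p_all_fast(0.99, 1):.0%} of 1-shard requests hit the tail")
print(f"{1 - p_all_fast(0.99, 64):.0%} of 64-shard requests hit the tail")
```

This is the same mechanism behind the amplification table in Figure 02: the conventional environment samples the tail rarely, while the distributed AI workload samples it on nearly every request.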

The architectural delta between a conventional data center and an AI factory is not incremental — it is generational:

  Dimension                  Conventional data center         AI factory
  Rack power density         4–8 kW typical                   50–132+ kW (GB200 NVL72 = 132 kW)
  Cooling architecture       Air (CRAC / CRAH)                Direct liquid + immersion
  Network fabric             10 / 25 / 100 GbE Ethernet       400 / 800 GbE + InfiniBand NDR / XDR
  Storage tier               SAN / NAS hybrid (HDD + flash)   Parallel filesystem, all-flash (Lustre, WekaIO, VAST)
  Observability granularity  Per-VM metrics, uptime focus     Per-GPU, per-fabric-port, token-level telemetry
  PUE target                 1.5–1.8 typical                  1.1–1.2 (liquid-cooled)
  Power per facility         1–2 MW                           10–50+ MW per training cluster

THE NEW REQUIREMENT

AI workloads must be observable end-to-end

That includes storage queue depth visibility, GPU utilization telemetry, network congestion analysis, inference latency mapping, cross-domain correlation, and automated anomaly detection. Organizations that treat observability as optional operational tooling will struggle to scale AI reliably.
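As a minimal illustration of the automated anomaly detection mentioned above, a rolling z-score over any of these telemetry streams (storage queue depth, GPU utilization, fabric congestion) flags excursions against recent behavior. The window size and threshold here are arbitrary sketch choices, not tuned production values.

```python
# Rolling z-score anomaly flagging over a telemetry stream — a sketch,
# not a product feature. Window and threshold are illustrative choices.
from collections import deque
from statistics import mean, pstdev

def flag_anomalies(samples, window=16, threshold=3.0):
    """Yield (index, value) for samples far outside the rolling window
    of recent values; skips samples until the window is full."""
    recent = deque(maxlen=window)
    for i, x in enumerate(samples):
        if len(recent) == window:
            mu, sigma = mean(recent), pstdev(recent)
            if sigma > 0 and abs(x - mu) / sigma > threshold:
                yield i, x
        recent.append(x)

# A queue-depth series with one sustained spike:
series = [8, 9, 8, 10, 9, 8, 9, 10, 9, 8, 9, 10, 8, 9, 10, 9, 64, 9, 8]
print(list(flag_anomalies(series)))
```

Real deployments correlate flags like these across domains (storage, fabric, GPU) before alerting; the point of the sketch is that even the simplest statistical baseline beats threshold-free dashboards for catching AI-workload excursions.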

Notable operators in this layer

  • Dell Technologies · Servers / Storage
  • Cisco · Network / Security
  • HPE · Servers / Cray
  • Supermicro · GPU servers
  • Arista · DC networking
  • Pure Storage · All-flash storage
  • NetApp · Hybrid storage
  • AWS · Hyperscaler
  • Microsoft Azure · Hyperscaler
  • Google Cloud · Hyperscaler / TPU
  • Oracle Cloud · OCI / RDMA
  • Equinix · Colocation
  • Digital Realty · Colocation
  • VAST Data · AI-native storage
  • NVIDIA DGX · AI factory ref-arch

AI-READINESS ASSESSMENT

Where does your storage and fabric break under AI load?

WUC engineers map the latent failure modes — queue depth, east-west saturation, telemetry gaps — before the first GPU cluster lands on your floor.

Request an assessment →

Layer 4 — Models: The Intelligence Layer Is Expanding Beyond Chatbots

Public AI discussion remains heavily centered on generative chat interfaces. Enterprise deployment patterns tell a different story.

The largest long-term AI impact is likely to emerge from operational and physical AI systems:

  • industrial automation
  • predictive maintenance
  • manufacturing optimization
  • digital twins
  • cybersecurity automation
  • healthcare analytics
  • infrastructure operations intelligence

This transition matters because operational AI introduces much stricter infrastructure requirements than consumer-facing chatbot workloads:

  • manufacturing AI systems require deterministic latency
  • healthcare analytics require governance and auditability
  • cybersecurity AI requires real-time telemetry ingestion
  • infrastructure AI depends on continuous observability streams

The model layer therefore becomes deeply dependent on infrastructure integrity. This is where many organizations encounter architectural fragmentation: disconnected telemetry pipelines, inconsistent data normalization, fragmented operational tooling, incomplete event correlation, weak governance models.

AI models are only as effective as the operational systems feeding them.

The model itself is not the moat.
The operational environment supporting the model increasingly is.

Notable operators in this layer

  • OpenAI · GPT / o-series
  • Anthropic · Claude
  • Google DeepMind · Gemini
  • Meta AI · Llama
  • Mistral AI · Open-weight
  • Cohere · Enterprise RAG
  • xAI · Grok
  • IBM · Granite / watsonx
  • Databricks · DBRX / Lakehouse
  • Hugging Face · Model hub
  • NVIDIA NeMo · Enterprise AI
  • Microsoft Phi · Small models

FIELD CHECKLIST · FREE PDF

AI Infrastructure Readiness Checklist — the 5-Layer Audit

A two-page printable workbook. One section per layer. Concrete thresholds, command snippets, and the questions to ask before procurement signs off on an AI build.

Inside: rack-density worksheet (Layer 1) · GPU + fabric capacity check (Layer 2) · observability gap audit (Layer 3) · data-pipeline governance map (Layer 4) · application-readiness scorecard (Layer 5)

Work emails only · no spam · you can unsubscribe from any follow-up email · we audit-log requests for abuse prevention.

Layer 5 — Applications: Where Enterprise ROI Actually Materializes

Applications remain the most visible AI layer because this is where business leaders directly experience outcomes:

  • AI copilots
  • workflow automation
  • predictive analytics
  • intelligent ticket routing
  • automated incident correlation
  • infrastructure optimization engines
  • customer support orchestration

But successful AI applications depend entirely on the maturity of the lower layers. This is where many enterprise AI initiatives fail. Leadership teams often attempt to deploy AI applications before data pipelines are stabilized, observability is mature, infrastructure bottlenecks are mapped, governance models are operationalized, and telemetry integrity is validated.

The result is predictable:

  • unreliable outputs
  • inconsistent inference performance
  • operational distrust
  • security escalation
  • governance conflicts
  • runaway infrastructure costs

The organizations achieving measurable ROI are approaching AI differently. They are treating AI as an infrastructure modernization initiative first and an application initiative second.

Notable operators in this layer

  • Microsoft Copilot · M365 / Dynamics
  • Salesforce · Einstein / Agentforce
  • ServiceNow · Now Assist
  • Adobe · Firefly / Sensei
  • Palantir · AIP / Foundry
  • Snowflake · Cortex AI
  • UiPath · Agentic RPA
  • Workday · HR / Finance AI
  • Datadog · AI observability
  • Splunk · Security AI
  • Dynatrace · Davis AI / APM
  • HubSpot · Breeze / CRM

Non-exhaustive editorial map · vendors listed reflect notable ecosystem participation, not endorsement · brand marks are property of their respective owners.

The Hidden Enterprise Opportunity: Infrastructure Modernization for AI Operations

One of the most overlooked implications of Huang’s framework is that AI increases the strategic importance of infrastructure engineering rather than diminishing it.

As AI adoption accelerates:

  • storage demand increases
  • telemetry volume increases
  • network complexity increases
  • observability requirements expand
  • security surfaces multiply
  • east-west traffic intensifies
  • compute density rises

This creates significant demand for enterprise infrastructure modernization, hybrid cloud integration, storage optimization, network architecture redesign, observability engineering, and AI-ready operational environments.

For organizations like WUC Technologies — with deep experience across enterprise storage, Cisco networking, virtualization platforms, and infrastructure operations — this shift aligns directly with where enterprise demand is heading.

The market is moving beyond generic cloud migration discussions. The next phase is operational AI infrastructure.

AI Observability: The New Operational Discipline

AI infrastructure introduces a visibility problem most enterprises are not fully prepared for. Traditional monitoring approaches were designed around uptime, CPU utilization, storage capacity, and transactional latency.

AI environments require deeper operational telemetry:

  • inference latency mapping
  • GPU saturation analysis
  • vector pipeline tracing
  • token-generation performance
  • distributed workload correlation
  • model drift detection
  • cross-domain event analysis

Modern observability stacks increasingly integrate Splunk, Datadog, Dynatrace, ServiceNow, OpenTelemetry, and internal AI-assisted operational agents.

The operational model is changing from reactive monitoring toward predictive infrastructure intelligence. That transition is likely to define the next generation of enterprise operations engineering.

FIGURE 03 · OBSERVABILITY STACK FOR AI OPERATIONS
From reactive monitoring to predictive infrastructure intelligence
  TELEMETRY SOURCES: GPU saturation (per-card utilization) · storage queue depth (per-fabric, per-LUN) · network congestion (east-west fabric load) · inference latency (token / request) · model drift (accuracy regression)
  CORRELATION ENGINE: Splunk · Datadog · Dynatrace · OTel · cross-domain analysis
  PREDICTIVE INTELLIGENCE: anomaly detection · capacity forecasting · auto-remediation
Telemetry sources feed cross-domain correlation; correlation feeds predictive intelligence

Final Thoughts

Jensen Huang’s “5-layer cake” framework succeeds because it accurately reflects how enterprise AI is actually being operationalized. AI is not a standalone software category. It is an infrastructure stack:

  • Energy powers compute.
  • Compute powers infrastructure.
  • Infrastructure powers models.
  • Models power applications.
  • Applications generate business value.

Every layer depends on the integrity of the layers beneath it.

For enterprise leaders, the takeaway is increasingly difficult to ignore: the organizations that treat AI as an infrastructure transformation initiative will scale faster, operate more reliably, and realize ROI earlier than organizations focused solely on the application layer.

The AI era is not eliminating infrastructure engineering. It is making infrastructure engineering strategically central again.

About S. O’Brien

Senior Principal Engineer at WUC Technologies, leading enterprise infrastructure operations and AI-readiness assessments for enterprise manufacturing, healthcare, and financial-services clients. Two decades of fieldwork across Fibre Channel fabrics, GPU cluster integration, hypervisor storage stacks, and observability engineering. Authorized Dell and Cisco partner; SOC 2 Type II audit-ready operations.

Planning AI infrastructure modernization?

WUC Technologies helps enterprise IT teams assess AI readiness across storage, network, compute, observability, and security layers — before the first GPU cluster lands on the floor.

Book a Discovery Call
Get a Custom Solution