
Five Takeaways From the 2026 Digital Infrastructure Summit

April 22, 2026
Latham recently convened industry leaders at the inaugural Digital Infrastructure Summit in New York. Explore the key insights on today’s most pressing data center trends shaping the market.

Latham & Watkins recently hosted its inaugural 2026 Digital Infrastructure Summit in New York. Latham partners, clients, industry experts, and thought leaders came together to share insights into the rapidly evolving sector and discuss opportunities in the market. The summit underscored that AI-driven digital infrastructure is a multi-trillion-dollar, multi-decade buildout at the intersection of national security, energy markets, supply chains, capital markets, and regulatory policy.

Here are five key takeaways:

1. Power and deal structures are now the critical path for data center growth.

Power availability, not real estate, is the binding constraint on new data center capacity. Developers are responding by securing interconnection positions, building substations and transformers, and increasingly relying on on-site and behind-the-meter generation, demand response and curtailment, and hybrid power mixes to meet aggressive time-to-power targets. AI-driven load variability, including large step changes in demand, is also elevating batteries, control systems, and curtailment provisions as reliability tools that enable rapid scaling. That uncertainty is now embedded in contracts and capital: customers push on force majeure and delivery risk; developers seek greater design completion before signing; and financiers underwrite not only credit risk but also permitting, supply chain, and execution risk. As technology changes faster, leverage is shifting toward tighter basis-of-design documentation, clearer change-order regimes, and more explicit pricing of obsolescence and residual-value uncertainty across a broader mix of capital structures.

2. Design discipline is now a core competitive advantage in digital infrastructure.

As digital infrastructure has evolved into a more structured, capital-intensive asset class, developers are being pushed to apply greater discipline in risk allocation, counterparty selection, and execution. Rapid shifts from air to liquid cooling are driving higher densities, heavier equipment, and stricter technical requirements, forcing designs that emphasize flexibility, modularity, and optionality to accommodate customer needs that change on compressed timelines. Success increasingly depends on deep technical expertise, access to capital and power, and trust with partners and communities, as power and cooling constraints begin to shape not just facility design but the ultimate scale and viability of projects.

3. AI workload mix is reshaping infrastructure demand.

Training requires ultra-dense, tightly coupled compute, while inference is distributed and spiky, demanding orchestration, redundancy strategies, and new thinking on risk allocation and potential liability as models and agents proliferate. Training clusters concentrate extreme power and cooling demands in a single footprint, making time-to-power, network topology, and physical proximity constraints decisive for site selection. Inference, by contrast, can see sudden order-of-magnitude demand spikes, increasing the importance of software routing, throttling policies, and multi-region capacity planning. As “always-on” agentic systems expand, providers are also anticipating tougher questions around service-level agreements, responsibility boundaries, and legal exposure when downstream outputs or outages cause real-world harm.

4. Industrial-scale AI is driving a massive buildout constrained by power, hardware, and talent.

We are approaching an inflection point at which “non-biological intelligence” is being manufactured at industrial scale — driving a compounding collision of (i) rapidly improving model capability; (ii) geopolitical competition for AI supremacy; and (iii) hard constraints in energy, supply chains, and talent. As scaling has shifted from relying primarily on human-generated data to heavier inference-time compute, reasoning and autonomy have accelerated rather than plateaued, implying continued demand-side shock for compute even as the unit cost of intelligence falls. The result is a historic infrastructure buildout, with “intelligence factories” increasingly defined by gigawatt-scale power strategies, critical hardware bottlenecks (transformers, high-bandwidth memory, advanced chips), and the need to win public support for rapid deployment.

5. Local opposition and regulatory moratoriums are emerging as material barriers to data center deployment.

Even when power, land, and capital align, projects are increasingly delayed by community opposition and local government intervention. NIMBY-driven litigation over zoning, permits, and entitlements can add months or years, fueled by concerns around noise, water use, grid strain, and limited local job creation. In response, more jurisdictions are imposing moratoriums or heightened review standards, creating regulatory uncertainty that complicates financing and chills investment. Developers that treat permitting and community relations as core workstreams—backed by early engagement and credible mitigation plans—are better positioned to avoid stranded capital and protect time-to-revenue.

Endnotes

This publication is produced by Latham & Watkins as a news reporting service to clients and other friends. The information contained in this publication should not be construed as legal advice. Should further analysis or explanation of the subject matter be required, please contact the lawyer with whom you normally consult. The invitation to contact is not a solicitation for legal work under the laws of any jurisdiction in which Latham lawyers are not authorized to practice. See our Attorney Advertising and Terms of Use.