Investment Research Process

AI IMPLICATION & CONSEQUENCES FOR PUBLIC EQUITY PORTFOLIOS

Written by Michael Miller | Dec 8, 2025 2:30:00 PM

Today’s Market Landscape

A technological revolution is underway with the rise of AI. As of late 2025, we are on familiar ground from a historical perspective, with investor enthusiasm driving higher stock prices for those companies expected to be future leaders. However, as with all periods of significant change and growth, valuation has become very complex; stocks that look optically expensive may be very reasonably valued, and vice versa. The combination of large price gains and seemingly endless investor enthusiasm has led some to declare that a bubble has formed.

We’ll take no position on the bubble question here. Rather, we are focused on high levels of AI-related index concentration, the context incubating the current cycle, and the implications for public equity returns.

Parallels to the e-commerce and internet boom of the late 1990s are interesting; however, with regard to AI, the potential impact seems less certain. Under some scenarios, AI has far greater societal and economic implications than the internet, while in others the potential is dramatically less. Most importantly, in either case, the implications for investors are significant and generally negative.

What is already clear is that the business dynamics regarding AI are, again, quite complex, with questions around the magnitude of capex, the lifecycles of the assets being purchased, and the ability to generate sufficient revenue to support both initial and ongoing capex.

Given the market capitalizations of AI-related names, investors have little choice but to take a view on this, since it has a major impact on benchmark comparisons. Our goal is to make the trade-offs clear as investors think about public equity positioning in the current environment.

 

A Useful Outside View

A recent note from a very seasoned long/short equity manager we follow offers a useful framing of the broader issues. He is a sensible, somewhat contrarian investor with a slight value bias, though only towards companies that still have solid growth prospects.

He notes that after a brief pause in the first half of the year, the AI trade again dominated the market, helped by a series of OpenAI announcements. Nvidia suggested that AI-related capital spending could reach $3-4 trillion by 2030. This excludes the roughly $1.4 trillion of additional power investment Deloitte estimates will be required. In his view, there is little value in debating whether this is a bubble; every major technological shift has gone through a period of exuberance followed by correction, and that enthusiasm is often part of how the infrastructure gets built. Some capital will be lost, though the timing and scale are unknown.

His core concern is the nature of the assets being built and the uncertainty of the business model. Earlier waves of infrastructure, such as railroads or fiber networks, had multi-decade lives and needed limited incremental capital. AI data centers may need to replace roughly 60% of their capital base every five years because GPUs become obsolete so quickly. This creates unusually high cash requirements even before considering any return on capital. He believes this level of capital intensity is without precedent in technology or in most sectors. Given the short asset life, he expects capital intensity to exceed even the most demanding industries. That implies that margins will have to reach very high levels, or returns on capital will be weak, or GPU costs will need to fall sharply.

In the near term, risks appear contained. Installed capacity remains small relative to demand, and most spending is being funded with equity or internal cash flows. The medium-term picture is more uncertain. Supply and demand will eventually meet, which will compress returns; capex may grow too large to fund solely through equity or cash flow. At that point, the business model will need to prove it can support both expansion and the heavy maintenance spending required to sustain the asset base. He also points to broader economic risks from potential wealth effects or, depending on how adoption unfolds, either a slowdown or deflationary pressure and job losses.

He also highlights the lack of operating leverage so far. New technologies often see rapid declines in marginal cost, but that has not happened here. OpenAI has cut prices sharply, reducing the price of its flagship model by 80% in 2025. Usage is up tenfold over 18 months, yet revenue is only about 2.5 times higher. Competition is intense, and many models are not meaningfully differentiated. Adoption is rising because prices are well below production cost. If those costs do not fall, prices will eventually need to rise to cover average cost; only a small portion of users pays today.
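The operating-leverage point can be checked with simple arithmetic. The usage and revenue multiples below are the figures cited in the note; the implied per-unit price is our own derived quantity, and "unit" is an abstract stand-in for whatever usage is measured in (tokens, queries, etc.).

```python
# Back-of-the-envelope check of the operating-leverage point above.
# Inputs are the figures cited in the note; the per-unit price is derived.

usage_multiple = 10.0    # usage up ~10x over 18 months (cited)
revenue_multiple = 2.5   # revenue only ~2.5x higher (cited)

# Implied change in average realized price per unit of usage:
implied_price_multiple = revenue_multiple / usage_multiple  # 0.25

print(f"Realized price per unit is ~{implied_price_multiple:.0%} of its prior level,")
print(f"i.e. a ~{1 - implied_price_multiple:.0%} decline in monetization per unit served.")
```

In other words, each unit of usage is being monetized at roughly a quarter of its prior rate, which is why tenfold usage growth translates into only 2.5x revenue.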

Looking ahead five years, he estimates that if total AI capex reaches $4 trillion, maintaining the existing asset base would require about $2.4 trillion in reinvestment. Assuming a 30% return on that capital, based on cloud-industry economics, annual revenue would need to exceed $5 trillion, with roughly $1.7 trillion required for maintenance capex alone. For context, global e-commerce revenue excluding China is projected at $1.5 trillion, internet advertising at $0.9 trillion, and US IT spending excluding AI at about $1.1 trillion. He also believes regulated utilities, with capped returns and leveraged balance sheets, cannot fund the necessary power build-out. Much of that burden will fall on AI operators and end users. He estimates additional energy-related investment needs of roughly $1 trillion.
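The five-year arithmetic above can be laid out explicitly. All dollar figures (in trillions) are taken from the note; how they combine into the ">$5 trillion revenue" claim is our reading of his logic, not his actual model.

```python
# Rough reconstruction of the manager's five-year arithmetic.
# All dollar amounts are in trillions and come from the note itself;
# the way they are combined here is an illustrative reading, not his model.

total_capex = 4.0             # cumulative AI capex in five years (cited)
replacement_share_5yr = 0.60  # ~60% of the asset base refreshed each 5 years (cited)
reinvestment_5yr = replacement_share_5yr * total_capex  # ~$2.4T to maintain the base

required_return = 0.30        # cloud-industry return on capital (cited assumption)
annual_profit_needed = required_return * total_capex    # ~$1.2T/yr
annual_maintenance = 1.7      # annual maintenance capex (cited)

# Profit plus maintenance alone approach ~$2.9T/yr before any operating
# costs, consistent with the claim that revenue must exceed ~$5T.
floor = annual_profit_needed + annual_maintenance
print(f"Reinvestment over 5 years: ~${reinvestment_5yr:.1f}T")
print(f"Annual profit + maintenance floor: ~${floor:.1f}T (before opex)")
```

Against that required revenue, the comparables he cites (global ex-China e-commerce at $1.5 trillion, internet advertising at $0.9 trillion, US ex-AI IT spending at $1.1 trillion) together total only about $3.5 trillion.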

 

Counterarguments

Before turning to the historical perspective that supports the concerns outlined above, it is important to acknowledge the most obvious cases for maintaining significant exposure to today’s leading AI companies.

 

Point #1: The AI boom is being led by highly successful, well-established businesses with strong balance sheets, rapid earnings growth, scalable models, and substantial free cash flow generation. Valuations are not cheap, but they are defensible given the quality and prospects of these firms.

 

The chart below, supplied by Goldman Sachs, provides a useful perspective regarding the power of tech earnings, particularly over the last decade.

Goldman also notes that valuations, based on forward Price/Earnings to Growth (“PEG”) ratios, are high, but not nearly as extreme as those witnessed in the past.

Goldman believes these valuations, while high, are reasonable given ROEs.

This is further supported by a comparison of the Magnificent 7 to the other 493 names comprising the S&P 500, courtesy of Sparkline Capital.

There is much to consider regarding the ultimate return on capital from AI investments for the Magnificent 7, especially given the radical shift in their business models towards far more asset-heavy structures. Even so, their dominance and profitability are likely to remain strong in the interim, which may be enough to support current prices.

 

Point #2: As noted earlier, AI capacity remains well below demand, which supports earnings and continued capex expansion. Markets today are driven largely by near-term conditions, so longer-term risks are not a major concern for many investors.

 

This is also a powerful argument, since market preferences are now deeply established and structurally different from those of the past.[1]

 

Point #3: Investors should adapt to market conditions to manage the risk of under-performance, since periods of sustained lag often lead to reactionary decisions that leave them worse off than if they had controlled relative risk from the start.

 

For perpetual institutions with a spending rate to support, this point is both entirely reasonable and highly disconcerting. Market and economic history are clear on the dangers associated with elevated valuations and periods of rapid technological change. From 2000 to 2002, for example, the S&P 500 fell 37.6%, and every dollar allocated there lost roughly half of its purchasing power. At the same time, investments in small-cap value stocks or with skilled value managers largely preserved their real value.

Yet before that correction, many investors had already abandoned what would become the winning strategies in favor of whatever was working at the time. This cycle is unlikely to be different, as it is difficult to escape the perception that under-performance reflects low skill or a limited understanding of the present or future. As discussed later in this memorandum, the best approach is one that weighs the trade-offs involved.

 

Core Concerns

Our concerns regarding market conditions and AI valuations are built on three key issues.

  1. The change underway demands massive levels of capex, both initially and on an ongoing basis. The scale is such that it is difficult to see how sufficient revenues can be generated to support these investments.
  2. Current market pricing reflects very high confidence that the biggest beneficiaries of AI will be today’s best-known companies. History and logic suggest that lucrative areas attract competition and innovation from smaller or newer entrants. We see no reason why AI would be different.
  3. Concentration of AI-related names in the index is unusually high, which increases the risks of any unwinding. Negative wealth effects from such a reversal would create major economic challenges in the US and elsewhere.

Below are several charts that illustrate these issues. The first group shows the dramatic change in capital intensity for today’s leading names. It is striking that the significant deterioration of one of the most compelling characteristics of the Magnificent 7 (i.e., massive and growing free cash flows at scale) is being overlooked.

At the same time, what feels like endless demand for AI stocks has led many investors to forget how the last major technology shift played out. The chart below shows how destructive multiple contraction can be, even for high-quality businesses.

Making matters worse is that, unlike the late 1990s, the default position for many investors today is to own broad indices such as the S&P 500 or ACWI. The risk of high valuations is thus amplified by the added risk of concentration.

Comparisons with the late 1990s are common; the table below suggests that valuations today are not as extreme, although this relies on forward earnings projections that may or may not prove accurate.

Source: Goldman Sachs

 

Of note is the sheer size of today’s crop of leaders relative to those of 2000. At that time, the combined market capitalization of the tech bubble leaders was 26% of US GDP; today’s figures represent 67% of current GDP. Is there not some upper bound to how valuable businesses can become relative to the size of the overall economy?
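To put those GDP ratios in dollar terms: the 26% and 67% figures are from the text above, while the GDP levels below are approximate outside figures, used purely for illustration.

```python
# Translating the market-cap-to-GDP ratios above into dollar magnitudes.
# The 26% and 67% ratios are cited in the text; the GDP levels are
# approximate outside figures (assumptions), used only for scale.

gdp_2000 = 10.3   # approx. US GDP in 2000, $T (assumption)
gdp_today = 30.0  # approx. US GDP today, $T (assumption)

leaders_2000 = 0.26 * gdp_2000    # combined market cap of the 2000 leaders
leaders_today = 0.67 * gdp_today  # combined market cap of today's leaders

print(f"2000 leaders: ~${leaders_2000:.1f}T of market cap")
print(f"Today's leaders: ~${leaders_today:.1f}T of market cap")
```

On those rough assumptions, today's leaders carry not just a higher share of GDP but several times the absolute market value of the 2000 cohort, which is the scale concern the question above is raising.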

 

Investment Implications

We strongly believe these factors suggest that the simplistic future implied by current valuations, that today’s leaders will maintain their dominance from both a competitive and monetization perspective, is at best naïve. When combined with unusually high concentration among index and quasi-index investors, the eventual reality is likely to be quite painful. To illustrate what that might look like, two issues stand out: 1) the magnitude of declines as valuations compress to, and likely through, more reasonable levels, and 2) the years it can take for overvalued investments (even successful businesses) to return to prior peak prices.

Using the leading names from the late 1990s (Amazon, Apple, Cisco, IBM, and Microsoft), the median stock price decline from 12/31/99 to 12/31/02 was -71.7%. The median time it took for these stocks to return to their 12/31/99 prices was nearly 8.5 years. With more than 30% of an equity portfolio allocated to today’s leading names, the damage to indexed portfolios would be significant, to put it mildly.
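The two statistics above (peak-to-trough decline and time to regain the prior peak) can be computed mechanically from any price history. The sketch below shows the computation on a purely illustrative price path; it is not actual data for any of the names cited.

```python
# Minimal sketch of how the drawdown and time-to-recovery statistics
# above can be computed from a price history. The series is illustrative
# only, not actual data for any of the stocks named in the text.

def decline_and_recovery(prices, peak_idx):
    """Return (fractional decline from the peak to the later trough,
    number of periods until the peak price is regained, or None)."""
    peak = prices[peak_idx]
    after = prices[peak_idx + 1:]
    trough = min(after)
    decline = trough / peak - 1.0
    recovery = next((i + 1 for i, p in enumerate(after) if p >= peak), None)
    return decline, recovery

# Illustrative path indexed to 100 at a 12/31/99-style peak:
prices = [100, 60, 35, 28, 40, 55, 70, 85, 101]
decline, periods = decline_and_recovery(prices, 0)
print(f"Decline: {decline:.1%}; periods to recover: {periods}")
```

Applied to annual closing prices for a basket of stocks, taking the median of each statistic across names yields figures directly comparable to the -71.7% median decline and roughly 8.5-year median recovery cited above.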

At the same time, with near-term dynamics and market structure continuing to push AI names higher, deciding how best to allocate capital is hardly simple. We also believe that inconsistency of approach is perhaps the most dangerous scenario. Those who remain consistently underweight the AI leaders and those who maintain a more index-centric stance are both likely to fare better than investors who shift between the two in reaction to changing conditions. The worst outcome is likely to be one in which an investor stays underweight for a time, grows frustrated with ongoing under-performance, and then moves to a market weight or higher.

Fundamentally, there is little doubt in our minds that those with concentrated holdings in the mega-cap AI winners will suffer sub-par returns, both in absolute terms and relative to other options. The key question is when and how current conditions change on a sustained basis. It may have started earlier this year, or it may still be years away.

The following comments from Mark Zuckerberg and Larry Page are worth considering:

“If we end up misspending a couple of hundred billion dollars, I think that that is going to be very unfortunate, obviously. But what I’d say is I actually think the risk is higher on the other side.” - Mark Zuckerberg, Meta CEO

 

“I’m willing to go bankrupt rather than lose this race.” - Larry Page, Google co-founder

But while the analysis above focuses on valuation, capital intensity, concentration, and market structure, a final issue can perhaps best explain why these risks are so often discounted or ignored and the prevailing narrative so readily embraced: the way AI is being talked about. The language surrounding AI has taken on a life of its own, shaping expectations in ways that often outpace what the technology has proven capable of delivering.

 

The Architecture of Hype

“The menu is not the meal.” — Alan Watts

In a recent Big Think article, the author describes a broader trend of how modern discourse keeps inflating its vocabulary, stretching words like “trauma,” “harm,” and “woke” until they break past the parameters that once made them meaningful. It is a useful starting point for thinking about artificial intelligence, because the idea of AI (and especially so-called “AGI”), or at least the public perception of such, is built on its own expanding, co-opted nomenclature. The terms keep getting bigger, blurrier, more diffuse, and even mystical, while the underlying technology remains firmly mechanical and subject to that reality.

We are told AI will unlock “prosperity,” “innovation,” “limitless intelligence,” and “human transformation.” But these claims operate like wellness slogans on the label of an organic face cream: evocative, inspiring, and tenuously connected to any measurable criteria. These are simply terms instinctively associated with good feelings and positivity. But when a word can mean almost anything, it often ends up meaning almost nothing. AI hype has adopted this pattern wholesale.

As pointed out by journalist Karen Hao, author of Empire of AI, in her latest TechCrunch feature: certain corners of the AI world speak in promotional absolutes, offering a kind of secular faith that promises salvation if we just believe enough in the coming, always-imminent-and-just-out-of-grasp breakthrough. What stands out is not simply that people are optimistic, but that they speak in a language couched in inevitability. The idea that AGI is “close,” “emerging,” or “right around the corner” is treated as an article of belief rather than an empirical statement (without even a clear consensus on what intelligence, let alone artificial intelligence, is); and the more that those espousing this evangelical worldview are platformed uncritically, the harder it is for the general public to separate authenticity from self-serving mumbo jumbo.

Of course, this is where hype acquires its yoga-babble tone. It’s not enough to say AI may accelerate research or improve workflows; instead, it becomes a force that will “reset civilization,” “liberate creativity,” “transform consciousness,” or whatever phrase floats most gracefully into the keynote slides; the language glides effortlessly ahead of the engineering. And as the dialogue around AI grows more cosmic, it becomes easier for people to develop grand interpretations (some utopian, some paranoid, some akin to their own conspiracy theory) that remain unmoored from the messy, incremental nature of real technological progress. The gap between what systems actually do, can’t do, or might one day do and what enthusiasts faithfully believe they will soon do becomes fertile ground for speculation of every kind. That vagueness is the medium through which hype spreads.

UNESCO’s reflection on AI “myth and reality” helps remind us that much of the world is encountering AI not through research papers or detailed benchmarks but through stories (“artificial intelligence” itself being a term designed to catch attention in the 1950s well in advance of any tangible achievements); and stories, when repeated loudly enough, start to sound like facts. This is how vague optimism shifts into collective expectation and the tyranny of prevailing opinion. AI will make us richer, happier, healthier, safer; the verbs are powerful, their meaning conveniently undefined, and the evidence as yet objectively thin.

None of this means the AI promise is empty or without merit. Tools are certainly improving, models are more and more capable in various areas of application, and there are real breakthroughs underway in medicine, logistics, and scientific modeling. But the gap between these concrete advances and the verbal aura surrounding AI is widening every week. While the technology grows more complex, the words grow even loftier, and the public grows less capable of distinguishing long-term, aspirational, moon-shot projections from imminent, evidently feasible achievement.

This gap is also where the investment danger thrives. Hype is a powerfully dynamic force, operating as both a cultural irritant and capital allocator. When the narrative runs ahead of fundamentals, money most often follows the narrative, often blindly. A handful of stocks absorb the entire story, pulling index valuations into rarefied air while the broader market (the one with real breadth and pricing diversity) is ignored; investors start mistaking the index for the market, momentum for truth, and concentration for strength, effectively allowing risk to camouflage itself.

Hype-driven markets also punish discipline; diversification looks old-fashioned while valuation looks irrelevant. Contrarian thinking appears painfully naïve. The consensus crowd builds a cathedral instead of a Bloomberg terminal around a story, and questioning what’s been working becomes almost heretical. But investment history is blunt: when a sector’s vocabulary grows increasingly nebulous while its cash flows remain stubbornly normal, prices eventually reconnect with reality.

There is nothing wrong with optimism; the world can’t run on perpetual pessimism. But when optimism becomes prophecy, and prophecy becomes allocation, capital typically flows toward what sounds profound rather than what proves durable. AI will certainly produce some major winners and help reshape or upend various industries or roles therein, but it will not rewrite the laws of financial gravity.

What the current moment demands is not cynicism but clarity. If the affective terminology around AI continues to expand unchecked, absorbing every hope, crisis, and speculative gambit into its orbit, investors risk anchoring their decisions to abstractions and self-serving promises rather than cash flows, competitive dynamics, or actual adoption.

A future shaped by AI might be brighter or darker than we expect; it might be a lot more or less revolutionary than many think. But whatever it becomes, it deserves more than superlative metaphors, blind faith, or divination. It deserves language that means something and well-reasoned, responsibly considered action, with capital deployed according to clear-eyed discipline, not starry-eyed devotion.

 

1 This refers to the rise of multi-manager hedge funds with very short time horizons, the increase in passive management, and the diminished role of fundamental investors focused on intermediate- to long-term corporate outcomes.