What to Expect From AGI: My Speculations
Software | 12 min read (2951 words).
AGI attracts sweeping predictions, yet many discussions overlook the constraints that shape real intelligence and new technologies alike: cost, energy, maintenance and the deep integration into human civilisation. AGI may well be achievable, but perhaps far more expensive, fragile and bounded than the grand narratives suggest. What follows is one plausible future built on that premise.

Introduction
Artificial intelligence investments have skyrocketed in the past few years, and underlying them are significant assumptions about the future and nature of AGI (Artificial General Intelligence). Some expect AGI to deliver abundance (widespread job automation), others anticipate catastrophe (a superintelligence apocalypse), while a third group dismisses the entire concept as metaphysical folly (machines cannot truly think and never will).
None of these positions seem complete or entirely realistic.
This blog post explores a fourth path – still speculative, but grounded in experience and plausible evidence. It begins by assuming AGI is possible, yet predicts that it will prove costly, delicate and limited in ways that are not yet visible. The argument is not that AGI will fail, but that its eventual role may be smaller, narrower and more expensive than is commonly imagined. This outlook, which could be called AGI realism, occupies the middle ground between the more optimistic and more pessimistic views.
Intelligence exists and it is not cheap
Human intelligence offers the clearest existing example of general reasoning, yet it also reveals how costly such capability can be. A single mind requires decades of education and socialisation, large metabolic demands, stable institutions, supportive relationships and continuous cultural learning and integration. Intelligence is not a standalone module but an emergent property of the entire human civilisation. Human intelligence is most capable when combined across groups, institutions and generations. Humans isolated from society and stimulation quickly lose their cognitive edge.
Still, human cognition arrives essentially ‘for free’ because it is a natural by-product of what humans require to thrive: nourishment, safety, socialisation, education and community. We need to maintain human cognition regardless of its necessity for work or productivity, simply because human life is valuable.
Artificial systems, however, must bear the full cost of their capability directly: compute, training, data, alignment, maintenance, infrastructure and inference. They do not already exist regardless of their productive use.
We also know that human intelligence is highly diverse. Human civilisation invests heavily in building cognitive capabilities across the population, but each individual still has both strengths and weaknesses. Humans perform best when collaborating with others who have complementary skills and when deeply integrated into society. The scale of investment needed to produce and maintain collective human intelligence may well be a fundamental property of any general intelligence.
Energy and compute costs
General intelligence, however it is implemented, will require substantial computation and therefore substantial energy. We do not yet know the exact relationship between capability, compute and power consumption, but there is little reason to believe AGI will be achieved without large-scale computation and significant energy use. This applies to both training and inference, which may also be more tightly coupled than they are today.
The cost of AGI will therefore depend heavily on the cost and availability of compute and energy. Without a radical breakthrough in either, AGI is unlikely to become cheap. The only alternative would be a fundamentally more efficient approach to intelligence – something far beyond current methods. While this is possible in principle and the human brain hints that high efficiency is attainable, hard limits on computational efficiency are likely to exist, just as they do for all other algorithms.
AGI is therefore inherently tied to the economics of computation and energy.
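To make the economics concrete, here is a toy back-of-envelope comparison of running costs. The human brain’s roughly 20-watt power budget is well established; the cluster size and electricity price below are pure assumptions chosen for illustration, not estimates of any real system:

```python
# Toy comparison of electricity cost per hour of 'cognition'.
# Every number is an illustrative assumption, not a measurement.

HUMAN_BRAIN_WATTS = 20          # rough metabolic power of a human brain
AGI_CLUSTER_WATTS = 500_000     # assumed draw of a hypothetical AGI cluster
USD_PER_KWH = 0.10              # assumed electricity price

def cost_per_hour(watts: float, usd_per_kwh: float) -> float:
    """Electricity cost in USD for one hour of operation."""
    return watts / 1000 * usd_per_kwh

human = cost_per_hour(HUMAN_BRAIN_WATTS, USD_PER_KWH)
agi = cost_per_hour(AGI_CLUSTER_WATTS, USD_PER_KWH)
print(f"human brain: ${human:.4f}/h, AGI cluster: ${agi:.2f}/h, "
      f"ratio: {agi / human:,.0f}x")
```

Even if the assumed cluster figure is off by an order of magnitude in either direction, the gap to a 20-watt brain remains enormous – and this counts energy alone, before hardware, data and maintenance.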
The reward problem
Another profound challenge for AGI lies in how general intelligence is formed. Human cognition did not arise from an unguided process. It was shaped by evolution acting over millions of years, with survival and reproductive success serving as an extraordinarily rich and unforgiving reward signal. This fitness criterion did not merely select for intelligence; it sculpted the entire set of capacities that make human intelligence workable. Curiosity, caution, social bonding, pattern recognition and long-term planning emerged because they improved fitness across countless environments.
Artificial systems do not inherit anything like this. They still require a reward function – not necessarily one embedded inside the system, but some external criterion that guides the training process and determines what counts as improvement. Even if we had a perfect learning algorithm, we would still need a way to judge whether the behaviour it produced was better or worse. Without such a fitness measure, there is no direction to optimise towards.
Designing this reward structure is not a peripheral detail. A viable fitness criterion for a general system must make sense across a vast range of tasks, contexts and behaviours. It must encourage the emergence of stable, useful capabilities without incentivising pathological shortcuts. In practice this is as difficult as designing AGI itself. The reward problem is effectively AGI-complete.
Evolution solved this through an unimaginably broad search: billions of agents, countless trials, immense diversity of pressures. Re-creating anything with comparable robustness will require an equally deep understanding of what ‘success’ should mean for an artificial agent.
General intelligence cannot be trained without a guiding signal. Providing that signal – and ensuring it leads somewhere we actually want to go – may prove to be one of the fundamental challenges of AGI and inherently costly.
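As a sketch of what this means in practice, consider the bare interface a training process needs. The names below (agent, env and their methods) are hypothetical placeholders rather than any real framework; the point is simply where the reward signal sits:

```python
from typing import Any, Protocol

class RewardFunction(Protocol):
    """The external criterion that guides training. For a general agent
    it must be coherent across arbitrary tasks, contexts and behaviours,
    which is why designing it is arguably AGI-complete."""
    def __call__(self, state: Any, action: Any, outcome: Any) -> float: ...

def training_step(agent: Any, env: Any, reward: RewardFunction) -> None:
    """One generic learning step. Without `reward`, the final `update`
    has no direction to optimise towards."""
    state = env.observe()
    action = agent.act(state)
    outcome = env.apply(action)
    agent.update(reward(state, action, outcome))
```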
The diminishing returns problem
Scaling has delivered impressive gains so far, yet diminishing returns are already visible. Improvements become smaller while costs and data needs rise sharply. More compute does not yield proportionally greater capability. To reach AGI we will obviously have to move beyond scaling current systems, but AGI is unlikely to end up being cheap regardless of how it is built simply due to the nature of general intelligence itself (it is inherently complex and computation-intensive).
Even if we can scale up AI capability, the economic benefits of additional intelligence are not linear. Many tasks will see little to no improvement beyond a certain threshold of capability. If the cost of full AGI ends up being high, the benefit compared to specialised AI must be correspondingly greater to justify its use. This may not be the case for most practical applications.
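A toy power law shows the shape of the problem. The exponent below is an assumption picked for illustration, loosely inspired by published neural scaling laws rather than fitted to any real data:

```python
# Toy scaling curve: loss falls as a small negative power of compute.

def loss(compute: float, alpha: float = 0.05) -> float:
    """Illustrative power law; alpha is an assumed exponent."""
    return compute ** -alpha

previous = None
for c in [1e21, 1e22, 1e23, 1e24]:
    current = loss(c)
    gain = f", gain {previous - current:.4f}" if previous is not None else ""
    print(f"compute {c:.0e}: loss {current:.4f}{gain}")
    previous = current
# Each 10x step costs ten times more yet buys a smaller improvement
# than the last: diminishing returns in textbook form.
```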
AGI as an Apollo programme
The Apollo missions proved that humans could reach the moon. They also demonstrated how expensive such feats are. Humanity stopped going back because the return on investment was too low, not because the technology was impossible.
AGI may follow a similar path. A few landmark systems might be built, celebrated and used in specialised domains where cost is no obstacle. Yet broad commercial deployment requires something else entirely: low cost, reliability and good-enough performance. AGI may never satisfy those requirements.
In essence, AGI could become an extraordinary scientific achievement in the same category as the moon landing and a decisive milestone in human history. At the same time, it may end up being the most expensive toy in the world, rather than an economic miracle.
What is interesting about this analogy is that the Apollo programme was driven by huge national investment and political incentives. AGI development is currently funded by private capital and commercial incentives, a model that has worked well for other technologies but only makes sense if investors can recoup their costs with a reasonable return. This post argues that such a bet looks risky at present.
Chaos, weather and hidden constraints
New technologies often begin with enthusiasm and assumptions of boundless potential (paired with critics who are just as sceptical). Early computer scientists believed perfect weather prediction was merely a matter of feeding equations into faster machines. Only later did the deeper structure of the atmosphere reveal itself. Weather turned out to be a chaotic system with inherent limits on predictability, no matter how much compute, data and ingenuity were applied. This is the now-famous butterfly effect: tiny variations in initial conditions lead to wildly different outcomes, making accurate long-term forecasts impossible.
There is a pattern to be found here: our first impressions of a system’s potential rarely match its eventual landscape of limits. In the beginning, it is easier to see the possibilities than the constraints. We cannot yet know what equivalents to the butterfly effect or to NP-complete problems (which are widely believed to admit no efficient exact solutions) might exist in the realm of AGI.
As AI systems grow more capable and more complex, they will almost certainly reveal boundaries that are invisible today. Intelligence is shaped by many interacting factors that create emergent behaviours we cannot fully anticipate in advance. The constraints on both what is possible and what is useful will not be known until much later and then they will appear almost obvious (at least to the initiated expert) in hindsight.
Why specialised AI captures most of the value
Markets reward reliability, cost-efficiency and predictable behaviour. They prefer tools that are easy to audit, safe to deploy and optimised for a specific purpose. Narrow AI already excels here: translation, coding assistance, content generation, fraud detection, chatbots and pattern recognition.
General intelligence is expensive, harder to align and offers broader capability without necessarily offering better value. Historically, technological revolutions have been driven by specialised products that are cheap and sufficient for their task, rather than by general-purpose systems that do everything moderately well at a higher cost (in contrast to prototypes, which are often more flexible and expensive). This is why most people commute by train or car rather than by helicopter, and carry a smartphone rather than a gaming PC. At the same time, we prefer general-purpose computers over a dozen single-purpose devices. AI is likely to find a similar sweet spot between specialisation and generality, balancing cost and reusability.
Pure intelligence is valuable, but in most economic contexts it is not sufficient on its own, regardless of sophistication. Specialised systems with proper integrations will therefore meet the practical demands of institutions as well or better, at a fraction of the cost, and end up capturing most of the value.
AGI, if achieved, may become valuable in scaling certain high-budget ambitious endeavours in science, research and exploration. However, it may still struggle to compete with less general AI across the bulk of the economy, where cost and predictability matter more and it remains efficient to keep humans in the loop.
Why AGI will not replace all work
Forecasts of full automation often assume that once AGI exists, it will naturally displace human labour wherever it is capable. In reality, labour markets operate through substitution: organisations adopt the option that offers the best balance of cost, reliability and ease of integration.
Humans, despite their limitations, bring with them a wide envelope of general capability shaped by ordinary life. Education, socialisation and experience provide a foundation of judgement, context understanding and behavioural stability at no incremental cost to employers. These capacities emerge because humans must develop them simply to function in society.
AGI, by contrast, must carry its full cost into every task it performs. Running and maintaining a broadly capable system may be entirely feasible, yet there is no clear reason to expect it will be cheaper than employing a person whose capabilities arrive as a natural consequence of human development. Capability alone is not enough; the system must also produce a superior cost–benefit profile to justify substitution at scale.
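A toy break-even model illustrates the calculus. Every figure below is hypothetical, chosen only to show that a faster, more capable system can still lose on cost:

```python
# Toy substitution model: adopt AGI for a task only if its expected
# cost per completed task beats the human alternative.

def cost_per_task(hourly_cost: float, tasks_per_hour: float,
                  error_rate: float, rework_cost: float) -> float:
    """Expected cost per task, including rework triggered by errors."""
    return hourly_cost / tasks_per_hour + error_rate * rework_cost

human = cost_per_task(hourly_cost=40.0, tasks_per_hour=5,
                      error_rate=0.02, rework_cost=20.0)
agi = cost_per_task(hourly_cost=600.0, tasks_per_hour=50,
                    error_rate=0.05, rework_cost=20.0)
print(f"human ${human:.2f}/task vs AGI ${agi:.2f}/task")
# With these assumed numbers the slower human still wins on cost:
# capability alone does not decide substitution.
```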
A further economic factor is the cost of advanced inference. Highly capable AI models do not presently enjoy the extreme economies of scale seen in services like web search, where marginal costs are negligible. Inference costs often rise with capability because larger or more general models demand more compute and memory per query. This creates a structural headwind against widespread replacement: deploying AGI everywhere may remain expensive even if the system performs well and training costs are fully amortised across use cases.
In structured, narrow domains where scale drives efficiency, advanced AI may be adopted rapidly. In roles where accountability, context sensitivity or institutional trust are central, the economic calculus is quite different. Replacing a human might require reproducing a broad set of stable behaviours and contextual understanding, which may not be cost-effective.
None of this implies limits on what AGI may eventually be capable of. It reflects how organisations decide: they adopt the option that delivers the most value for its cost. AGI will substitute aggressively where it has a clear economic advantage and far less where humans remain the more efficient choice.
Work is a mosaic of tasks, incentives and institutional expectations. AGI will reshape parts of it, but large-scale replacement depends on economics, not possibility.
AGI versus narrow AI: a crucial distinction
A clarification is needed here. This blog post concerns AGI (artificial general intelligence), capable of flexible reasoning across arbitrary domains. This is different from today’s large language models or other specialised AI systems. Current AI, impressive as it is, remains narrow. It excels at specific tasks but lacks the open-ended adaptability that defines general intelligence. The costs of narrow AI inference tell us little about what true generality will cost, because the computational and architectural demands are likely to be fundamentally different.
Narrow AI will continue to transform industries. It has the advantage of being more affordable, already exists and will continue to improve, becoming more capable and better integrated. This post argues that narrow AI will dominate most economic activity for the foreseeable future and that prohibitive costs will be a major obstacle only for AGI.
What is now popularly called AGI used to be referred to as strong AI in previous decades, with weak AI as its opposite. I’m still partial to those old terms, but AGI works well and narrow AI (or ANI) is the logical opposite. At this point, AGI and ANI are also the less ambiguous terms, as usage has shifted slightly over the years.
We can briefly and informally define AGI as: an artificial system capable of flexible, open-ended reasoning and learning across arbitrary domains, at a level broadly comparable to human general intelligence.
AGI, if achieved, will likely matter in domains where cost is secondary to capability. Yet even here, its influence is likely to be profound but bounded. Narrow AI is more promising for broad practical applications. AGI may resemble fundamental research more than a product category with a defensible moat. Both are unlikely to produce the kind of monopolies that earlier technologies created through network effects or economies of scale (e.g. Microsoft Windows, Google Search, Facebook).
What I actually expect
The future will probably be shaped by slow, incremental progress and structural constraints rather than dramatic transformation. I do not expect a sudden arrival of AGI that changes the world overnight. Instead, I anticipate gradual progress in AI capabilities, infrastructure and integration over decades, with limitations in cost and fundamental capability revealing themselves along the way.
AGI may be built, yet remain expensive, niche and scientifically valuable rather than commercially dominant. Narrow AI will continue expanding across industries, supported by improved infrastructure and deeply embedded in everyday workflows. Institutions will adapt, but people will remain central to trust, legitimacy and collective decision-making.
The world after AGI may feel surprisingly familiar – similar to the world after space travel. What will change daily life more profoundly is likely to be the steady improvement of more specialised and limited AI systems that are cheap and reliable enough to be widely adopted. AGI could be as much a part of everyday life in 2100 as space travel is today.
For AGI to become a mass-market technology¹, we would need a revolution not only in AI but also in cost-effective computation. This might happen if computation becomes several orders of magnitude cheaper through new hardware breakthroughs, but the probability at this point seems low. Without such a shift, AGI is unlikely to become affordable enough for mass adoption within the next hundred years.
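To give ‘several orders of magnitude’ a concrete shape, here is a toy gap calculation. Both prices are assumptions for illustration, not estimates:

```python
import math

# Toy gap between a hypothetical AGI query cost and mass-market pricing.
assumed_agi_cost_per_query = 1.00   # USD, hypothetical
mass_market_target = 0.001          # USD, roughly web-search territory

gap = assumed_agi_cost_per_query / mass_market_target
print(f"required cost reduction: {gap:,.0f}x "
      f"(~{math.log10(gap):.0f} orders of magnitude)")
```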
In many ways, AGI might become the flying cars of our time: technically feasible (eventually), yet prohibitively expensive and impractical for everyday use. This is the difference between possibility and practicality. Just as the helicopter is a triumph of engineering that never became mass-market, AGI may prove extraordinary and achievable, but bounded in use.
Conclusions
The future of AI will be shaped by the same steady forces that govern everything else: physics, energy, economics, mathematics, culture, institutions and human civilisation. Progress will continue where it is cheap, reliable and transformative. AGI research remains worthwhile, but it should be viewed as ambitious science and a milestone in human history rather than a profitable product category. Technical revolutions do happen, but in this case progress is more likely to be incremental and constrained by rising costs. Mass adoption relies on low cost; if AGI cannot deliver that, its role will remain limited. If it can, the real revolution will be in delivering AGI affordably and reliably at scale.
Intelligence may be duplicated in silicon, given enough effort. Meaning, collaboration, responsibility, civilisation building and cost-effective general intelligence will remain deeply human for the foreseeable future.
This is my current view. Time will reveal how close it comes to reality.
1. By mass market, I mean that anyone with a smartphone can access affordable AGI, much as anyone can access ChatGPT or even web search today. Not necessarily free, but cheap enough that billions of people can use it regularly. ↩︎

