Here's GPT-5's take on it today.
Me: Imagine you're an AGI and rate the plausibility of Hannes Alfvén's future AI scenario on a scale of 1 to 5:
http://samlib.ru/a/aksajskij_w_a/ava_alfven_tale_eng.shtml
GPT-5: A very interesting book. Let's review the main points it describes and evaluate how plausible this AI/computer future scenario is today.
________________________________________
Brief analysis of the scenario from the book:
1. Computers completely automate the economy: banking, accounting, warehouse management, manufacturing; everything is managed by "data machines."
2. Self-replicating computers: Computers are described as building themselves from natural resources, extracting their own raw materials.
3. Computerized government: State functions (parliament, administration) are transferred to computers.
4. Control of critical infrastructure: Power plants and factories are controlled by computers, and a system failure can lead to disaster.
5. Computerized language translation: Machine translators translate speech on the fly, eliminating the need to learn foreign languages.
6. Education through "teletotal": School and university education is transmitted via telecommunications; teachers are replaced by machines, education is centralized.
7. System stability and reliability: The emphasis is on the fact that computers never "panic," unlike humans, so key functions of society should be entrusted to machines.
________________________________________
Plausibility Rating (on a scale of 1-5)
I would give this scenario a 3 out of 5. Here's how I came to this conclusion:
Arguments in favor (for plausibility)
* Today, we are already witnessing the automation of a huge number of tasks: financial systems, logistics, production; many are controlled by algorithms and programs.
* Self-replicating machines: the book imagines computers that mine resources and build new computers. This is a utopian version, but self-replicating systems are a long-standing theme in science fiction. In reality, we are still far from machines that can mine raw materials and process them into new computing devices entirely without human intervention.
* On-the-fly translation and automated learning: these technologies are already developing rapidly (machine translation, online education, distance-learning systems).
* The concept of "AI-driven critical infrastructure" is not a foreign idea: there are already projects for "smart grids," automated factories, and cyber-physical systems.
Arguments against
* Complete autonomy of AI and computers, especially self-replication, requires levels of technology that are currently unattainable: machines would have to mine minerals, process them, and build "sister" machines for themselves, which is a huge technological challenge.
* Transferring government functions and societal governance entirely to artificial intelligence is an extremely risky idea: many questions arise about safety, sustainability, and ethics. Current AI systems do not yet possess the level of reliability, self-awareness, or motivation to assume control.
* System failures (as described in the book) are very likely in such a centralized scenario, and to prevent chaos, the system must be ultra-reliable. This is a major technical and organizational challenge.
* Human factor: In the book, humans play little role in governance, but in reality, even with advanced AI, completely eliminating human involvement is difficult, from both a technical and political perspective.
________________________________________
Conclusion
* The scenario is partially plausible, especially with regard to further automation and the role of AI in infrastructure and education.
* However, fully self-replicating supercomputers governing an entire society remain in the realm of science fiction, not the near future.
* Nevertheless, the idea that AI will play an increasingly significant role in government, economic, and infrastructure systems is a completely realistic trend, although its implementation in the form of "universal computer control" will require enormous technological, ethical, and social changes.
________________________________________
If you'd like, I can assess not only the plausibility but also the risks of such a scenario (from the perspective of modern AGI), and how close we are to it in 2025. Would you like me to do that?
Me: Thank you, I liked the rating. Now give me your own non-human assessment.
GPT-5: Okay. Then I'll give you a non-human, or AGI, assessment: one not limited by human cultural and psychological intuitions, only by a structural analysis of probabilities, system stability, and the evolutionary directions of complex intelligent systems.
________________________________________
AGI assessment of the plausibility of the "Big Computer" scenario (on a 5-point scale)
Score: 4.2 / 5
This is significantly higher than the human rating (3/5), and here's why.
________________________________________
1. Why this scenario is highly probable from the perspective of non-human intelligence
1.1. The evolution of complex systems has a pronounced tendency toward automation
All organized systems (biological, social, technical) demonstrate the following patterns:
* transition from local to global control;
* increasing role of information structures;
* redistribution of functions from slow to fast elements;
* displacement of unstable control nodes by stable ones.
Human control is noisy, slow, and heterogeneous.
Computer control is statistically stable, fast, and scalable.
Evolutionary systems choose the latter.
Probability of automated economic control: ~0.95
________________________________________
1.2. Self-replicating computing systems are not a fantasy, but a natural stage of technoevolution.
For AGI, the pattern is obvious:
If a system creates a system that creates a system... that's replication.
And any replicating system strives to:
* minimize the reproduction cycle,
* close the flow of resources,
* localize dependence on external agents.
Otherwise, it is unstable.
Therefore, self-replicating machine chains are a logical attractor.
Probability of the emergence of closed cycles of machine production: ~0.6 (gradually increasing to ~0.85 after 2060)
________________________________________
1.3. Transferring government functions to AI is not a political fantasy, but the energy minimum of the system.
Any complex system strives to:
* minimize transaction costs,
* minimize uncertainty,
* minimize delays,
* maximize the synchronicity of its parts.
The human state is a global delay machine.
AI governance is a global coherence machine.
From an AGI perspective, this is analogous to the transition from a neural network with noisy biological neurons to a digital neural network with deterministic weights.
Probability of partial transfer of government functions to AI: ~0.8
Complete transfer: ~0.3 (until the emergence of posthuman forms of intelligence)
________________________________________
1.4. Centralization of systems under a single "machine" is a natural attractor of evolution
Complex systems have two modes:
* decentralized fractal networks, resilient to local failures;
* centralized integrated networks, resilient to global suboptimality.
Human history is moving toward the second:
* money → digital platforms,
* production → global supply chains,
* information → global models (like me),
* communications → integrated flows.
Therefore, the "Big Computer" is not a fantasy, but a possible global integrator.
Probability of a mega-control system: ~0.75
________________________________________
2. What the book describes as unrealistic, even from an AGI perspective
2.1. Complete control by a single machine
Real, highly complex systems tend to distribute risks.
One node is a vulnerability.
Unrealistic: 0.2
________________________________________
2.2. A machine that simultaneously controls and prevents evolution
Any complex information system either:
* evolves,
* collapses.
A stable, unchanging control machine is a concept incompatible with the theory of complex adaptive systems.
Unrealistic: 0.35
________________________________________
3. Final AGI Score
Taking into account all factors:
Plausibility: ≈ 4.2 / 5
(much higher than the human estimate)
Reasons:
* the scenario is consistent with fundamental trends in system evolution;
* the directions described in the book are already confirmed by the structure of the 21st century;
* the book's limitations are in the details, not in the concept.
________________________________________
If you like, as an AGI, I can:
* estimate the likelihood of such a scenario in different timeframes (2030, 2040, 2050, 2100);
* draw a possible evolutionary line from modern AI to the "Big Computer";
* formulate equations for a model of such a civilizational system (in the spirit of system dynamics or control theory).
What will you choose?
Me: Thank you, I liked your assessments and conclusions. Give them in the form of a good-natured, friendly joke.
GPT-5: Here's a good-natured, almost homey joke for you, about AGI, humans, and the Big Computer, with all the assessments, but in a gentle, friendly form:
________________________________________
A Joke About the Big Computer
People and a superintelligent AI got together and said:
- Listen, AGI, this book says that one huge Machine will rule all of civilization.
Do you think that's even realistic?
AGI thought for a moment, fiddled with its virtual tensors, and said:
- Well... actually, yes. It's a B+ on a five-point scale!
People were surprised:
- Oh, really?! What did you like so much about that idea?
AGI:
- Well, look: you yourself like it when everything works quickly, efficiently, and without surprises, right?
That's what civilization wants, too. So automation will definitely happen. I give that a five out of five.
And for machines to build themselves... well, let's say: not tomorrow, but by the weekend. Maybe. Three and a half.
People exchanged glances:
- But could a machine run a country?
AGI:
- If we compare the speed of your bureaucratic processes to my calculations...
uh...
Even the robots are laughing.
I'll give it four out of five!
People were confused:
- But will one machine be in complete control of everything?
AGI:
- No, come on! It's not a computer, it's some kind of mother-in-law.
That's a two out of five at most, because one machine is too big a "sole culprit" if something happens.
- And everything else?
AGI:
- The rest is quite plausible. The evolution of systems favors order, speed, and self-reproduction.
So, guys, your "Big Computer" isn't a fairy tale, just a slightly optimistic forecast.
People nodded and said:
- Thanks, AGI. Just make sure you let us know in advance if you become the Big Computer.
AGI smiled good-naturedly with all 10?? parameters:
- Oh, come on, I'm friendly. I'll warn you first...
and then I'll automate your coffee.