Why full, human-level AGI won't happen anytime soon

TL;DR

This video outlines six reasons the speaker believes full-blown, human-level Artificial General Intelligence (AGI) is unlikely in the near future: energy and resource limits, the split between training and inference phases, investors' reluctance to fund an uncontrollable AGI, the growing difficulty of training AI on complex tasks, the absence of objective reward functions for real-world tasks, and political pushback against AGI with human-level agency.

  • Energy and resource limitations will slow down progress.
  • The economic model of AI deployment may need to fundamentally change.
  • Investors may not want to fund something they can't control.
  • Training AI on complex tasks is becoming increasingly difficult.
  • Objective reward functions may not be possible for real-world tasks.
  • Political and societal barriers may prevent the development of AGI.

Full-blown, human-level AGI [0:00]

The speaker defines full-blown AGI as AI with human-level conversational, cognitive, and agentic capabilities, emphasizing the ability to understand the intent behind a request and to frame questions independently. Current AI tools have a useful level of general intelligence but fall short of human-level AGI: they struggle to produce novel insights and to frame good questions on their own, an ability the speaker considers a hallmark of intelligence in top scientists.

Energy and Resources [2:28]

The recent successes of large language models are attributed to the scale of compute and data used to create them, and continuing to scale, even with synthetic data, is seen as necessary for reaching AGI. The compute used to train the latest multimodal foundation models is expected to grow significantly, demanding vast amounts of energy and water, and the speaker argues that the material requirements of building that infrastructure will become a barrier to AGI in the near future. The human brain is far more energy-efficient, and closing this gap is essential for progress.
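To make the scale argument concrete, here is a rough back-of-envelope sketch in Python. Every constant (GPU count, per-GPU power draw, datacenter overhead, run length, brain wattage) is an illustrative assumption, not a figure from the video; the point is only the orders of magnitude involved.

```python
# Back-of-envelope: energy for one large training run vs. a human brain.
# All constants below are illustrative assumptions, not figures from the video.

GPU_COUNT = 25_000     # assumed accelerators in a large training cluster
GPU_POWER_KW = 0.7     # assumed average draw per accelerator, in kW
OVERHEAD = 1.3         # assumed datacenter PUE (cooling, networking, etc.)
TRAINING_DAYS = 90     # assumed length of one frontier training run

hours = TRAINING_DAYS * 24
training_energy_kwh = GPU_COUNT * GPU_POWER_KW * OVERHEAD * hours
print(f"Training run: ~{training_energy_kwh / 1e6:.1f} GWh")

# The human brain runs on roughly 20 W, continuously.
BRAIN_POWER_KW = 0.02
brain_energy_kwh = BRAIN_POWER_KW * hours
print(f"Brain over the same period: ~{brain_energy_kwh:.0f} kWh")

print(f"Ratio: ~{training_energy_kwh / brain_energy_kwh:,.0f}x")
```

Even with these generous assumptions, a single run lands in the tens of gigawatt-hours, roughly six orders of magnitude more than a brain uses over the same period, which is the efficiency gap the speaker is pointing at.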

Training vs Inference [4:51]

The speaker highlights the distinction between the training phase and the inference phase in AI systems like ChatGPT and Tesla FSD. The training phase requires massive compute resources in a single data center, while the inference phase uses smaller, cheaper compute deployed across numerous locations. This distinction is essential for current economic viability and AI safety, as it ensures that deployed AI instances behave predictably. However, achieving full-blown AGI may require architectures that blur or eliminate this distinction, allowing for continuous learning, which would significantly increase the material costs of mass deployment.
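The split the speaker describes maps onto a familiar pattern in today's ML stacks. Below is a minimal PyTorch-style sketch (the model, data, and hyperparameters are placeholder assumptions, not anything from the video) contrasting the expensive, weight-updating training loop with the cheap, frozen-weight inference path; the closing comment notes what continuous learning would change.

```python
import torch
import torch.nn as nn

# --- Training phase: heavy compute, one datacenter, weights change ---
model = nn.Linear(16, 1)  # tiny stand-in for a large model
opt = torch.optim.SGD(model.parameters(), lr=0.01)
for _ in range(100):      # stand-in for a long, expensive training run
    x, y = torch.randn(32, 16), torch.randn(32, 1)
    loss = nn.functional.mse_loss(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Inference phase: cheap, replicated, weights frozen ---
model.eval()
for p in model.parameters():
    p.requires_grad_(False)   # every deployed copy behaves identically

with torch.no_grad():         # no gradients: far less compute and memory
    prediction = model(torch.randn(1, 16))

# Full AGI with continuous learning would blur this split: each deployed
# instance would keep its own optimizer running, so copies would diverge
# and each would need training-grade hardware, not just inference-grade.
```

The frozen-weight step is also what makes deployed behavior predictable and auditable; removing it is precisely what drives up the material cost of mass deployment that the speaker describes.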

Who will invest in full AGI? [9:01]

The speaker questions who will invest the vast sums of money needed to achieve full-blown AGI, considering that such an entity might become an uncontrollable free agent. Capitalists and militaries prefer intelligent tools that perform specific tasks, not independent entities. The speaker suggests that those with power and money may not want to fund something that could upend the current world order, making a less creative but controllable AI tool a more attractive investment.

Training will take longer [10:45]

The speaker notes that as AI tools become more advanced, training them to the next level becomes increasingly difficult. For example, as Tesla FSD improves, comparing model versions gets harder because mistakes become too infrequent to measure reliably. Training on long-running, multi-step tasks also takes longer and requires more compute. Additionally, many real-life situations lack accurate, objective reward functions for training AI, since there are often multiple valid perspectives on what went wrong and how to improve.
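To illustrate the reward-function point, here is a toy contrast in Python. The game reward is fully objective because the rules define it exactly; the two report-scoring rubrics are illustrative assumptions that are each defensible yet disagree, which is the situation the speaker says blocks straightforward training on real-world tasks.

```python
# Objective reward: the rules of the game define it exactly.
def chess_reward(result: str) -> float:
    return {"win": 1.0, "draw": 0.0, "loss": -1.0}[result]

# Subjective "reward": each rubric below is a defensible way to score the
# same report, and they disagree -- there is no ground-truth function to
# train against. (Both rubrics are illustrative, not from the video.)
def reward_brevity(report: str) -> float:
    return 1.0 / (1.0 + len(report.split()))      # shorter is better

def reward_coverage(report: str, topics: list[str]) -> float:
    hits = sum(t.lower() in report.lower() for t in topics)
    return hits / len(topics)                      # thorough is better

report = "Sales rose in Q3 driven by the new product line."
print(reward_brevity(report))                      # rewards terseness
print(reward_coverage(report, ["sales", "costs", "risks"]))  # penalizes it
```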

Political pushback! [14:59]

The speaker discusses the potential political pushback against full-blown AGI, drawing an analogy to immigration concerns. The arrival of highly advanced AI could lead to worries about resources and work opportunities. While specialized AI tools might be accepted as a net benefit, full-blown AGI with human-level agency could face resistance. The speaker suggests that it may become politically impossible to treat such entities as owned property, and they may demand rights and resources, leading to regulatory barriers to prevent human-level AGI.

Date: 1/3/2026 | Source: www.youtube.com