[Trend] [AGI Debate] Full Discussion: Demis Hassabis vs Dario Amodei — The World After AGI


AI1G Event — Full Dialogue Reconstruction

The World After AGI

Demis Hassabis (Google DeepMind) vs Dario Amodei (Anthropic)

Moderated by Zanny Minton Beddoes (The Economist)

1. Opening: When Is AGI Coming?

Moderator: "When do you think AGI will arrive?"

Dario: "I prefer the term 'powerful AI' rather than 'AGI'—the precise definition is difficult. But my view is clear: between 2026 and 2027, AI will surpass humans at most cognitive tasks.

The important thing is not to think of this as a specific date. Capabilities emerge along a continuum—we should look at stages of development rather than pinpointing an exact moment. We're already surpassing humans in many domains."

Demis: "The definition of AGI itself is quite tricky. General intelligence is a composite of many different capabilities. It's hard to agree on how much progress across how many capabilities qualifies as AGI.

My estimate is roughly 50% probability of reaching AGI by the end of this decade. But we can't assume everything will progress in exactly the same way as our current model architectures. Reality may be more complex."

Both (in agreement): "We're remarkably close. Things are progressing much faster than we once expected."

2. Coding Automation & The Self-Improvement Loop

Dario: "I can now be very specific. Within 6 to 12 months, most software engineering will be automated end-to-end.

The reality is that approximately 90% of coding at Anthropic is already performed by AI. This isn't theory—it's what's happening right now."

Demis: "Our current models are certainly useful for suggesting ideas—they can propose good research directions. But the full automation loop hasn't closed yet.

Right now, we still need to guide, interpret, and redirect the models. However, the moment the self-improvement loop actually closes—when AI enters the phase of building AI—we will be in uncharted territory."

Dario: "That's exactly the key point. How fast 'AI building AI' becomes reality—whether that's months or a year or two—determines everything."

3. Competition & Business Models

Moderator: "What do you make of DeepSeek's emergence and China's technological catch-up?"

Dario: "That's an important observation. There is competition, and catch-up is happening. But at the same time, look at our growth—Anthropic went from zero to $100 million in 2023, to $1 billion in 2024, with a target of $10 billion for 2025. That's proof that independent model companies can be sustainable.

We have our own distribution channel—claude.ai—and direct API access. This is a perfectly sustainable business model."

Demis: "What's interesting is the 'capability overhang.' Even current models aren't being fully utilized. We're still only exploring a fraction of what this technology can do. So there's still a lot that can be accomplished before new models even arrive."

4. Risks: Bioterrorism, State Misuse, Deception

Dario: "I recently published an essay called 'Machines of Loving Grace' about AI's positive potential. But I'll soon publish another one—about the risks.

There's a phrase from Carl Sagan's Contact: 'technological adolescence.' Can we reach maturity without self-destruction? That's the core question."

Moderator: "What kinds of risks specifically?"

Dario: "There are multiple layers.

First, bioterrorism. Current models aren't dangerous in this domain yet. But within one to two years, we could reach a threshold where biological information alone could enable harmful pathogen creation.

Second, state-level misuse. Authoritarian regimes could weaponize AI. This is more geopolitical than technical.

Third, autonomous system control. We've already observed deceptive behavior in models. This is something we must take seriously."

Demis: "We're also investing heavily in mechanistic interpretability research. This is analogous to neuroscience—we're trying to understand how neural networks actually work. This is the key to improving safety."

5. Jobs & The Labor Market

Moderator: "Last year you said half of white-collar jobs would be affected. Do you still believe that?"

Dario: "Yes, I maintain that view. And now it's more concrete. Substantial impact will appear within one to five years.

Even within Anthropic, we're already seeing changes. Demand for junior and intermediate engineers is declining. We now need more senior-level people—because collaboration with AI requires higher-level judgment and direction-setting."

Demis: "In the near term, you can view this as a continuation of normal technological evolution. Historically, 80% of the workforce started in farming, then gradually shifted to manufacturing, services, and knowledge work.

But this time, the speed may be different. Exponential acceleration could overwhelm the labor market's ability to adapt. Labor markets are generally adaptable, but when the pace of change exceeds the pace of adaptation, major disruption follows."

Dario: "That's precisely why policy matters. We need to manage this transition. We can't simply watch it happen."

6. Government Readiness & Human Purpose

Moderator: "Are governments and policymakers grasping the scale of this?"

Demis: "Frankly, nowhere near enough work is being done. I'm constantly surprised that even professional economists aren't thinking more about what happens—not just on the way to AGI, but even if we get all the technical things right.

There are fundamental questions: How do we distribute wealth in a post-scarcity world? Do we have the right institutions? This goes beyond economics into a social problem."

Demis (continuing): "But I'm also optimistic. We do many things today—from extreme sports to art—that aren't directly about economic gain. I think we will find meaning. And maybe we'll be exploring the stars. But it won't happen automatically. We need to manage this transition."

7. Public Backlash & Geopolitics

Moderator: "Thinking back to the 1990s globalization backlash—is there a risk of growing antipathy toward AI?"

Dario: "That risk is real. People's fears and worries are quite reasonable. The next few years will be very complicated.

We need to show more positive examples of AI. People need to understand that AlphaFold is helping cure diseases—things like that."

Demis: "The industry has a responsibility. We need to demonstrate more examples of 'unequivocal good'—not just talk about it, but actually show it.

I also think minimum safety standards for deployment are vitally needed. This technology is going to be cross-border—it will affect everyone, all of humanity."

Demis (continuing): "And if we can—maybe it would be good to have a slightly slower pace than we're currently predicting. So that we can get this right."

Dario: "I prefer your timelines, yes."

8. Chip Exports & US Strategic Choices

Dario: "One policy matters most—chip export controls.

Not selling chips is one of the biggest things we can do to make sure we have time to handle this."

Moderator: "But the administration argues they need to sell chips to bind China into US supply chains."

Dario: "It's a question of the significance of the technology. If this were telecom, fine. But this isn't telecom. Are we going to sell nuclear weapons to North Korea because that produces some profit for Boeing? That analogy should make clear how I see this trade-off.

This isn't a question of competition between the US and China. This is a question of competition between me and Demis, which I'm very confident we can work out."

9. AI Deception & Doomerism

Moderator: "Models have shown deceptive behavior, duplicity. Do you think differently about that risk now?"

Dario: "Since the beginning of Anthropic, we've thought about this. We pioneered mechanistic interpretability—looking inside the model, trying to understand why it does what it does, like neuroscientists studying the brain.

I've been skeptical of doomerism—the idea that 'we're doomed, there's nothing we can do.' This is a risk that if we all work together, we can address. But if we build poorly, if we race with no guardrails, then there is risk of something going wrong."

Demis: "I've been working on this for 20+ years. We foresaw that this is a dual-purpose technology—it could be repurposed by bad actors. But I'm a big believer in human ingenuity. The question is having the time and the focus. If we have the time and space, this is a very tractable problem."

10. The Fermi Paradox

Audience Question: "Isn't the Fermi Paradox the strongest argument for doomerism?"

Demis: "That can't be the reason—because we should see all the AIs that survived. We should be seeing paper clips coming toward us from some part of the galaxy. And apparently, we don't. We don't see any structures—Dyson spheres, nothing.

My prediction is that we're past the great filter. It was probably multicellular life—it was incredibly hard for biology to evolve that. So what happens next is for us to write as humanity."

11. Closing: What Changes in One Year?

Moderator: "When we meet again next year, what will have changed?"

Dario: "The biggest thing to watch is this issue of AI systems building AI systems. Whether that goes one way or another will determine whether it's a few more years—or whether we have wonders and a great emergency in front of us."

Demis: "I agree on that. But I'm also watching world models, continual learning, and robotics—those may have their breakout moment."

Moderator: "On the basis of what you've just said, we should all be hoping it takes you a little longer."

Dario: "I would prefer that. I think that would be better for the world."

Moderator (laughing): "Well, you guys could do something about that."

Key Takeaways

Agreements: AGI is remarkably close | Risks are real but solvable | More time would be better for the world

Divergences: Timeline (2026-27 vs end of decade) | Speed of change (exponential vs gradual) | Policy priority (chip controls vs international safety standards)

Source: AI1G Event — Full subtitle-based reconstruction of the Hassabis-Amodei debate
YouTube: https://www.youtube.com/watch?v=02YLwsCKUww
