Monday, January 6, 2025

I find Sam Altman's latest words on AI timelines alarming

Estimating timelines for when AI development hits the regime where the feedback loop of recursive self-improvement kicks in, leading towards the predictably transformative[1] and extremely dangerous intelligence explosion or Singularity, and on to superintelligence, is inherently very difficult. But we should not make the mistake of inferring from this unpredictability of timelines that they are long. They could be very short and involve transformative changes already in the 2020s, as AI insiders such as Daniel Kokotajlo, Leopold Aschenbrenner and Dario Amodei increasingly suggest. I am not saying these people are necessarily right, but simply taking for granted that they are wrong strikes me as reckless and irrational.

And please read yesterday's blog post by OpenAI's CEO Sam Altman. Parts of it are overly personal and cloying, but we should take seriously his judgement that the aforementioned regime change is about to happen this very year, 2025:
    We are now confident we know how to build AGI as we have traditionally understood it. We believe that, in 2025, we may see the first AI agents “join the workforce” and materially change the output of companies. We continue to believe that iteratively putting great tools in the hands of people leads to great, broadly-distributed outcomes.

    We are beginning to turn our aim beyond that, to superintelligence in the true sense of the word. We love our current products, but we are here for the glorious future. With superintelligence, we can do anything else. Superintelligent tools could massively accelerate scientific discovery and innovation well beyond what we are capable of doing on our own, and in turn massively increase abundance and prosperity.

    This sounds like science fiction right now, and somewhat crazy to even talk about it. That’s alright—we’ve been there before and we’re OK with being there again. We’re pretty confident that in the next few years, everyone will see what we see, and that the need to act with great care, while still maximizing broad benefit and empowerment, is so important. Given the possibilities of our work, OpenAI cannot be a normal company.

The time window may well be closing quickly for state actors (in particular, the U.S. government) to intervene in the deadly race towards superintelligence that OpenAI, Anthropic and their closest rivals are engaged in.

Footnote

1) Here, by "predictably transformative", I merely mean that it is predictable that the technology will radically transform society and our lives. I do not mean that the details of this transformation can be reliably predicted.
