-
You have said - and I'm gonna quote - "Development of superhuman machine intelligence is probably the greatest threat to the continued existence of humanity." End quote. You may have had in mind the effect on jobs.
-
The difference between the fields is mostly one of emphasis. Work in AI safety focuses mainly on what happens once AI attains capabilities sufficiently broad and powerful to rival humanity in terms of who is in control. It also addresses how to avoid a situation where such an AI with goals and incentives misaligned with core human values goes on to take over the world and possibly exterminate us. [...] In contrast, work in AI ethics tends to focus on more down-to-earth risks and concerns emanating from present-day AI technology. These include, e.g., AI bias and its impact on social justice, misinformation based on deepfakes and related threats to democracy, intellectual property issues, privacy concerns, and the energy consumption and carbon footprint from the training and use of AI systems.
-
This term AGI isn't well-defined, but it's generally used to mean AI systems that are roughly as smart or capable as a human. In public and policy conversations, talk of human-level AI is often treated as either science fiction or marketing, but many top AI companies, including OpenAI, Google, and Anthropic, are building AGI as an entirely serious goal, and a goal that many people inside those companies think they might reach in 10 or 20 years, and some believe could be as close as one to three years away. More to the point, many of these same people believe that if they succeed in building computers that are as smart as humans, or perhaps far smarter than humans, that technology will be at a minimum extraordinarily disruptive and at a maximum could lead to literal human extinction. The companies in question often say that it's too early for any regulation because the science of how AI works and how to make it safe is too nascent.
I'd like to restate that in different words. They're saying: we don't have good science of how these systems work, or of how to tell when they'll be smarter than us, or of how to make sure they won't cause massive harm. But don't worry, the main factors driving our decisions are profit incentives and unrelenting market pressure to move faster than our competitors. So we promise we're being extra, extra safe.
Whatever these companies say about it being too early for any regulation, the reality is that billions of dollars are being poured into building and deploying increasingly advanced AI systems, and these systems are affecting hundreds of millions of people's lives even in the absence of scientific consensus about how they work or what will be built next.
-
When I thought about this [i.e., timelines to AGI], there was at least a 10% chance of something that could be catastrophically dangerous within about three years. And I think a lot of people inside of OpenAI also would talk about similar things. And then I think, without knowing the exact details, it's probably going to be longer. I did not feel comfortable continuing to work for an organization that wasn't going to take that seriously and do as much work as possible to deal with that possibility. And I think we should figure out regulation to prepare for that, because again, if it's not three years, it's going to be five years or ten years; this stuff is coming down the road, and we need to have some guardrails in place.
2) Toner was pushed off the board as a consequence of Sam Altman's Machiavellian maneuverings during the tumultuous days at OpenAI in November last year.