Thursday 31 December 2015

Reviewed in New Scientist

My new book Here Be Dragons: Science, Technology and the Future of Humanity still has a couple of weeks to go before its official publication, but it has already been reviewed in New Scientist. And the review, written by the artist and polymath Jonathon Keats, can only be described as glowing! Listen to this:
    THE year is 2056, and scientists have just created the first computer with superhuman intelligence. Aware of the risks, the programmers trained it in ethics. The machine functions flawlessly.

    Aiming to maximise happiness in the universe, and calculating that sentient beings are happy less than half the time, the computer exterminates all sentient life. The balance of happiness increases from negative to zero - only there’s nobody left to enjoy it.

    Futurologists refer to this sort of misunderstanding as perverse instantiation, and Olle Häggström is concerned about it.

    [...]

    "Some of the advances that may lie ahead of us can actually make us worse off, a lot worse, and, in the extreme case, cause the extinction of the human race," he claims. Here Be Dragons is his attempt to chart potential dangers so that we approach the future more responsibly, much as medieval map-makers alerted explorers to perils with depictions of mythological beasts.

    Although some of his scenarios are outlandish, Here Be Dragons deserves to be read by all scientists and engineers, and especially by ambitious postdocs considering cutting-edge research. His sense of caution is profound, heartfelt and free of Luddite polemic: it's a stimulating attempt to balance the pursuit of breakthroughs with old-fashioned humility.

    [...]

    There are no easy answers [...]. The only certainty is that concerns about the future require vigorous debate. To that end, Here Be Dragons is an essential provocation.

Read the whole review here!

Thursday 24 December 2015

Gogodod jojulol!

With Aron Bjurström's beautiful performance of O helga natt (O Holy Night) in the Robber Language (rövarspråket), I hereby wish the blog's readers a Merry Christmas!

Sunday 20 December 2015

Interviewed, twice over

Here are two recent interviews with me, on central themes from my book Here Be Dragons: Science, Technology and the Future of Humanity, which comes out in January but can already be ordered from various online bookstores:
  • In issue 7/2015 of the University of Gothenburg's magazine GU Journalen, there is an article entitled Ond forskning (Evil Research), which reproduces (selected excerpts from) an engaged conversation between bioethics professor Christian Munthe and me about new technologies, for good and for ill. The pictures above are clipped from it.
  • In issue 3/2015 of Tvärdrag, the magazine of the Social Democratic youth league SSU, I am interviewed under the headline Räkna med robotrevolution (Count on a robot revolution).

Wednesday 16 December 2015

If the universe didn't exist

What would it be like if the universe didn't exist? To me the thought opens up a terrifying abyss, but the Danish mathematician, artist, poet and polymath Piet Hein (1905-1996) took it all rather more coolly in his Nothing is indispensable: Grook to warn the universe against megalomania.
    The universe may
    be as great as they say.
    But it wouldn't be missed
    if it didn't exist.

Thursday 10 December 2015

Ada Lovelace at 200

Today is a day when those of us who like science have extra good reason to raise our glasses in a toast. The reader who thinks I am referring to today being Nobel Day, and that it is old Alfred Nobel we should toast, may (just barely) be forgiven. What I have in mind, however, is something much greater: Augusta Ada King, Countess of Lovelace, née Byron (1815-1852), turns 200 today!1 Nowadays we usually call her simply Ada Lovelace.

This brilliant mathematician lived a short but intense life, and is best known for her collaboration with her fellow mathematician Charles Babbage on his so-called Analytical Engine (the analytical engine) - a computer long before the era of electronics and circuit boards, built instead on gears and mechanics. Lovelace and Babbage made a formidable duo, in which (in today's terminology) Babbage supplied the hardware and Lovelace the software - as well as the most far-reaching visions of what the computer could be used for. She is sometimes described as the first computer programmer ever. The Analytical Engine was never completed, but would otherwise have become the first so-called universal computer, a century before Alan Turing created that concept, so central to computer science.

Today, not least thanks to Ada Lovelace Day, celebrated annually in October, Lovelace stands as a female role model in the largely male-dominated STEM fields (science, technology, engineering and mathematics). Annie Burman paid tribute to her in a readable essay in SvD the other day. For anyone who wants to take in Lovelace's life and work in a somewhat longer format, I recommend Sydney Padua's wonderful graphic novel The Thrilling Adventures of Lovelace and Babbage, which, with the help of a triple apparatus of footnotes, manages both to tell the true story and to spin unrestrainedly entertaining tales about the parallel universe in which Babbage did manage to complete his Analytical Engine. You will not get a better Christmas gift tip from me this year than The Thrilling Adventures!

Footnote

1) Or would have, had death not intervened.

Sunday 6 December 2015

Dilbert on a possible AI breakthrough

The other week I put in a word for English-language philosophy- and science-nerd comic strips, highlighting Randall Munroe's xkcd, Zach Weinersmith's Saturday Morning Breakfast Cereal (SMBC) and Corey Mohler's Existential Comics as my favorites. Another fun and readable strip that can partly be counted in the same category is Scott Adams' Dilbert, even though it is primarily devoted to depicting bad leadership at tech companies. For the past two weeks, this daily strip has dealt with one of my own favorite topics here on the blog: the dangers associated with a possible future breakthrough in AI (artificial intelligence). In particular, it treats the case where such a breakthrough happens at a tech company beset by bad leadership. The following are the episodes published so far in this loosely connected adventure; we shall see whether there is a continuation tomorrow, Monday:

Some ideas borrowed from the scholarly literature in the field appear here, and I find the treatment partly quite ingenious. It may well contribute to interest in AI futurology, but to actually form a well-founded view of the risks and the surrounding issues, these Dilbert strips will of course not suffice. See Nick Bostrom's book from 2014, or my own, which comes out next month.

Tuesday 1 December 2015

What I think about What to Think About Machines That Think

Our understanding of the future potential and possible risks of artificial intelligence (AI) is, to put it gently, woefully incomplete. Opinions are drastically divided, even among experts and leading thinkers, and it may be a good idea to hear from several of them before forming an opinion of one's own - to which it is furthermore a good idea to attach a good deal of epistemic humility. The 2015 anthology What to Think About Machines That Think, edited by John Brockman, offers 186 short pieces by a tremendously broad range of AI experts and other high-profile thinkers. The format is the same as in earlier installments of the same editor's series of annual collections, with titles like What We Believe but Cannot Prove (2005), This Will Change Everything (2009) and This Idea Must Die (2014). I do like these books, but find them a little difficult to read, in much the same way that I often have difficulty reading poetry collections: I tend to rush on to the next poem before I have taken the time necessary to digest the previous one, and as a result I digest nothing. Reading Brockman's collections, I need to take conscious care not to do the same thing. To readers able to handle this aspect, they have a lot to offer.

I have so far read only a minority of the short pieces in What to Think About Machines That Think, mostly ones by writers whose standpoints I am already familiar with. Many of them do an excellent job of expressing important points within the extremely narrow page limit given, such as Eliezer Yudkowsky, whom I consider one of the most important thinkers in AI futurology today. I'll take the liberty of quoting at some length from his contribution to the book:
    The prolific bank robber Willie Sutton, when asked why he robbed banks, reportedly replied, "Because that's where the money is." When it comes to AI, I would say that the most important issues are about extremely powerful smarter-than-human Artificial Intelligence (aka superintelligence) because that's where the utilons are - the value at stake. More powerful minds have bigger real-world impacts.

    [...]

    Within the issues of superintelligence, the most important (again following Sutton's Law) is, I would say, what Nick Bostrom termed the "value loading problem": how to construct superintelligences that want outcomes that are high-value, normative, beneficial for intelligent life over the long run - that are, in short, "good" - since if there is a cognitively powerful agent around, what it wants is probably what will happen.

    Here are some brief arguments for why building AIs that prefer "good" outcomes is (a) important and (b) likely to be technically difficult.

    First, why is it important that we try to create a superintelligence with particular goals? Can't it figure out its own goals?

    As far back as 1739, David Hume observed a gap between "is" questions and "ought" questions, calling attention in particular to the sudden leap between when a philosopher has previously spoken of how the world is and then begins using words like should, ought, or ought not. From a modern perspective, we'd say that an agent's utility function (goals, preferences, ends) contains extra information not given in the agent's probability distribution (beliefs, world-model, map of reality).

    If in 100 million years we see (a) an intergalactic civilization full of diverse, marvelously strange intelligences interacting with one another, with most of them happy most of the time, then is that better or worse than (b) most available matter having been transformed into paperclips? What Hume's insight tells us is that if you specify a mind with a preference (a) > (b), we can follow back the trace of where the > (the preference ordering) first entered the system and imagine a mind with a different algorithm that computes (a) < (b) instead. Show me a mind that is aghast at the seeming folly of pursuing paperclips, and I can follow back Hume's regress and exhibit a slightly different mind that computes < instead of > on that score too.

    I don't particularly think that silicon-based intelligence should forever be the slave of carbon-based intelligence. But if we want to end up with a diverse cosmopolitan civilization instead of, for example, paperclips, we may need to ensure that the first sufficiently advanced AI is built with a utility function whose maximum pinpoints that outcome. If we want an AI to do its own moral reasoning, Hume's Law says we need to define the framework for that reasoning. This takes an extra fact beyond the AI having an accurate model of reality and being an excellent planner.

    But if Hume's Law makes it possible in principle to have cognitively powerful agents with any goals, why is value loading likely to be difficult? Don't we just get whatever we programmed?

    The answer is that we get what we programmed, but not necessarily what we wanted. The worrisome scenario isn't AIs spontaneously developing emotional resentment for humans. It's that we create an inductive value learning algorithm and show the AI examples of happy smiling humans labeled as high-value events - and in the early days the AI goes around making existing humans smile and it looks like everything is OK and the methodology is being experimentally validated; and then, when the AI is smart enough, it invents molecular nanotechnology and tiles the universe with tiny molecular smiley-faces. Hume's Law, unfortunately, implies that raw cognitive power does not intrinsically prevent this outcome, even though it's not the result we wanted.

    [...]

    For now, the value loading problem is unsolved. There are no proposed full solutions, even in principle. And if that goes on being true over the next decades, I can't promise you that the development of sufficiently advanced AI will be at all a good thing.
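
Yudkowsky's point about Hume's gap can be made concrete in a few lines of code. The following is a minimal sketch of my own (it is not from the book, and the names and toy numbers are invented purely for illustration): two agents share exactly the same world-model, yet choose opposite outcomes, because the preference ordering lives in the utility function, not in the model of reality.

    # Two agents share one and the same world-model (the "is" side).
    # Their preference orderings (the "ought" side) are extra information,
    # supplied by their utility functions. All names and numbers here are
    # made up purely for illustration.
    WORLD_MODEL = {
        "cosmopolitan_civilization": {"sentient_beings": 10**15, "paperclips": 0},
        "paperclip_universe":        {"sentient_beings": 0,      "paperclips": 10**50},
    }

    def utility_cosmopolitan(outcome):
        # Ranks outcomes by how many sentient beings they contain.
        return WORLD_MODEL[outcome]["sentient_beings"]

    def utility_paperclip(outcome):
        # Ranks outcomes by how many paperclips they contain.
        return WORLD_MODEL[outcome]["paperclips"]

    def choose(utility):
        # An agent simply picks the outcome its utility function ranks highest.
        return max(WORLD_MODEL, key=utility)

    print(choose(utility_cosmopolitan))  # -> cosmopolitan_civilization
    print(choose(utility_paperclip))     # -> paperclip_universe

No amount of added accuracy in WORLD_MODEL changes either agent's choice; to get the ">" pointing the other way, one has to change the utility function itself.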

Max Tegmark has views that are fairly close to Yudkowsky's,1 but warns, in his contribution, that "Unfortunately, the necessary calls for the sober research agenda that's sorely needed are being nearly drowned out by a cacophony of ill-informed views", among which he categorizes the following eight as "the loudest":
    1. Scaremongering: Fear boosts ad revenues and Nielsen ratings, and many journalists appear incapable of writing an AI-article without a picture of a gun-toting robot.

    2. "It's impossible": As a physicist, I know that my brain consists of quarks and electrons arranged to act as a powerful computer, and that there's no law of physics preventing us from building even more intelligent quark blobs.

    3. "It won't happen in our lifetime": We don't know what the probability is of machines reaching human-level ability on all cognitive tasks during our lifetime, but most of the AI researchers at a recent conference put the odds above 50 percent, so we'd be foolish to dismiss the possibility as mere science fiction.

    4. "Machines can't control humans": Humans control tigers not because we are stronger, but because we are smarter, so if we cede our position as smartest on our planet, we might also cede control.

    5. "Machines don't have goals": Many AI systems are programmed to have goals and to attain them as effectively as possible.

    6. "AI isn't intrinsically malevolent": Correct - but its goals may one day clash with yours. Humans don't generally hate ants - but if we want to build a hydroelectric dam and there's an anthill there, too bad for the ants.

    7. "Humans deserve to be replaced": Ask any parent how they would feel about you replacing their child by a machine, and whether they'd like a say in the decision.

    8. "AI worriers don't understand how computers work": This claim was mentioned at the above-mentioned conference, and the assembled AI researchers laughed hard.

These passages from Yudkowsky and Tegmark provide just a tip of the iceberg of interesting insights in What to Think About Machines That Think. But, not surprisingly in a volume with so many contributions, there are also disappointments. I'm a huge fan of philosopher Daniel Dennett, and the title of his contribution (The Singularity - an urban legend?) raises expectations further, since one might hope that he could help rectify the curious situation where many writers consider the drastic AI development scenario referred to as an intelligence explosion or the Singularity to be extremely unlikely or even impossible, but hardly anyone (with Robin Hanson being the one notable exception) offers arguments for this position rising above the level of slogans and one-liners. Dennett, by opening his contribution with the paragraph...
    The Singularity - the fateful moment when AI surpasses its creators in intelligence and takes over the world - is a meme worth pondering. It has the earmarks of an urban legend: a certain scientific plausibility ("Well, in principle I guess it's possible!") coupled with a deliciously shudder-inducing punch line ("We'd be ruled by robots!"). Did you know that if you sneeze, belch, and fart all at the same time, you die? Wow. Following in the wake of decades of AI hype, you might think the Singularity would be regarded as a parody, a joke, but it has proven to be a remarkably persuasive escalation. Add a few illustrious converts - Elon Musk, Stephen Hawking, and David Chalmers, among others - and how can we not take it seriously? Whether this stupendous event takes place ten or a hundred or a thousand years in the future, isn't it prudent to start planning now, setting up the necessary barricades and keeping our eyes peeled for harbingers of catastrophe?

    I think, on the contrary, that these alarm calls distract us from a more pressing problem...

...and then going on to talk about something quite different, turns his piece into an almost caricaturish illustration of the situation I just complained about.

I want to end by quoting, in full, the extremely short (even by this book's standards) contribution by physicist Freeman Dyson - not because it offers much (or anything) of substance (it doesn't), but for its wit:
    I do not believe that machines that think exist, or that they are likely to exist in the foreseeable future. If I am wrong, as I often am, any thoughts I might have about the question are irrelevant.

    If I am right, then the whole question is irrelevant.

I like the humility - "as I often am" - here. Dyson pretty much confirms what I was convinced of all along: that he agrees with the message of my 2011 blog post Den oundgängliga trovärdighetsbedömningen: fallet Dyson (The indispensable credibility assessment: the case of Dyson), namely that he ought to be read critically.2

Footnotes

1) And my own views are mostly in good alignment with Yudkowsky's and with Tegmark's. Both of them are cited approvingly in the chapter on AI in my upcoming book Here Be Dragons: Science, Technology and the Future of Humanity (Oxford University Press, January 2016).

2) This is in sharp contrast to the general reaction to my piece among Swedish climate denialists; at the time, they seemed to think that Dyson ought to be read uncritically, and that any suggestion to the contrary was deeply insulting (here is a typical example).