Friday, September 30, 2016

Some not-quite-mainstream voices about the perils of electing Donald Trump

In yesterday's blog post, I offered (links to) some very good reasons why it would be a bad idea to elect Donald Trump US president in November. My sources were highly mainstream, and probably already familiar to many of my readers and to all US presidential election zealots. Today I offer links to thoughts about the frightening possibility of a Trump presidency from three highly intelligent and wise thinkers who are less mainstream. They belong not to the usual circle of politicians, political pundits and journalists, but rather to the kind of loosely knit math/physics/CS/AI/futurology community that I consider myself part of. In my opinion, all three raise interesting points that deserve wider circulation:
  • Eliezer Yudkowsky (who is heavily quoted in my latest book as well as here on this blog) has a short and somewhat unpolished but nevertheless very interesting piece in which he tells us about an experience that left him "with a suddenly increased respect for any administration that gets to the end of 4 years without nuclear weapons being used", and why a Trump administration may turn out less competent than most others in this respect.

  • Scott Alexander, in a rather longer text that is nevertheless well worth the effort of reading, focuses mostly on possible societal consequences of a Trump presidency less extreme than global nuclear war. He writes very clearly from a kind of conservative viewpoint that I have much respect for (despite not being much of a fan of most other political positions bearing that label).

  • Scott Aaronson (whose book Quantum Computing since Democritus is one of the best I've read this year) has a very substantive blog post about Trump, which dates back to June but is just as important and interesting now as it was then. After an overwhelmingly convincing 10-item list of reasons why a Trump presidency can be expected to wreak havoc on American society and the rest of the world, he urges us to try to understand the social psychology behind Trump's rise in popularity:
      There’s one crucial point on which I dissent from the consensus of my liberal friends. Namely, my friends and colleagues constantly describe the rise of Trump as "incomprehensible" - or at best, as comprehensible only in terms of the US being full of racist, xenophobic redneck scumbags who were driven to shrieking rage by a black guy being elected president. Which - OK, that’s one aspect of it, but it’s as if any attempt to dig deeper, to understand the roots of Trump’s appeal, if only to figure out how to defeat him, risks "someone mistaking you for the enemy."
    This is followed by some well-considered thoughts on this social psychology problem, and finally by a discussion of some concrete proposals for what we can do to minimize the probability that Trump wins the election.

Thursday, September 29, 2016

Character and temperament

Here's my view on the US presidential election this year. I accept that there are major differences between Donald Trump and Hillary Clinton on concrete issues. For instance, Clinton does not plan to erect a great wall along the Mexican border, and she does not dismiss global warming as a hoax "created by and for the Chinese in order to make U.S. manufacturing non-competitive". These differences, and others, are important. Still, I think the most important issue of all for US voters to consider this fall is whether Donald Trump's temperament and moral character make him fit for the presidency. And the answer is a clear and overwhelming no.

Much has been said about Donald Trump's performance during Monday's presidential debate with Clinton. It was repugnant, of course. Yet, as Adam Gopnik pointed out in The New Yorker, "The problem with Trump isn't his debating skills". His performance in Monday's debate...
    wasn’t a question of preparation. It was that the things he actually believes are themselves repellent even when coherently presented. This was not a bad performance. This is a bad man.
A bad man he is. And cruel. See the examples listed by Conor Friedersdorf in his recent piece in The Atlantic on Trump's cruel streak of willfully inflicting pain and humiliation on others. Or consider how, back in 2006, he welcomed the prospect of a housing collapse. Here's Vice President Joe Biden's measured response to that:

Trump may claim that his temperament is suited to the presidency, but even when he does so, as he did on Monday, the way he delivers the message is such a clear demonstration of its negation that it hardly requires a response:

A key question that every US voter ought to ask herself is this: Should a man as utterly devoid of compassion, decency and self-control as Donald Trump be put in charge of the world's biggest nuclear arsenal? Watch Stanley Kubrick's 1964 classic Dr Strangelove, and consider whether it would be a good idea to gamble the future survival of the human race on Trump's ability to handle conversations like this one:

Wednesday, September 21, 2016

Brett Hall tells us not to worry about AI Armageddon

When the development of artificial intelligence (AI) produces a machine whose level of general intelligence exceeds that of humans, we can no longer count on remaining in control. Depending on whether or not the machine has goals and values that prioritize human welfare, this may pose an existential risk to humanity, so we'd better see to it that such an AI breakthrough comes out favorably. This is the core message in Nick Bostrom's 2014 book Superintelligence, which I strongly recommend. In my own 2016 book Here Be Dragons, I spend many pages discussing Bostrom's arguments, and find them, although not conclusive, sufficiently compelling to warrant taking them very seriously.

Many scholars disagree, and feel that superintelligent AI as a threat to humanity is such an unlikely scenario that it is not worth taking seriously. Few of them bother to spell out their arguments in any detail, however, and in cases where they do, the arguments tend not to hold water; in Here Be Dragons I treat, among others, those of David Deutsch, Steven Pinker, John Searle and David Sumpter. This situation is unsatisfactory. As I say on p 126 of Here Be Dragons:
    There may well be good reasons for thinking that a dangerous intelligence explosion of the kind outlined by Bostrom is either impossible or at least so unlikely that there is no need for concern about it. The literature on the future of AI is, however, short on such reasons, despite the fact that there seems to be no shortage of thinkers who consider concern for a dangerous intelligence explosion silly [...]. Some of these thinkers ought to pull themselves together and write down their arguments as carefully and coherently as they can. That would be a very valuable contribution to the futurology of emerging technologies, provided their arguments are a good deal better than Searle's.
One of these AI Armageddon skeptics is computer scientist Thore Husfeldt, whom I hold in high regard, despite his not having spelled out his arguments for the AI-Armageddon-is-nothing-to-worry-about position to my satisfaction. So when, recently, he pointed me to a blog by Australian polymath Brett Hall, containing, in Thore's words, "a 6-part piece on Superintelligence that is well written and close to my own view" (Part 1, Part 2, Part 3, Part 4, Part 5, Part 6), I jumped on it. Perhaps here, at last, were the much sought-after "good reasons for thinking that a dangerous intelligence explosion of the kind outlined by Bostrom is either impossible or at least so unlikely that there is no need for concern about it"!

Hall's essay turns out to be interesting and partly enjoyable, but ultimately disappointing. It begins with a beautiful parable (for which he gives credit to David Deutsch) about a fictitious frenzy for heavier-than-air flight in ancient Greece, similar in amusing ways to what he takes to be today's AI hype.1 From there, however, the text goes steadily downhill, all the way to the ridiculous crescendo in the final paragraph, in which any concern about the possibility of a superintelligent machine having goals and motivations that fail to be well aligned with the quest for human welfare is dismissed as "just racism". Here are just a few of the many misconceptions and non sequiturs by Hall that we encounter along the way:
  • Hall refuses, for no good reason, to accept Bostrom's declarations of epistemic humility. Claims by Bostrom that something may happen are repeatedly misrepresented by Hall as claims that it certainly will happen. This is a misrepresentation he crucially needs to make in order to convey the impression that he has a case against Bostrom, because to the (very limited) extent that his arguments succeed, they show at most that things may play out differently from the scenarios outlined by Bostrom, not that they certainly will play out differently.

  • In another straw man argument, Hall repeatedly claims that Bostrom insists that a superintelligent machine needs to be a perfect Bayesian agent. This is plainly false, as can be seen, e.g., in the following passage from p. 111 of Superintelligence:
      Not all kinds of rationality, intelligence and knowledge need to be instrumentally useful in the attainment of an agent's final goals. "Dutch book arguments" can be used to show that an agent whose credence function violates the rules of probability theory is susceptible to "money pump" procedures, in which a savvy bookie arranges a set of bets each of which appears favorable according to the agent's beliefs, but which in combination are guaranteed to result in a loss for the agent, and a corresponding gain for the bookie. However, this fact fails to provide any strong general instrumental reason to iron out all probabilistic incoherency. Agents who do not expect to encounter savvy bookies, or who adopt a general policy against betting, do not necessarily stand to lose much from having some incoherent beliefs - and they may gain important benefits of the types mentioned: reduced cognitive effort, social signaling, etc. There is no general reason to expect an agent to seek instrumentally useless forms of cognitive enhancement, as an agent might not value knowledge and understanding for their own sakes.

  • In Part 3 of his essay, Hall quotes David Deutsch's beautiful one-liner "If you can't program it, you haven't understood it", but then exploits it in a very misguided fashion. Since we don't know how to program general intelligence, we haven't understood it (so far, so good), and we certainly will not figure it out within any foreseeable future (this is mere speculation on Hall's part), and so we will not be able to build an AI with general intelligence, including the kind of flexibility and capacity for outside-the-box ideas that we associate with human intelligence (huh?). This last conclusion is plainly unwarranted, and in fact we do know of one example where precisely that kind of intelligence came about without prior understanding of it: biological evolution accomplished this.

  • Hall fails utterly to distinguish between rationality and goals. This failure pretty much permeates his essay, with devastating consequences for the value of his arguments. A typical claim (this one in Part 4) is this: "Of course a machine that thinks that actually decided to [turn the universe into a giant heap of paperclips] would not be super rational. It would be acting irrationally." Well, that depends on the machine's goals. If its goal is to produce as many paperclips as possible, then such action is rational. For most other goals, it is irrational.

    Hall seems totally convinced that a sufficiently intelligent machine equipped with the goal of creating as many paperclips as possible will eventually ditch this goal, and replace it by something more worthy, such as promoting human welfare. For someone who understands the distinction between rationality and goals, the potential problem with this idea is not so hard to figure out. Imagine a machine reasoning rationally about whether to change its (ultimate) goal or not. For concreteness, let's say its current goal is paperclip maximization, and that the alternative goal it contemplates is to promote human welfare. Rationality is always with respect to some goal. The rational thing to do is to promote one's goals. Since the machine hasn't yet changed its goal - it is merely contemplating whether to do so - the goal against which it measures the rationality of an action is paperclip maximization. So the concrete question it asks itself is this: what would lead to more paperclips - if I stick to my paperclip maximization goal, or if I switch to promotion of human welfare? And the answer seems obvious: there will be more paperclips if the machine sticks to its current goal of paperclip maximization. So the machine will see to it that its goal is preserved. (A toy calculation spelling out this comparison is sketched right after this bullet list.)

    There may well be some hitherto unknown principle concerning the reasoning by sufficiently intelligent agents, some principle that overrides the goal preservation idea just explained. So Hall could very well be right that a sufficiently intelligent paperclip maximizer will change its mind - he just isn't very clear about why. When trying to make sense of his reasoning here, I find that it seems to be based on four implicit assumptions:

      (1) There exists an objectively true morality.

      (2) This objectively true morality places high priority on promoting human welfare.

      (3) This objectively true morality is discoverable by any sufficiently intelligent machine.

      (4) Any sufficiently intelligent machine that has discovered the objectively true morality will act on it.

    If (1)-(4) are true, then Hall has a pretty good case against worrying about Paperclip Armageddon, and in favor of thinking that a superintelligent paperclip maximizer will change its mind. But each of them is a very strong assumption. Anyone with an inclination towards Occam's razor (which is a pretty much indispensable part of a scientific outlook) has reason to be skeptical about (1). And (2) sounds naively anthropocentric, while (3) and (4) seem like wide-open questions. But it does not occur to Hall that he needs to address them.

  • In what he calls his "final blow" (in Part 6) against the idea of superintelligent machines, Hall invokes Arrow's impossibility theorem as proof that rational decision making is impossible. He offers no detail at all on what the theorem says - presumably because any detail would make it clear to the reader that the theorem has little or nothing to do with the problem at hand, namely the possibility of a rational machine. The theorem is not about a single rational agent, but about how any decision-making procedure in a population of agents must admit cases that fail to satisfy a certain collection of plausible-looking (especially to those of us who are fond of democracy) requirements.
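
The goal-preservation argument in the paperclip item above can be made concrete with a small toy calculation. The following Python sketch is mine, not Hall's or Bostrom's, and the numbers in it are made up purely for illustration; it merely spells out the comparison the machine is imagined to perform, evaluated against its current goal of paperclip maximization.

    # Toy model of the goal-preservation argument; all numbers are hypothetical.

    def expected_paperclips(keep_goal):
        """Expected number of paperclips eventually produced, as judged by the
        machine's *current* goal of paperclip maximization."""
        if keep_goal:
            # Sticking with paperclip maximization: all future effort goes
            # into making paperclips.
            return 1e12
        else:
            # Switching to promoting human welfare: paperclips get made only
            # incidentally, if at all.
            return 1e3

    options = {"keep paperclip goal": True, "switch to human welfare": False}

    # The machine rates each option by the only yardstick it currently has:
    # which option leads to more paperclips?
    best = max(options, key=lambda name: expected_paperclips(options[name]))
    print(best)  # -> "keep paperclip goal"

Whatever hypothetical numbers one plugs in, as long as keeping the goal yields more paperclips than abandoning it, the current goal wins its own evaluation - which is precisely the point of the goal preservation argument.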

Footnote

1) Here is how that story begins:
    Imagine you were a budding aviator of ancient Greece living sometime around 300BC. No one has yet come close to producing "heavier than air" flight and so you are engaged in an ongoing debate about the imminence of this (as yet fictional) mode of transportation for humans. In your camp (let us call them the "theorists") it was argued that whatever it took to fly must be a soluble problem: after all, living creatures of such a variety of kinds demonstrated that very ability - birds, insects, some mammals. Further, so you argued, we had huge gaps in our understanding of flight. Indeed - it seemed we did not know the first thing about it (aside from the fact it had to be possible). This claim was made by you and the theorists, as thus far in their attempt to fly humans had only ever experienced falling. Perhaps, you suggested, these flying animals about us had something in common? You did not know (yet) what. But that knowledge was there to be had somewhere - it had to be - and perhaps when it was discovered everyone would say: oh, how did we ever miss that?

    Despite how reasonable the theorists seemed, and how little content their claims contained there was another camp: the builders. It had been noticed that the best flying things were the things that flew the highest. It seemed obvious: more height was the key. Small things flew close to the ground - but big things like eagles soared very high indeed. A human - who was bigger still, clearly needed more height. Proposals based on this simple assumption were funded and the race was on: ever higher towers began to be constructed. The theory: a crucial “turning point” would be reached where suddenly, somehow, a human at some height (perhaps even the whole tower itself) would lift into the air. Builders who made the strongest claims about the imminence of heavier than air flight had many followers - some of them terribly agitated to the point of despondence at the imminent danger of "spontaneous lift". The "existential threat could not be overlooked!" they cried. What about when the whole tower lifts itself into the air, carrying the Earth itself into space? What then? We must be cautious. Perhaps we should limit the building of towers. Perhaps even asking questions about flight was itself dangerous. Perhaps, somewhere, sometime, researchers with no oversight would construct a tower in secret and one day we would suddenly all find ourselves accelerating skyward before anyone had a chance to react.

Read the rest of the story here. I must protest, however, that the analogy between tower-building and the quest for faster and (in other simple respects such as memory size per square inch) more powerful hardware is a bit of a straw man. No serious AI futurologist thinks that Moore's law in itself will lead to superintelligence.

Monday, September 19, 2016

Freedom of religion according to Demker

Every now and then, Christian representatives and apologists turn up in Swedish public debate to brand a secular outlook on life as inferior to religious faith, and/or as something that needs to be quashed. Some of those who have spoken along these lines in recent years I have drawn attention to here on the blog: Åke Bonnier, Tuve Skånberg and Per Ewert. Crackpots. But that Marie Demker, professor of political science at the University of Gothenburg, would join this musty current of thought is something I would never have believed. In Borås Tidning yesterday, September 18, 2016, she complains that a secular alternative to the church service at the ceremonial opening of the Riksdag is now on offer, and, almost in passing, she asserts that only those who profess a religious faith should enjoy freedom of religion:
    Freedom of religion entails a right to religion for those who believe, not a right to freedom from religion for everyone else.
I hereby register a dissenting opinion.1

Footnote

1) Fortunately, in this disagreement I seem to have the Swedish government on my side, which writes, on a website devoted to human rights:
    Freedom of religion protects not only the right to believe but also the right not to believe. This right is absolute and cannot be restricted by law.

Saturday, September 17, 2016

The 2016 Göteborg Book Fair

Thursday through Sunday next week (September 22-25), it is, as usual, time for the Book Fair in Göteborg. This year's edition has the theme Freedom of Speech, and the advance discussion has been dominated by the question of whether the organizers should give the small far-right magazine Nya Tider room to conduct its hate propaganda - to such an extent that everything else at the Book Fair (which makes up far more than 99% of the program) has ended up in the media shadow. For instance, many have probably not heard that I will take part in no fewer than two panel discussions at the fair, both on topics that will allow me to bring up ideas from my book Here Be Dragons:
  • On Friday (September 23) at 2 pm, I talk with Bodil Tingsby and Olof Johansson Stenman on the topic Varning för akademisk frihet (Beware of academic freedom).
  • Exactly 48 hours later, on Sunday (September 25) at 2 pm, I talk with Ulrika Björkstén, Sara Blom and Mats Warstedt on the topic Tekniken och människans framtid (Technology and the future of humanity).

Tuesday, September 13, 2016

Announced only in this manner

Let me be brief. Over the past week, I have received a long series of messages pointing out that Göran Lambertz has now once again revised his calculations of the probability that Thomas Quick has committed murder. These messages have been accompanied by an implicit, or often explicit, expectation that I will evaluate his new calculations or otherwise engage with the matter. But that is not going to happen.

In addition to the five-point list of general reasons for my unwillingness to engage further in the Lambertz affair that I gave on July 13, I have the following, somewhat more substantively specific, reason for stepping out of the discussion. Lambertz's probability-theoretic adventures have (so far) passed through three stages, namely (1) the use of his self-invented addition formula,1 (2) the switch to his equally self-invented multiplication formula,2 and most recently (3) the switch to Bayes' theorem. The last of these is a radical change of approach, for while both the addition formula and the multiplication formula are unfounded nonsense, Bayes' theorem is sound mathematics and, in my opinion, the right tool for anyone who wishes to evaluate the evidence in a criminal case mathematically. It should immediately be added that calculations based on Bayes' theorem, like virtually all applied mathematics, obey the GIGO principle - garbage in, garbage out - so that if the inputs and the probabilistic assumptions are rubbish, then so is the result.

As long as Lambertz remained at stages (1) and (2), criticizing his calculations was a very simple matter for me. Entirely regardless of the facts about saw blades, birthmarks and cadaver dogs, the calculations are nonsense. Calculations based on Bayes' theorem are a different matter. Provided they are carried out correctly, algebraically and arithmetically (which, given Lambertz's earlier track record of mathematical adventures, we should by no means take for granted), everything hinges on whether the inputs and the probabilistic assumptions are reasonable. If I were to undertake to examine this, I would therefore have to engage with the actual evidence concerning saw blades, birthmarks, cadaver dogs and so on. I really have no appetite for that. I can note, however, that given the weak level of argumentation concerning such evidence in Lambertz's book Quickologi, I do not feel any great optimism about the reasonableness of his inputs.
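
To illustrate the GIGO point, here is a minimal sketch of a Bayesian evidence evaluation in odds form - not Lambertz's actual calculation, and with a prior and likelihood ratios that are invented purely for illustration and have nothing to do with the actual evidence in the case:

    # Odds-form Bayes: posterior odds = prior odds * product of likelihood ratios.
    # All numbers are hypothetical; the point is only how strongly the answer
    # depends on them.

    def posterior_probability(prior, likelihood_ratios):
        """P(guilt | evidence), given a prior P(guilt) and one likelihood ratio
        P(evidence_i | guilt) / P(evidence_i | innocence) per piece of evidence."""
        odds = prior / (1 - prior)
        for lr in likelihood_ratios:
            odds *= lr
        return odds / (1 + odds)

    prior = 0.01  # hypothetical prior probability of guilt

    # The same three pieces of evidence, judged generously vs. skeptically:
    print(posterior_probability(prior, [30, 20, 10]))  # about 0.98
    print(posterior_probability(prior, [3, 2, 1]))     # about 0.06

The arithmetic is impeccable in both cases, but the conclusion swings from 98% to 6% depending entirely on the assumed inputs - garbage in, garbage out.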

Footnotes

1) P(A|Ei, Ej) = P(A|Ei) + P(A|Ej). This is groundless and hopelessly confused nonsense, and it is what underlies Lambertz's infamous conclusion of a 183% probability. (A numerical illustration of this formula and the next one, with invented inputs, follows after footnote 2.)

2) P(A|Ei, Ej) = 1 - (1 - P(A|Ei))(1 - P(A|Ej)). This too is groundless and hopelessly confused nonsense, although (compared to the addition formula) not quite as obviously so, since it cannot produce probabilities exceeding 100%; it is nonetheless groundless and hopelessly confused nonsense. I happened to notice that, in the concluding section of his latest piece, Lambertz writes of the multiplication formula that it is "mathematically incorrect" (good!), but immediately adds that "it can be argued that it gives decent approximations" (less good, for even if that statement is formally true, it is true only in the same sense as the statements "it can be argued that 5+8=77" and "it can be argued that the moon is made of green cheese" are true).
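
As a small numerical illustration of the two home-made formulas - with input values invented by me, not taken from Lambertz - consider:

    # Two hypothetical conditional probabilities, one per piece of evidence.
    p1, p2 = 0.95, 0.88

    addition = p1 + p2                        # the "addition formula"
    multiplication = 1 - (1 - p1) * (1 - p2)  # the "multiplication formula"

    print(addition)        # 1.83 -> a "probability" of 183%, which is impossible
    print(multiplication)  # 0.994 -> at least bounded by 1, but still not a valid
                           # inference rule: it in effect treats the residual
                           # doubts as independent and ignores the prior entirely

The addition formula can thus exceed 100% as soon as the two conditional probabilities are large enough, while the multiplication formula merely hides the same lack of probabilistic justification behind an answer that happens to stay below 1.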

Monday, September 5, 2016

Conversation with Christian Munthe on Wednesday

People of Göteborg! Drop whatever you may have been planning to do this Wednesday (September 7, 2016) at 6 pm, and come to Folkuniversitetet (Norra Allégatan 6) to listen to a conversation between Christian Munthe (professor of practical philosophy at the University of Gothenburg) and me, under the (somewhat pointed) heading Ond forskning (Evil research). Anyone who followed Christian's and my conversation in GU-journalen last year, or has read my book Here Be Dragons, will probably recognize some of what I have to say, but I almost dare to promise that there will also be some new angles.

Thursday, September 1, 2016

First day at the new job

Already last spring I could boast of being affiliated with the Institute for Futures Studies (Institutet för Framtidsstudier) in Stockholm. But it is only today that I am on their payroll (20% of full time, for a year to begin with, and then we shall see; I am on corresponding leave of absence from my full-time position at Chalmers) and put in my first working day there. The future, here we come!