Wednesday 11 April 2018
- If any major military power pushes ahead with AI weapon development, a global arms race is virtually inevitable, and the endpoint of this technological trajectory is obvious: autonomous weapons will become the Kalashnikovs of tomorrow. Unlike nuclear weapons, they require no costly or hard-to-obtain raw materials, so they will become ubiquitous and cheap for all significant military powers to mass-produce. It will only be a matter of time until they appear on the black market and in the hands of terrorists, dictators wishing to better control their populace, warlords wishing to perpetrate ethnic cleansing, etc. Autonomous weapons are ideal for tasks such as assassinations, destabilizing nations, subduing populations and selectively killing a particular ethnic group. We therefore believe that a military AI arms race would not be beneficial for humanity.
Friday 6 April 2018
- Artificiell intelligens - bara av godo? (Artificial intelligence - purely a good thing?) On this topic I will be talking with Ulrika Lindstrand and Lisa Bondesson (both from Sveriges Ingenjörer) on Wednesday 18 April at 12.00 at Chalmers, room HB3, Hörsalsvägen 10.
- My second panel discussion that Wednesday is entitled Vita lögner och andra lögner (White lies and other lies). The other participants are Eva Staxäng (programme producer at Jonsereds herrgård), Christian Lenemark (senior lecturer in literary studies at the University of Gothenburg) and Christer Borg (psychologist), and it takes place on Wednesday 18 April at 17.30 at Pedagogen, Västra Hamngatan 25, Hus A, Kjell Härnqvistsalen.
- On Sunday 22 April at 15.00 I will speak on the topic Konkurrens eller samarbete (Competition or cooperation), which the programme summarizes as follows:
- The philosophical idea underlying the market economy is that if everyone is driven by self-interest, the outcome will be good for the collective and for society. But does that always hold? Worrying examples are emerging in areas such as climate change, arms races and artificial intelligence.
Friday 30 March 2018
- Miles Brundage: Scaling Up Humanity: The Case for Conditional Optimism about Artificial Intelligence.
In this chapter, Brundage (a research fellow at the Future of Humanity Institute) is very clear about the distinction between conditional optimism and just plain old optimism. He's not saying that an AI breakthrough will have good consequences (that would be plain old optimism). Rather, he's saying that if it has good consequences, i.e., if it doesn't cause humanity's extinction or throw us permanently into the jaws of Moloch, then there's a chance the outcome will be very, very good (this is conditional optimism).
- Thomas Metzinger: Towards a Global Artificial Intelligence Charter.
Here the well-known German philosopher Thomas Metzinger lists a number of risks that come with future AI development, ranging from familiar ones concerning technological unemployment or autonomous weapons to more exotic ones arising from the possibility of constructing machines with the capacity to suffer. He emphasizes the urgent need for legislation and other government action.
- Olle Häggström: Remarks on Artificial Intelligence and Rational Optimism.
This text is already familiar to readers of this blog. It is my humble attempt to sketch, in a balanced way, some of the main arguments for why the wrong kind of AI breakthrough might well be an existential risk to humanity.
- Peter Bentley: The Three Laws of Artificial Intelligence: Dispelling Common Myths.
Bentley assigns great significance to the fact that he is an AI developer. Thus, he says, he is (unlike us co-contributors to the report) among "the people who understand AI the most: the computer scientists and engineers who spend their days building the smart solutions, applying them to new products, and testing them". Why exactly expertise in developing AI and expertise in AI futurology necessarily coincide in this way (after all, it is rarely claimed that farmers are in a privileged position to make predictions about the future of agriculture) is not explained. In any case, he claims to debunk a number of myths, in order to arrive at the position which is perhaps best expressed in the words he chose to utter at the seminar in October: superhumanly intelligent AI "is not going to emerge, that’s the point! It’s entirely irrational to even conceive that it will emerge" [video from the event, at 12:08:45]. He relies more on naked unsupported claims than on actual arguments, however. In fact, there is hardly any end to the inanity of his chapter. It is very hard to comment on at all without falling into a condescending tone, but let me nevertheless risk listing a few of its very many very weak points:
1. Bentley pretends to speak on behalf of AI experts - in his narrow sense of what such expertise entails. But it is easy to give examples of leading AI experts who, unlike him, take AI safety and apocalyptic AI scenarios seriously, such as Stuart Russell and Murray Shanahan. AI experts are in fact highly divided on this issue, as demonstrated in surveys. Bentley really should know this, as in his chapter he actually cites one of these surveys (but quotes it in a shamelessly misleading fashion).
2. In his desperate search for arguments to back up his central claim about the impossibility of building a superintelligent AI, Bentley waves at the so-called No Free Lunch theorem. As I explained in my paper Intelligent design and the NFL theorems a decade ago, this result is an utter triviality, which basically says that in a world with no structure at all, there is no better way than brute force if you want to find something. Fortunately, in a world such as ours which does have structure, the result does not apply. Basically the only thing that the result has going for it is its cool name, something that creationist charlatan William Dembski exploited energetically to try to give the impression that biological evolution is impossible, and now Peter Bentley is attempting the analogous trick for superintelligent AI. (A small numerical illustration of what the theorem does and does not say is sketched below, after this list.)
3. At one point in his chapter, Bentley proclaims that "even if we could create a super-intelligence, there is no evidence that such a super-intelligent AI would ever wish to harm us". What the hell? Bentley knows about Omohundro-Bostrom theory for instrumental vs final AI goals (see my chapter in the report for a brief introduction) and how it predicts catastrophic consequences in case we fail to equip the superintelligent AI with goals that are well-aligned with human values. He knows it by virtue of having read my book Here Be Dragons (or at least he cites it and quotes it), on top of which he actually heard me present the topic at the Brussels seminar in October. Perhaps he has reasons to believe Omohundro-Bostrom theory to be flawed, in which case he should explain why. Simply stating out of the blue, as he does, that no reason exists for believing that a superintelligent AI might turn against us is deeply dishonest.
4. Bentley spends a large part of his chapter attacking the silly straw man that the mere progress of Moore's law, giving increasing access to computer power, will somehow spontaneously create superintelligent AI. Many serious thinkers speculate about an AI breakthrough, but none of them (not even Ray Kurzweil) think computer power on its own will be enough.
5. The more advanced an AI gets, the more involved the testing step of its development becomes, claims Bentley, and he goes on to argue that the amount of testing needed grows exponentially with the complexity of the situation, essentially preventing rapid development of advanced AI. His premise for this is that "partial testing is not sufficient - the intelligence must be tested on all likely permutations of the problem for its designed lifetime otherwise its capabilities may not be trustable", and to illustrate the immensity of this task he points out that if the machine's input consists of a mere 100 variables that can each take 10 values, then there are 10^100 cases to test. And for readers for whom it is not evident that 10^100 is a very large number, he writes it out in decimal. Oh please. If Bentley doesn't know that "partial testing" is what all engineering projects need to resort to, then I'm beginning to wonder what planet he comes from. Here's a piece of homework for him: calculate how many cases the developers of the latest version of Microsoft Word would have needed to test in order not to fall back on "partial testing", and how many pages would be needed to write that number in decimal. (A back-of-the-envelope sketch of this blow-up appears below, after this list.)
6. Among the four contributors to the report, Bentley is alone in claiming to be able to predict the future. He just knows that superintelligent AI will not happen. Funny, then, that even his own claim that "we are terrible at predicting the future, and almost without exception the predictions (even by world experts) are completely wrong" doesn't seem to induce so much as an iota of epistemic humility into his prophecy.
7. In the final paragraph of his chapter, Bentley reveals his motivation for writing it: "Do not be fearful of AI - marvel at the persistence and skill of those human specialists who are dedicating their lives to help create it. And appreciate that AI is helping to improve our lives every day." He is simply offended! He and his colleagues work so hard on AI, they just want to make the world a better place, and along comes a bunch of other people who have the insolence to talk about AI risks. How dare they! Well, I've got news for Bentley: the future development of AI comes with big risks, and to see that we do not even need to invoke the kind of superintelligence breakthrough that is the topic of the present discussion. There are plenty of more down-to-earth reasons to be "fearful" of what may come out of AI. One such example, which I touch upon in my own chapter in the report, is the development of AI technology for autonomous weapons, and the problem of keeping that technology out of the hands of terrorists.
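Regarding point 2 above, the content of the No Free Lunch theorem is easy to illustrate numerically. The following is a minimal sketch of my own in Python (the tiny search space and the two particular search orders are merely illustrative choices): averaged over all possible objective functions on a structureless domain, any two non-repeating search strategies need exactly the same expected number of evaluations to find a global maximum, so nothing does better than brute force.

```python
import itertools
import random

N = 6  # tiny search space, so that all 2**N objective functions can be enumerated

def evals_to_find_max(f, order):
    """Number of evaluations until the given search order first hits max(f)."""
    best = max(f)
    for k, x in enumerate(order, start=1):
        if f[x] == best:
            return k

strategy_a = list(range(N))              # scan the points left to right
strategy_b = random.sample(range(N), N)  # any other fixed, non-repeating order

total_a = total_b = n_functions = 0
for f in itertools.product([0, 1], repeat=N):  # every objective f: {0,...,N-1} -> {0,1}
    total_a += evals_to_find_max(f, strategy_a)
    total_b += evals_to_find_max(f, strategy_b)
    n_functions += 1

print("average evaluations, strategy A:", total_a / n_functions)
print("average evaluations, strategy B:", total_b / n_functions)
# The two averages coincide exactly: with no structure to exploit,
# no search strategy does better than any other.
```

The point, of course, is that real-world search problems do have structure, which is precisely what the averaging over all possible functions destroys.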
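And regarding point 5, here is a rough back-of-the-envelope sketch (mine, not Bentley's, and the parameter choices are just examples) of how quickly the demand to test "all permutations" of the input blows up, and hence why every engineering project, Microsoft Word included, has to settle for partial testing:

```python
# Exhaustive test cases for n input variables with v possible values each: v**n.

def exhaustive_cases(n_vars: int, n_values: int) -> int:
    """Input combinations to cover if 'partial testing is not sufficient'."""
    return n_values ** n_vars

for n_vars, n_values in [(3, 10), (100, 10), (1000, 2)]:
    cases = exhaustive_cases(n_vars, n_values)
    print(f"{n_vars} variables with {n_values} values each: "
          f"a {len(str(cases))}-digit number of test cases")
```

Already 100 ten-valued variables give a 101-digit number of cases, far beyond any conceivable testing budget.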
Saturday 17 March 2018
- A country's greenhouse gas emissions (like other environmental burdens) tend to increase as its population grows. If immigration to a country is large, this leads to a larger population. Since it is important to keep Swedish greenhouse gas emissions down, it follows that we should limit immigration.
- We consider that here in Sweden we can permit ourselves to appropriate a high standard of living based on CO2 emissions that are higher than what is globally sustainable, and higher than in other countries. We should, however, be extremely careful about extending this privilege to Rashid and other foreigners, for our planet cannot cope if everyone were to live the way we do.