Comments on Häggström hävdar: The AI meeting in Brussels last week
Blog owner: Olle Häggström

mahmoud (2017-10-29 22:57):
A very big difference being that the possibility of a runaway nuclear chain reaction was perfectly well understood in 1938, and hence also the possibility of a nuclear bomb; it was just a matter of getting the technicalities right. The runaway AI reaction, by contrast, remains firmly in the sci-fi realm, and as Bentley repeatedly insisted during the meeting (much to the dismay of the blog owner, apparently), it has pretty much nothing to do with what actual AI researchers are working on, or thinking about, today.

Finally: von Neumann of course had the good sense to ask experts on chemistry and ordnance to design the explosive lenses needed.

Peter Scott (2017-10-29 14:23, http://humancusp.com/):
Probably, yes. If you wanted to advance bomb-making technology in 1938, would you ask organic chemists with experience in TNT adaptations, or particle physicists who were playing with graphite piles and had never set foot in an ordnance factory?

m (2017-10-29 12:50):
"Pinker's arguments for why we [C1] should not take seriously or [C2] publicly discuss the risk for an existential catastrophe caused by the emergence of superintelligent AGI"

Claims C1 and C2 are distinct.
And the "we" is underspecified.

If "we" means the scientific community (researchers and grant-giving scientific bodies), then Pinker's argument 1 fails to support claim C1, since science can increase super-AI risk research without doing a lot of public outreach on the topic.

Pinker is also question-begging when he assumes that super-AI risk is a smaller problem than the nuclear or climate threats.

First, the risks can be interconnected: super-AI risk could increase the nuclear threat, for example if a super AI creates online agitprop that stokes war, or intrudes into nuclear weapon control systems.

Second, super-AI risk may be the greater one, since it can pose a global, transgenerational, terminal risk (see Phil Torres, Morality, Foresight and Human Flourishing, 2017). By contrast, the nuclear and climate threats seem less likely to destroy ALL humans FOREVER. Parfit's point from the end of Reasons and Persons applies here.

Pinker's arguments 2, 3 and 4 are worryingly weak. Worryingly, because they indicate that Pinker hasn't done enough background reading to even understand the arguments he now, it seems, is about to launch a book and public lecture tour against.

mahmoud (2017-10-24 14:09):
And instead you would like to ask some futurologists who have never set foot on a farm, but have read a lot of sci-fi novels about how food in the future will come in the form of little pills?

Olle Häggström (2017-10-24 12:51):
Very good, I can accept that modified analogy.
It still works to support my point, because I believe it is plainly obvious to everyone that, in order to make sensible projections about the future of agriculture, it is a good idea to go beyond merely asking farmers what they believe.

Anonymous (2017-10-24 12:28):
Never mind, it works much better in Firefox (than Chrome).

Yuval Peres (2017-10-24 10:30):
Olle, the AI developers are analogous to farmers, not sheep. The sheep are the computers that you fear so much. The analogy breaks down there, since I cannot figure out who fears the sheep.

Anonymous (2017-10-24 10:12):
Is there any way to download the video? It keeps stopping and starting over from the beginning, whether I watch it here or via the link :(

Magnus (2017-10-24 10:10):
Ok, I give up.

Olle Häggström (2017-10-24 10:00):
Hi Yuval, glad to hear you liked my blog post!
Nuclear war, global warming and technological unemployment are all huge and pressing problems that humanity needs to address, and I would never dream of suggesting that we divert resources away from those fields and towards superintelligence risk studies (but note that none of these problems makes superintelligence risk magically go away). If we are to move resources towards such AI futurology studies, then I'd much rather take them from, say, junk food or junk TV production, or perhaps even (if you forgive me for such an example) from the study of fractional Brownian motion.

As to your rationale for not taking superintelligence risk seriously, a less tendentious study of what leading AI development experts think can be found via this article, coauthored by UC Berkeley computer scientist and AI researcher Stuart Russell: https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/. The bottom line is that AI experts are deeply divided on this issue. I think it is in general a good idea not to hold strong beliefs on issues where experts are deeply divided (unless one's own expertise in the field is extraordinary).

Let me add that I think the general obsession with what leading AI development researchers think about AI futurology is a bit exaggerated. It is not clear to me that they are the only ones with valuable insights into the topic, or even that they understand AI futurology better than AI futurologists do.
The analogous thing for the future of agriculture would be to rely narrowly on what cows and sheep think about such issues, but for some reason I never hear anyone suggest that.

Anonymous (2017-10-23 23:34):
I'm not convinced you're right about the "key difference." Here's an actual survey of the opinions of AI researchers:

https://aiimpacts.org/2016-expert-survey-on-progress-in-ai/

The big takeaway from this survey (at least for me) is that AI researchers don't have consistent mental models of what future AI development will look like. For example, asking "how long until full automation of labor is possible" versus "how long until machines can do every job just as well as a human" causes experts to give very different responses to what should be the same question.

Yuval Peres (2017-10-23 20:12):
Thanks for giving such a nice summary of Steven Pinker's argument in your blog post (http://haggstrom.blogspot.ca/…/the-ai-meeting-in-brussels-l…), since it makes clear that he is right. The public can be alarmed about at most one or two future apocalypses. If we assume that the climate and nuclear dangers are at least ten times more urgent than rogue AI (any smaller ratio makes you a climate denier :-) ), then focusing on the latter is a distraction that increases the dangers from the former.
(Especially if it is the same people who are raising the alarm, since it reduces their credibility on the more serious issues.)

One key difference between climate and nuclear fears, on the one hand, and apocalyptic AI fears, on the other, is the following: climate scientists are among the most vocal and worried about the effects of climate change, and similarly for atomic scientists and nuclear dangers. But, on average, the more knowledgeable someone is about AI (e.g. Eric Horvitz or Yann LeCun), the less he or she worries about robots trying to kill their creators; that is more a fear of people who were raised on too much science fiction.

The more substantial current dangers from AI include unemployment resulting from AI/automation, leading to nationalism, racism and wars; and biases in automated decision making that reflect or amplify the biases of the software's creators (e.g. in loan approvals or hiring decisions).