Comments on Häggström hävdar: Vulgopopperianism
Blog author: Olle Häggström (https://www.blogger.com/profile/07965864908005378943)
13 comments, newest first. Feed retrieved 2024-03-28.

Olle Häggström — 2017-02-22 23:00:
4. I reread <a href="https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence" rel="nofollow">the piece by Deutsch</a> fairly slowly, but it didn't do magic for me. I read it as reiterating the familiar (and IMHO respectable) statement that true creativity, or a true ability to think outside the box, involves something that we do not (yet) understand or know how to formalize.

ctail (https://www.blogger.com/profile/10120443153591800477) — 2017-02-22 22:58:
I fully recognize that we both have locked-in positions, and I am sorry for not being up to date on the computer-scientist surveys. It's just that to me the superintelligence fear has strong similarities with “Jesus is our Lord and savior” – a position also supported by many intelligent people. I'll read a little of what pops up now and then, for amusement and annoyance, but as long as no new evidence turns up there is no argument that can bring me to the other side, and I can't be bothered to get fully up to date on the discussion, even though I realize this weakens my credibility.

Olle Häggström — 2017-02-22 22:27:
Thanks. Some telegraphic reactions.

1. I am not (consciously) trying to bypass anything. I honestly want to understand the superintelligence-is-silly position and am trying my best, but have so far not been successful.

2a. I appreciate the clever ironic passage <i>"...your locked in position. And you will never be able to convince me..."</i>.

2b. It is you who have a strong opinion about which of hypotheses (H1) and (H2) is true – not me! So which of us has a <i>"locked in position"</i>? (Rhetorical question.)

3. In view of the surveys reported, e.g., by <a href="https://www.technologyreview.com/s/602776/yes-we-are-worried-about-the-existential-risk-of-artificial-intelligence/" rel="nofollow">Dafoe and Russell</a>, it seems like a pretty bad misnomer to describe your position as <i>"the computer scientists' position"</i>.

ctail — 2017-02-22 20:32:
No problem; I don't expect to be able to convince you, and there is no need to reiterate the contents of your sources. I was merely trying to help you understand what Devdatt meant, since it appeared you didn't. It seems to me that your processing of counterarguments consists to a large part of elaborate theorizing that allows you to bypass them and keep your locked-in position. And you will never be able to convince me, since the fear of artificial superintelligence is so obviously unfounded from my computer-science perspective. But if you truly want to understand the computer scientists' position (or at least mine, though it seems to harmonize with that of all the computer scientists I know), I urge you to reread the Deutsch article with an open mind. Not that I think it will convince you.

Olle Häggström — 2017-02-22 17:55:
Apologies, ctail, for my unnecessarily grumpy comment 12:44. It's just that in your comment 10:24 you make three statements, and all three strike me as ungrounded and clearly wrong.

Olle Häggström — 2017-02-22 12:44:
See the references given in the blog post.

ctail — 2017-02-22 10:24:
But then the recent development of AI is irrelevant to the superintelligence discussion. The point is that current AI methods lack any mechanism from which it is reasonable to think that superintelligence might develop, and they have no bearing on whether such a philosophical breakthrough is imminent or even possible.

Olle Häggström — 2017-02-22 09:48:
I had expected Devdatt to react, swiftly and massively, in this comments section, but that does not seem to be happening. Elsewhere (on Facebook) he has, however, reacted with a long (and I'd say somewhat Trumpian) rant. To illustrate the quality of his arguments, let me give just one snapshot from it. He writes:

<i>"Simplistic arguments about AI systems rewriting themselves or continually self-improving betray a total lack of understanding of where AI currently is."</i>

Well, forgive me for saying so, but this comment betrays a total lack of understanding of what <a href="https://intelligence.org/files/IEM.pdf" rel="nofollow">such escalating-spiral-of-self-improvement arguments</a> are about. They are <i>not</i> about what is happening right now or what <i>current</i> state-of-the-art AI is capable of producing. They are about what might happen upon an AI breakthrough in the (possibly distant) future.

Olle Häggström — 2017-02-21 10:46:
Assuming you are right about <a href="http://lesswrong.com/lw/lbc/stuart_russell_ai_value_alignment_problem_must_be/" rel="nofollow">the alignment problem</a> (though I don't see why), does this have any bearing on the timing of the emergence of superintelligence? Well, maybe, if the entire AI community comes together to agree: <i>"wait guys, hold it, we haven't yet solved the alignment problem, so let's postpone launching a superintelligence until it is solved"</i>. That might happen. And then again, it might not.

Anonymous (/tonyf) — 2017-02-21 10:14:
Somewhat tangential to the post, but by the year 2100 is not only early but very early. For the problem at hand (solving the alignment problem), the year 2400 or so should probably also be considered early (though then maybe not very early)?

ctail — 2017-02-20 17:31:
A few months after reading it, I reviewed Turing's 1950 article and noted that “Lady Lovelace's objection” might, generously interpreted, be taken to be essentially the same as Deutsch's argument about creativity. In any case, Turing misses the point when he tries to sidestep the Lovelace quote by substituting “surprise” for “originate”, and he mixes in the irrelevant topic of consciousness. That tells me something about the depth of Deutsch's article: it gave me the ability to spot an error by Alan Turing! In Turing's defense, he may have had only that one sentence from Lovelace, plus the distracting comment by Hartree, to react to.

Olle Häggström — 2017-02-20 17:08:
I have read it before, but will read it again with your recommendation in mind.

ctail — 2017-02-20 16:23:
Did you read <a href="https://aeon.co/essays/how-close-are-we-to-creating-artificial-intelligence" rel="nofollow">this article by David Deutsch</a>? You may have commented on it in your book (which I haven't read), but I have never seen you refer to it directly. I think he does an extremely good job there of summing up the core argument that I believe underlies most computer scientists' views on the subject, even though we might not realize it. I didn't quite see it myself before reading that, anyway.