Comments on Häggström hävdar: Brett Hall tells us not to worry about AI Armageddon

Anonymous (2016-09-24, 16:55):

Isn't it that "intelligence" in a machine context is misinterpreted by extrapolating its meaning from how we humans commonly use the word?

Using the word "intelligence", rather than some other word for an agent's overall capability to plan, reason and execute the steps necessary to maximize some goal-oriented utility function, creates confusion as to whether the agent in question has a sense of motivation behind its actions (as an "intelligent" human would).

I think the idea of a machine with sufficient computing power and clever engineering to accurately find feasible general problem solutions in the vast search space of all possible solutions would be much more acceptable to the layman and the general population if we found a better word to describe the machine's general aptitude, without all the inherent bias.

Anonymous (2016-09-23, 15:38):

Having just read Bostrom's "Superintelligence" (not thoroughly, I admit), I agree with your opinion about Hall's objections to it. Still, I think there are other serious objections to be made against Bostrom's analysis of a potential intelligence explosion. Firstly, I find his discussion of the possibility that an AI could develop a moral standpoint above the human level ("moral rightness") to be extremely anthropocentric and shallow. Secondly, an AI with superintelligence has to possess creativity, which I believe must be a combination of randomly generated ideas and control systems (presumably the crucial mechanisms behind both evolution and human inventiveness). Both of these objections actually make the prognosis for controlling a potential superintelligence more pessimistic, since the lack of both predictability and "humanistic" morality, combined with superior intelligence, seems extremely dangerous. Thirdly, I have lived long enough to have heard scientists predict exponential growth leading to, for example, a catastrophic population "explosion" and global forest death (among other similar mathematically based prophecies), both of which have turned out to be exaggerated, to say the least. In reality, an approximately logistic curve is the common outcome, something Bostrom himself seems to put some belief in, judging by his Figure 7. However, I cannot find him pondering this potential outcome in the text. This third objection makes me more optimistic than Bostrom, but the first two objections make me think that if a superintelligence actually appears, humans will have no greater chance of stopping it from dominating their race than the Neanderthals had of stopping Homo sapiens, with its non-Neanderthal morals and (from the Neanderthal point of view) unpredictable inventiveness, from driving them out of competition.

Fourthly, I think we can probably postpone the development of superintelligence, if such a thing is possible, but I deeply doubt that we can block it in the long run. And honestly, are there any reasons, other than crypto-racial ones, to preserve human domination of this planet?

Björn S