Tuesday, September 30, 2014

An invariance principle

Among those of us who do not sympathize with the Sweden Democrats, there is much talk these days about how we ought to relate to them. Are they socially acceptable? Should they be treated like any other party in the Riksdag? Or are their views of such a kind that we should not touch them even with a ten-foot pole?

My intention with this blog post is not to put my foot down on these questions. Instead, I have a point to make about what kinds of considerations I think should (and should not) be taken into account when answering them. These days one often hears views along the lines of "Now that they have actually received almost 13% in a parliamentary election (and even hold the post of second deputy speaker of the Riksdag), it is high time to treat them with respect and like any other party". Such a view is, in my opinion, irrelevant.

In physics, one speaks of invariance principles or conservation laws, which state how some given quantity must, under certain circumstances, remain constant: well-known examples are the principle of conservation of energy and the law of conservation of momentum. In analogy with this, I wish (though fully aware that the analogy is not perfect, not least since I am crossing the boundary between facts and values) to formulate the following political invariance principle:
    Given the policies a party pursues, its ideology, and the views its representatives express, the respect and the social acceptability we ought to accord the party is constant, independent of how many supporters it has.
Suppose, hypothetically, that I am given access to the party programs, ideological documents, and statements and pronouncements from representatives of two parties A and B previously unknown to me. Suppose that, on the basis of this information, I feel that the two parties stand for equally reprehensible ideologies and views, and that both lie just beyond my limit for what I find respectable, socially acceptable and possible to negotiate with. Suppose I then learn that party A has the support of 0.001% of the Swedish electorate, while party B has the support of nearly 13%. If, under those circumstances, I decide to maintain my hard line against party A, while softening towards party B and saying that "such a party must after all be respected and talked to like any other, anything else would be undemocratic", then I make myself a spineless coward and a turncoat bending with the wind.1 A bit as if I were a junior high school teacher addressing my class with the following message:
    Hey there Kalle, take off that cap marked Ljungskile SK at once, we do in fact have a cap ban here in the classroom! Surely we must have a bit of order around here! You ten or so youngsters sitting there in Hammarby caps may keep yours on, however, for you represent a view so widespread that it must be respected.

Footnote

1) My invariance principle is an ideal. I do not wish to be so self-aggrandizing as to claim that I would never ever lapse into coat-turning. It is of course pleasant to imagine that in Nazi Germany one would have acted as uprightly as August Landmesser, but how I would actually behave under such extreme circumstances (may I never be subjected to them!) I cannot know for certain.

Sunday, September 28, 2014

How we can avoid fooling ourselves

Come and listen to my talk
    How we can avoid fooling ourselves: mathematical statistics as a correction of miscalibrations in the human brain1
on Wednesday next week at 12:00! The venue is Sällskapsrum Birgit Thilander in Academicum, Medicinaregatan 3 in Gothenburg.


Footnote

1) Coward that I am, I didn't dare use Johan Wästlund's suggested alternative (and largely synonymous, but a bit snappier) title:
    Why I am always right and how you can repair your brains

Tuesday, September 23, 2014

Higher education - an unreasonable demand on the taxpayers?

On DN Debatt today, professor Bo Becker of the Stockholm School of Economics writes about the brain gain that the US and Australia obtain from their leading universities attracting top foreign students, and he worries about how Swedish institutions of higher education will cope with the competition. His answer is...

TUITION FEES!

Yes, you read that right: tuition fees! It is with tuition fees that we are supposed to attract foreign students.

To be a little fair to Becker, it is probably not the tuition fees themselves that he believes will attract the foreign students, but high quality in education.1 And quality costs. Becker asks rhetorically how we are to afford high quality in universities and colleges "without making unreasonable demands on the taxpayers". My answer is simple: higher education should continue to be tax-funded. This is not an "unreasonable demand on the taxpayers", but a highly reasonable one, which we have so far agreed on (except perhaps for the odd member of the Swedish Taxpayers' Association - whether Becker carries a membership card there I will not speculate).

It would be most unfortunate if we began to erode the principle that higher education is to be funded through taxes and thus be free of charge for the student.2 Becker does speak of proceeding cautiously: we should start with a cap of "10,000 kronor per semester", and "those institutions that so wish may charge lower fees" (oh thank you dear dear generous good-guy Becker for so graciously granting us that possibility!). Nevertheless, his proposal would mean driving a wedge into the old system, created to give our young people fair access to higher education, so that it does not become an exclusive privilege for those who come from wealthy homes with academic traditions.

There are other things in Becker's account I dislike as well. He speaks, for instance, of the need for incentives (or, as he puts it, a "carrot") for us at the universities to do a good job, and he sees the fees as an economic one. Well, I've got news for you, Bo Becker: we already have plenty of incentives. Above all, the incentives consist in the professional pride that we university teachers feel and that drives us to do our best (exceptions exist, of course, but whether Becker is among them I will not speculate). We even have economic incentives, in case Becker has gotten the idea that only such incentives work, for the tax money we receive for our teaching does not come without conditions. It would also be good if Becker acquainted himself with what psychological research says about what economic incentives do to our motivation.

Footnotes

1) That quality problems exist here and there in the Swedish university system I will not dispute, but Becker's argument that this is the case is dismayingly weak. He writes that "no Swedish university except the Karolinska Institute is nowadays ranked among the world's top 100", but this is not an indication that the quality of Swedish higher education is low, but rather that it is high.

Since this Beckerian argument is a common one (I have heard it from professors and vice-chancellors alike), I take the liberty of explaining how things actually stand. The world's population is roughly 7 billion, which means that if the world's 100 leading universities were evenly spread out, there would be one per 70 million inhabitants. A country like Sweden, with about 9.5 million inhabitants, would thus be expected to have 0.14 such universities. The actual figure is instead 1, according to whichever ranking it is that Becker relies on (he does not state his source), which must reasonably be considered a good outcome. (There are plenty of university rankings, and the outcomes vary, but it may be worth mentioning in this context that according to two of the best known - both the Shanghai ranking and the Times Higher Education ranking (mentioned earlier this year on this blog) - we have no fewer than 3 universities in the top 100, which is a fantastically good figure considering what a small country we are!)
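Spelled out as a formula (a back-of-the-envelope restatement of the round figures just given, nothing more), the expected number of top-100 universities in a country of Sweden's size is

    \[ 100 \times \frac{9.5 \times 10^{6}}{7 \times 10^{9}} \approx 0.14, \]

so an actual count of 1 exceeds the per-capita expectation roughly seven-fold, and a count of 3 roughly twenty-fold.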

2) Strictly speaking, we have already begun to erode that principle, through the fees for non-European students introduced in 2011. But these should be abolished, rather than, as Becker proposes, continuing further down this unfortunate road.

Friday, September 19, 2014

Superintelligence odds and ends: index page

Nick Bostrom's Superintelligence: Paths, Dangers, Strategies is such an important, interesting and thought-provoking book that it has taken me several blog posts to comment on it. Here, to help the reader find her way in my writings on this topic, I provide a list of links to these posts, plus a few others.

After two initial mentions of Bostrom's book when it had just been released in July this year, I posted my review of the book on September 10. I then quickly followed up the review with a sequence of five blog posts with further comments on the book, under the joint heading Superintelligence odds and ends.

That exhausts, for the time being, my list of blog posts devoted explicitly to Bostrom's Superintelligence, but I have a large number of further blog posts that treat the same or closely related topics as his book. For those readers who, due to their weak or non-existent knowledge of Swedish, feel prevented from reading some of these posts, perhaps Google Translate can provide some assistance. Its translations are neither beautiful nor perfectly accurate, but in many cases they can help readers identify the gist of a blog post.

Thursday, September 18, 2014

Superintelligence odds and ends V: What is an important research accomplishment?

My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the last in a series of five in which I offer various additional comments on the book (here is an index page for the series).

*

The following two-sentence paragraph, which opens Chapter 15 of Superintelligence, is likely to anger many of my mathematician colleagues.
    A colleague of mine likes to point out that a Fields Medal (the highest honor in mathematics) indicates two things about the recipient: that he was capable of accomplishing something important, and that he didn't. Though harsh, the remark hints at a truth.

At this point, I urge the angry mathematicians reading this not to stop reading, and not to conclude that Bostrom is a jackass and/or a moron unworthy of further attention. There is more to his "harsh" position than first meets the eye. And less, because to those of us who continue reading, it quickly becomes clear that he is not saying that the mathematical results discovered by some or all Fields Medalists are unimportant. Instead, he has two interesting and original points to make about research in mathematics (and in other disciplines), one general, and one more concrete. The general point is that the value of the discovery of a result is not equal to the value of the result itself, but rather the value of how much earlier we, as a consequence of the discovery, learned the result compared to what would have been the case without that particular discovery.1 The more concrete point is that even if deep and ground-breaking results in pure mathematics are valuable in themselves (as opposed to whatever scientific or engineering applications may eventually grow out of them), then there may be a vastly more efficient way to advance mathematics (compared to what the typical Fields Medalist engages in), namely to contribute to the development of AI or of transhumanistic technologies for enhancement of human cognitive capacities, in order for the next generation of mathematicians (made of flesh-and-blood or of silicon) to be in a vastly better position to make even deeper and even more ground-breaking discoveries. Here's how Bostrom explains his position:
    Think of a "discovery" as an act that moves the arrival of information from a later point in time to an earlier time. The discovery's value does not equal the value of the information discovered but rather the value of having the information available earlier than it otherwise would have been. A scientist or a mathematician may show great skill by being the first to find a solution that has eluded many others; yet if the problem would soon have been solved anyway, then the work probably has not much benefited the world. There are cases in which having a solution even slightly sooner is immensely valuable, but this is more plausible when the solution is immediately put to use, either by being deployed for some practical end or serving as the foundation to further theoretical work. And in the latter case [...] there is great value in obtaining the solution slightly sooner only if the further work it enables is itself both important and urgent.

    The question, then, is [...] whether it was important that the medalist enabled the publication of the result to occur at an earlier date. The value of this temporal transport should be compared to the value that a world-class mathematical mind could have generated by working on something else. At least in some cases, the Fields Medal might indicate a life spent solving the wrong problem - perhaps a problem whose allure consisted primarily in being famously difficult to solve.

    Similar barbs could be directed at other fields, such as academic philosophy. Philosophy covers some problems that are relevant to existential risk mitigation - we encountered several in this book. Yet there are also subfields within philosophy that have no apparent link to existential risk or indeed any practical concern. As with pure mathematics, some of the problems that philosophy studies might be regarded as intrinsically important, in the sense that humans have reason to care about them independently of any practical application. The fundamental nature of reality, for instance, might be worth knowing about, for its own sake. The world would arguably be less glorious if nobody studied metaphysics, cosmology, or string theory. However, the dawning prospect of an intelligence explosion shines a new light on this ancient quest for wisdom.

    The outlook now suggests that philosophic progress can be maximized via an indirect path rather than by immediate philosophizing. One of the many tasks on which superintelligence (or even just moderately enhanced human intelligence) would outperform the current cast of thinkers is in answering fundamental questions in science and philosophy. This reflection suggests a strategy of deferred gratification. We could postpone work on some of the eternal questions for a little while, delegating that task to our hopefully more competent successors - in order to focus our own attention on a more pressing challenge: increasing the chance that we will actually have competent successors. This would be high-impact philosophy and high-impact mathematics.

If this is not enough to calm down those readers feeling anger on behalf of mathematics and mathematicians, Bostrom furthermore offers the following conciliatory footnote:
    I am not suggesting that nobody should work on pure mathematics or philosophy. I am also not suggesting that these endeavors are especially wasteful compared to all the other dissipations of academia or society at large. It is probably very good that some people can devote themselves to the life of the mind and follow their intellectual curiosity wherever it leads, independent of any thought of utility or impact. The suggestion is that at the margin, some of the best minds might, upon realizing that their cognitive performance may become obsolete in the foreseeable future, want to shift their attention to those theoretical problems for which it makes a difference whether we get the solution a little sooner.
The view of research priorities and the value of mathematical, philosophical and scientific progress that Bostrom offers in the above passages may seem provocative at first, but on second reflection it strikes me as wise and balanced. Are there any aspects of this issue he has failed to take into account? Of course there are, but the question should be whether there are any such aspects that are sufficiently relevant to overthrow his conclusion. Here's the best one I can come up with for the moment:

Perhaps the main value of a mathematical discovery lies not in the result itself, but in the process leading up to the discovery, and perhaps it is important that the cognitive work is done by an ordinary human rather than an enhanced human or some super-AI. Well, a bit of enhancement is OK - many years of education, plus some caffeine - but anything much beyond that reduces the value of the discovery significantly.

Something along those lines. But, honestly, doesn't it sound arbitrary, artificial, and more than a little anthropochauvinistic? It is certainly not an argument with which the mathematical community can hope to convince taxpayers to support research in mathematics. Perhaps some similar argument might work for music or for literature, as the audience might have a preference for songs or novels they know are written by ordinary humans rather than by some superintelligence.2 But the case is very different for mathematics, because the population of people who can appreciate and enjoy, say, Wiles' proof of Fermat's Last Theorem or Perelman's proof of the Poincaré conjecture, is very small and consists almost exclusively of professional mathematicians. So using the argument for mathematics comes very close to asking taxpayers to support mathematical research because it is enjoyable to mathematicians.

The process-more-important-than-result objection fails to convince. All in all, I think that Bostrom's new perspective on the value of research findings, although of course not the only valid viewpoint, is very much worth putting on the table when discussing priorities regarding which research areas to fund.3

Footnotes

1) This notion of the value of a discovery is not entirely unproblematic, however. Consider the case of my friends Svante Linusson and Johan Wästlund, and their solution to the famous problem of proving Parisi's conjecture. On the very same day that they announced their result, another group, consisting of Chandra Nair, Balaji Prabhakar and Mayank Sharma, announced that they had achieved the same thing (using a different approach). For the sake of the argument, let us make the following simplifying assumptions:
    (a) the two works and their timings were independent (almost true),

    (b) there is no extra value in having the two different proofs of the result compared to having just one (plain false),

    (c) without the two works, it would have taken another ten years for the scientific community to come up with a proof of Parisi's conjecture (pure speculation on my part).

With these assumptions, Bostrom's way of attaching value to research discoveries has some strange consequences. The work of Linusson and Wästlund is deemed worthless (because in view of the Nair-Prabhakar-Sharma paper, they did not accelerate the proof of Parisi's conjecture). Similarly and symmetrically, the Nair-Prabhakar-Sharma paper is deemed worthless. Yet, Bostrom has to accept that the two papers, taken together, are valuable, because they gave us a proof of Parisi's conjecture ten years earlier than what would have been the case without them.

Such superadditivity of values is not unusual. A hot dog on its own may be worthless to me, and the same may go for a bun, but together they constitute a highly delicious and valuable meal. But the Linusson-Wästlund and the Nair-Prabhakar-Sharma papers, exhibiting the same superadditivity, still do not fit the hot-dog-and-bun pattern, because unlike the hot dog and the bun, each of the papers contains, on its own, the whole thing we value (the early arrival of the proof of Parisi's conjecture). Strange.
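To see the oddity at a glance, here is a minimal sketch in Python (my own illustration, not Bostrom's; the relative time scale and the function names are invented, and the simplifying assumptions (a)-(c) above are built in):

    # Toy model of Bostrom's counterfactual valuation of a discovery:
    # a paper's value is how many years earlier the result became known,
    # compared to a hypothetical world without that particular paper.

    def arrival_time(papers):
        """Relative year at which a proof of Parisi's conjecture arrives.
        Any single paper suffices (assumption (b)); with neither paper,
        the proof takes ten more years (assumption (c))."""
        return 0 if papers else 10

    def marginal_value(paper, all_papers):
        """Years of acceleration attributable to this paper alone."""
        without = [p for p in all_papers if p != paper]
        return arrival_time(without) - arrival_time(all_papers)

    papers = ["Linusson-Wastlund", "Nair-Prabhakar-Sharma"]
    for p in papers:
        print(p, marginal_value(p, papers))  # prints 0 for each paper

    print("jointly:", arrival_time([]) - arrival_time(papers))  # prints 10

Each paper's marginal value comes out as zero, yet the pair jointly buys ten years: precisely the superadditivity described above.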

2) And chess. As a chess amateur, I enjoy studying the games of world champions and other grandmasters. For more than a decade, there have been computer programs that play clearly better chess than the very best human chess players. And yet, I do not find even remotely the same thrill in studying games between these programs, compared to those played between humans.

3) It will be interesting to see how this statement will be received by my friends and colleagues in the mathematics community. My hope and my belief is that the position I'm endorsing will be appreciated for its nuances and recognized as a point of view that merits discussion. But I am not certain about this. If worst comes to worst, my statement will be widely condemned and perhaps even mark the end of a 15-or-so-year period during which I have received a steady stream of invitations and requests to take on various positions of trust in which I am expected to defend the interests of research mathematics. I would not welcome such a scenario, but I much prefer it to one in which I refrain from speaking openly on important issues.

Tuesday, September 16, 2014

Superintelligence odds and ends IV: Geniuses working on the control problem

My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the fourth in a series of five in which I offer various additional comments on the book (here is an index page for the series).

*

The topic of Bostrom's Superintelligence is dead serious: the author believes the survival and future of humanity is at stake, and he may well be right. He treats the topic with utmost seriousness. Yet, his subtle sense of humor surfaces from time to time, detracting nothing from his serious intent, but providing bits of enjoyment for the reader. Here I wish to draw attention to a footnote which I consider a particularly striking example of Bostrom's way of exhibiting a slightly dry humor at the same time as he means every word he writes. What I have in mind is Footnote 10 in the book's Chapter 14, p 236. The context is a discussion on whether it improves or worsens the odds of a favorable outcome of an AI breakthrough with a fast takeoff (a.k.a. the Singularity) if, prior to that, we have performed transhumanistic cognitive enhancement of humans. As usual, there are pros and cons. Among the pros, Bostrom suggests that improved cognitive skills may make it easier for individual researchers as well as society as a whole to recognize the crucial importance of what he calls the control problem, i.e., the problem of how to turn an intelligence explosion into a controlled detonation with consequences that are in line with human values and favorable to humanity. And here's the footnote:
    Anecdotally, it appears those currently seriously interested in the control problem are disproportionately sampled from one extreme end of the intelligence distribution, though there could be alternative explanations of this impression. If the field becomes fashionable, it will undoubtedly be flooded with mediocrities and cranks.
The community of researchers currently working seriously on the control problem is very small - if their head count even reaches the realm of two-digit numbers, it is not by much. Bostrom is one of its two most well-known members; the other is Eliezer Yudkowsky. I'd judge both of them to have cognitive capacities fairly far into the high end of "the intelligence distribution" (and I imagine myself to be in a reasonable position to calibrate - as a research mathematician, I know a fair number of people (including Fields Medalists) in various parts of that high end). Bostrom is undoubtedly aware of his own unusual talents, as well as of the strong social norm saying that one should not talk about one's own high intelligence, yet his devotion to honest unbiased matter-of-fact presentation of what he perceives as the truth (always with uncertainty bars) leads him in this case to override the social norm.

I like that kind of honesty, even though it carries with it a nonnegligible risk of antagonizing others. Yudkowsky, in fact, has been known for going far - much further than Bostrom does here - in speaking openly about his own cognitive talents. And he does receive a good deal of shit for that, such as in Alexander Kruel's recent blog post devoted to what he considers to be "Yudkowsky’s narcissistic tendencies".

All this makes the footnote multi-layered in a humorous kind of way. I also think the footnote's final sentence about what happens "if the field becomes fashionable" carries with it a nice touch of humor. Bostrom has a fairly extreme propensity to question premises and conclusions, he is well aware of this, and I do think this last sentence (which points out a downside to what is clearly a main purpose of the book - namely to draw attention to the control problem) is written with a wink to that propensity.

Monday, September 15, 2014

Superintelligence odds and ends III: Political reality and second-guessing

My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the third in a series of five in which I offer various additional comments on the book (here is an index page for the series).

*

A breakthrough in AI leading to a superintelligence would, as Bostrom underlines in his book, be a terribly dangerous thing. Among many other aspects and considerations, he discusses whether our chances of surviving such an event are better if technological progress in this area speeds up or is slowed down, and this turns out to be a complicated and far from straightforward issue. On balance, however, I tend to think that in most cases we're better off with a slower progress towards an AI breakthrough.

Yet, in recent years I've participated in a couple of projects (with Claes Strannegård) ultimately aimed at creating an artificial general intelligence (AGI); see, e.g., this paper and this one. Am I deliberately worsening humanity's survival chances in order to do work I enjoy or to promote my academic career?

That would be bad, but I think what I'm doing is actually defensible. I might of course be deluding myself, but what I tell myself is this: The problem is not so much the speed of progress towards AGI itself, but rather the ratio between this speed and the speed at which we make concrete progress on what Bostrom calls the control problem, i.e., the problem of figuring out how to make sure that a future intelligence explosion becomes a controlled detonation with benign consequences for humanity. Even though the two papers cited in the previous paragraph show no hint of work on the control problem, I do think that in the slightly longer run it is probably on balance beneficial if, through my involvement in AI work and participation in the AI community, I improve the (currently dismally low) proportion of AI researchers caring about the control problem - both through my own head count of one, and by influencing others in the field. This is in line with a piece of advice recently offered by philosopher Nick Beckstead: "My intuition is that any negative effects from speeding up technological development in these areas are likely to be small in comparison with the positive effects from putting people in place who might be in a position to influence the technical and social context that these technologies develop in."

On p 239 of Superintelligence, Bostrom outlines an alternative argument, borrowed from Eric Drexler, that I might use to defend my involvement in AGI research:
    1. The risks of X are great.
    2. Reducing these risks will require a period of serious preparation.
    3. Serious preparation will begin only once the prospect of X is taken seriously by broad sectors of society.
    4. Broad sectors of society will take the prospect of X seriously only once a large research effort to develop X is underway.
    5. The earlier a serious research effort is initiated, the longer it will take to deliver (because it starts from a lower level of pre-existing enabling technologies).
    6. Therefore, the earlier a serious research effort is initiated, the longer the period during which serious preparation will be taking place, and the greater the reduction of the risks.
    7. Therefore, a serious research effort toward X should be initiated immediately.
Thus, in Bostrom's words, "what initially looks like a reason for going slow or stopping - the risks of X being great - ends up, on this line of thinking, as a reason for the opposite conclusion." The context in which he discusses this is the complexity of political reality, where, even if we figure out what needs to be done and go public with it, and even if our argument is watertight, we cannot take for granted that our proposal will be implemented. Any idea we have arrived at concerning the best way forward...
    ...must be embodied in the form of a concrete message, which is entered into the arena of rhetorical and political reality. There it will be ignored, misunderstood, distorted, or appropriated for various conflicting purposes; it will bounce around like a pinball, causing actions and reactions, ushering in a cascade of consequences, the upshot of which need bear no straightforward relationship to the intentions of the original sender. (p 238)
In such a "rhetorical and political reality" there may be reason to send not the message that most straightforwardly and accurately describes what's on our mind, but rather the one that we, after careful strategic deliberation, consider most likely to trigger the responses we're hoping for. The 7-step argument about technology X is an example of such second-guessing.

I feel very uneasy about this kind of strategic thinking. Here's my translation of what I wrote in a blog post in Swedish earlier this year:
    I am very aware that my statements and my actions are not always strategically optimal [and I often do this deliberately]. I am highly suspicious of too much strategic thinking in public debate, because if everyone just says what he or she considers strategically optimal to say, as opposed to offering their true opinions, then we'll eventually end up in a situation where we can no longer see what anyone actually thinks is right. To me that is a nightmare scenario.
Bostrom has similar qualms:
    There may [...] be a moral case for de-emphasizing or refraining from second-guessing moves. Trying to outwit one another looks like a zero-sum game - or negative-sum, when one considers the time and energy that would be dissipated by the practice as well as the likelihood that it would make it generally harder for anybody to discover what others truly think and to be trusted when expressing their own opinions. A full-throttled deployment of the practices of strategic communication would kill candor and leave truth bereft to fend for herself in the backstabbing night of political bogeys. (p 240)

Saturday, September 13, 2014

Superintelligence odds and ends II: The Milky Way preserve

My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the second in a series of five in which I offer various additional comments on the book (here is an index page for the series).

*

Concerning the crucial problem of what values we should try to instill into an AI that may turn into a superintelligence, Bostrom discusses several approaches. In part I of this Superintelligence odds and ends series I focused on Eliezer Yudkowsky's so-called coherent extrapolated volition, which Bostrom holds forth as a major option worthy of further consideration. Today, let me focus on the alternative that Bostrom calls moral rightness, and introduces on p 217 of his book. The idea is that a superintelligence might be successful at the task (where we humans have so far failed) of figuring out what is objectively morally right. It should then take objective morality to heart as its own values.1,2

Bostrom sees a number of pros and cons of this idea. A major concern is that objective morality may not be in humanity's best interest. Suppose for instance (not entirely implausibly) that objective morality is a kind of hedonistic utilitarianism, where "an action is morally right (and morally permissible) if and only if, among all feasible actions, no other action would produce a greater balance of pleasure over suffering" (p 219). Some years ago I offered a thought experiment to demonstrate that such a morality is not necessarily in humanity's best interest. Bostrom reaches the same conclusion via a different thought experiment, which I'll stick with here in order to follow his line of reasoning.3 Here is his scenario:
    The AI [...] might maximize the surfeit of pleasure by converting the accessible universe into hedonium, a process that may involve building computronium and using it to perform computations that instantiate pleasurable experiences. Since simulating any existing human brain is not the most efficient way of producing pleasure, a likely consequence is that we all die.
Bostrom is reluctant to accept such a sacrifice for "a greater good", and goes on to suggest a compromise:
    The sacrifice looks even less appealing when we reflect that the superintelligence could realize a nearly-as-great good (in fractional terms) while sacrificing much less of our own potential well-being. Suppose that we agreed to allow almost the entire accessible universe to be converted into hedonium - everything except a small preserve, say the Milky Way, which would be set aside to accommodate our own needs. Then there would still be a hundred billion galaxies devoted to the maximization of pleasure. But we would have one galaxy within which to create wonderful civilizations that could last for billions of years and in which humans and nonhuman animals could survive and thrive, and have the opportunity to develop into beatific posthuman spirits.

    If one prefers this latter option (as I would be inclined to do) it implies that one does not have an unconditional lexically dominant preference for acting morally permissibly. But it is consistent with placing great weight on morality. (p 219-220)

What? Is it? Is it "consistent with placing great weight on morality"? Imagine Bostrom in a situation where he does the final bit of programming of the coming superintelligence, to decide between these two worlds, i.e., the all-hedonium one versus the all-hedonium-except-in-the-Milky-Way-preserve.4 And imagine that he goes for the latter option. The only difference it makes to the world is to what happens in the Milky Way, so what happens elsewhere is irrelevant to the moral evaluation of his decision.5 This may mean that Bostrom opts for a scenario where, say, 10^24 sentient beings will thrive in the Milky Way in a way that is sustainable for trillions of years, rather than a scenario where, say, 10^45 sentient beings will be even happier for a comparable amount of time. Wouldn't that be an act of immorality that dwarfs all other immoral acts carried out on our planet, by many many orders of magnitude? How could that be "consistent with placing great weight on morality"?6

Footnotes

1) It may well turn out (as I am inclined to believe) that no objective morality exists or that the notion does not make sense. We may instruct the AI to, in case it discovers that to be the case, shut itself down or to carry out some other default action that we have judged to be harmless.

2) A possibility that Bostrom does not consider is that perhaps any sufficiently advanced superintelligence will do this of its own accord, i.e., it will discover objective morality and go on to act upon it. Perhaps there is some, yet unknown, principle of nature that dictates that any sufficiently intelligent creature will do so. In my experience, many people who are not used to thinking about superintelligence the way, e.g., Bostrom and Yudkowsky do, suggest that something like this might be the case. If I had to make a guess, I'd say this is probably not the case, but on the other hand it doesn't seem so implausible as to be ruled out. It would contradict Bostrom's so-called orthogonality thesis (introduced in Chapter 7 of the book and playing a central role in much of the rest of the book), which says (roughly) that almost any values are compatible with arbitrarily high intelligence. It would also contradict the principle of goal-content integrity (also defended in Bostrom's Chapter 7), stating (again roughly) that any sufficiently advanced intelligence will act to conserve its ultimate goal and value function. While I do think both the orthogonality thesis and the goal-content integrity principle are plausible, they have by no means been deductively demonstrated, and either of them (or both) might simply be false.7

3) A related scenario is this: Suppose that the AI figures out that hedonistic utilitarianism is the objectively true morality, and that it also figures out that any sentient being always comes out negatively on its "pleasure minus suffering" balance, so that the world's grand total of "pleasure minus suffering" will always sum up to something negative, except in the one case where there are no sentient creatures at all in the world. This one case of course yields a balance of zero, which turns out to be optimal. Such an AI would proceed to do its best to exterminate all sentient beings in the world.

But could such a sad statement about the set of possible "pleasure minus suffering" balances really be true? Well, why not? I am well aware that many people (including myself) report being mostly happy, and experiencing more pleasure than suffering. But are such reports trustworthy? Mightn't evolution have shaped us into having highly delusional views about our own happiness? I don't see why not.

4) For some hints about the kinds of lives Bostrom hopes we might live in this preserve, I recommend his 2006 essay Why I want to be a posthuman when I grow up.

5) Bostrom probably disagrees with me here, because his talk of "nearly-as-great good (in fractional terms)" suggests that the amount of hedonium elsewhere has an impact on what we can do in the Milky Way while still acting in a way "consistent with placing great weight on morality". But maybe such talk is as misguided as it would be (or so it seems) to justify murder with reference to the fact that there will still be over 7 billion other humans remaining alive and well?

6) I'm not claiming that if it were up to me, rather than Bostrom, I'd go for the all-hedonium option. I do share his intuitive preference for the all-hedonium-except-in-the-Milky-Way-preserve option. I don't know what I'd do under these extreme circumstances. Perhaps I'd even be seduced by the "in fractional terms" argument that I condemned in Footnote 5. But the issue here is not what I would do, or what Bostrom would do. The issue is what is "consistent with placing great weight on morality".

7) For a very interesting critique of the goal-content integrity principle, see Max Tegmark's very recent paper Friendly Artificial Intelligence: the Physics Challenge and his subsequent discussion with Eliezer Yudkowsky.

Friday, September 12, 2014

Superintelligence odds and ends I: What if human values are fundamentally incoherent?

My review of Nick Bostrom's important book Superintelligence: Paths, Dangers, Strategies appeared first in Axess 6/2014, and then, in English translation, in a blog post here on September 10. The present blog post is the first in a series of five in which I offer various additional comments on the book (here is an index page for the series).

*

The control problem, in Bostrom's terminology, is the problem of turning the intelligence explosion into a controlled detonation, with benign consequences for humanity. The difficulties seem daunting, and certainly beyond our current knowledge and capabilities, but Bostrom does a good job (or seemingly so) of systematically partitioning the problem into subtasks. One such subtask is to work out what values to instill in the AI that we expect to become our first superintelligence. What should it want to do?

Giving an explicit collection of values that does not admit what Bostrom calls perverse instantiation seems hard or undoable.1 He therefore focuses mainly on various indirect methods, and the one he seems most inclined to tentatively endorse is Eliezer Yudkowsky's so-called coherent extrapolated volition (CEV):
    Coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.
The language here is, as Yudkowsky admits, a bit poetic. See his original paper for a careful explication of all the involved concepts. The idea is that we would like to have the AI take on our values, but for various reasons (we do not agree with each other, our values are confused and incoherent, many of us are jerks or jackasses, and so on) it is better that the AI works a bit more on our actual values to arrive at something that we would eventually recognize as better and more coherent.

CEV is an interesting idea. Maybe it can work, maybe it can't, but it does seem to be worth thinking more about. (Yudkowsky has thought about it hard for a decade now, and has attracted a sizeable community of followers.) Here's one thing that worries me:

Human values exhibit, at least on the surface, plenty of incoherence. That much is hardly controversial. But what if the incoherence goes deeper, and is fundamental in such a way that any attempt to untangle it is bound to fail? Perhaps any search for our CEV is bound to lead to more and more glaring contradictions? Of course any value system can be modified into something coherent, but perhaps not every value system can be so modified without sacrificing some of its most central tenets? And perhaps human values have that property?

Let me offer a candidate for what such a fundamental contradiction might consist in. Imagine a future where all humans are permanently hooked up to life-support machines, lying still in beds with no communication with each other, but with electrodes connected to the pleasure centers of our brains in such a way as to constantly give us the most pleasurable experiences possible (given our brain architectures). I think nearly everyone would attach a low value to such a future, deeming it absurd and unacceptable (thus agreeing with Robert Nozick). The reason we find it unacceptable is that in such a scenario we no longer have anything to strive for, and therefore no meaning in our lives. So we want instead a future where we have something to strive for. Imagine such a future F1. In F1 we have something to strive for, so there must be something missing in our lives. Now let F2 be similar to F1, the only difference being that that something is no longer missing in F2, so almost by definition F2 is better than F1 (because otherwise that something wouldn't be worth striving for). And as long as there is still something worth striving for in F2, there's an even better future F3 that we should prefer. And so on. What if any such procedure quickly takes us to an absurd and meaningless scenario with life-support machines and electrodes, or something along those lines? Then no future will be good enough for our preferences, so not even a superintelligence will have anything to offer us that aligns acceptably with our values.2
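In slightly more formal terms (my own restatement of the worry above, not Yudkowsky's or Bostrom's), let \(\succ\) denote our extrapolated preference order over futures, and let \(S\) be the set of futures in which we still have something to strive for. The argument claims that

    \[ \forall F \in S \;\; \exists F' : F' \succ F, \]

so that no future in \(S\) is maximal under \(\succ\), while the futures outside \(S\) (those with nothing left to strive for) are precisely the ones we reject as absurd. If both claims survive extrapolation, then \(\succ\) has no acceptable maximal element for a superintelligence to aim at.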

Now, I don't know how serious this particular problem is. Perhaps there is some way to gently circumvent its contradictions. But even then, there might be some other fundamental inconsistency in our values - one that cannot be circumvented. If that is the case, it will throw a spanner in the works of CEV. And perhaps not only for CEV, but for any serious attempt to set up a long-term future for humanity that aligns with our values, with or without a superintelligence.

Footnotes

1) Bostrom gives many examples of perverse instantiations. Here's one: If we instill the value "make us smile", the AI might settle for paralyzing human facial musculatures in such a way that we go around endlessly smiling (regardless of mood).

2) And what about replacing "superintelligence" with "God" in this last sentence? I have often mocked Christians for their inability to solve the problem of evil: with an omnipotent and omnibenevolent God, how can there still be suffering in the world? Well, perhaps here we have stumbled upon an answer, and upon God's central dilemma. On one hand, he cannot go for a world of eternal life-support and electrical stimulation of our pleasure centers, because that would leave us bereft of any meaning in our lives. On the other hand, the alternative of going for one of the suboptimal worlds such as F1 or F2 would leave us complaining. In such a world, in order for us to be motivated to do anything at all, there must be some variation in our level of well-being. Perhaps it is the case that no matter how high that level is in general, we will perceive any dip in well-being as suffering, and be morally outraged at the idea of a God who allows it. But he still had to choose some such level, and here we are.

(Still, I think there is something to be said about the level of suffering God chose for us. He opted for the Haiti 2010 earthquake and the Holocaust, when he could have gone for something on the level of, say, the irritation of a dust speck in the eye. How can we not conclude that this makes him evil?)

Thursday, September 11, 2014

On weather and climate in Upsala Nya Tidning

Upsala Nya Tidning's track record when it comes to opinion pieces on the climate issue is far from uniformly glorious and brilliant. One might perhaps think that the 2009 article by Lennart Bengtsson headlined "Växthusgasernas inverkan är ringa" ("The influence of greenhouse gases is minor"), which I mentioned during the Bengtsson turbulence this spring, would be the absolute low-water mark, but the fact is that there are even worse examples, such as Wibjörn Karlén's opinion piece from 2009, and Sten Kaijser's from 2011.

Today, however, UNT's readers are treated to a substantial improvement, in the form of a text on the relationship between weather and climate, headlined "Sårbart samhälle ingen valfråga?" ("A vulnerable society not an election issue?"), signed by yours truly together with Mikael Karlsson, president of the European Environmental Bureau (and formerly of the Swedish Society for Nature Conservation), and the meteorologist Pär Holmgren. This is how it begins:
    The weather prevailing on a given day in a given place can be described as a small piece in a large puzzle which, in its entirety, gives a picture of the climate.

    The weather thus plays out within the frame of the climate puzzle, but the ongoing climate change means that the entire frame is shifting. Extreme weather lies close to the edge of the frame, and when the frame is in motion, the probabilities of heavy precipitation and floods, as well as of drought and large wildfires, change too. It is against this background that this summer's extreme weather should, firstly, be understood. Secondly, the extreme weather is a present-day warning signal about a future in which today's extreme events may become the normal.

    Those who say that the extreme weather has nothing to do with climate change are therefore wrong.

    Certainly, this summer's extreme weather events could have occurred even without human influence on the climate, but the same can be said of the weather in a changed climate in the year 2050. By that logic, no weather event would ever depend on the climate. It is a misleading rhetoric that rests on a static and compartmentalized view of climate science.

    One explanation for the confused debate may be that extreme weather is a sensitive issue in an election year, especially when politics has been far too passive given the many studies pointing to great needs for climate adaptation. Those who have played down the climate issue and the work of preventing emissions and adapting society to the climate change already under way naturally prefer to blame the weather rather than admit that they have done too little.

Read the whole article here!

Wednesday, September 10, 2014

Superintelligence review

With the publication in Axess 6/2014 of my review (which I advertised in two blog posts in July) of Nick Bostrom's Superintelligence: Paths, Dangers, Strategies, I am now ready to present it to the readers of this blog.1,2 The book is, in my humble opinion, terribly important, and it deserves to be as widely discussed as possible. Hoping to stimulate such discussion, I decided to translate the review into English; my translation is given below. The issues I wanted to discuss vastly outnumber what I could reasonably fit into the review, and for this reason I will, during the next week or so, follow up this review with a handful of further blog posts devoted to what I will call "Superintelligence odds and ends".

*

In Axess 3/2014, readers were treated to a theme section on future robotization and its consequences for society. Machines replacing human labor is of course not a new phenomenon, and we have always found new tasks for human workers at about the same rate as machines have taken over the old ones (albeit with some variation in the booms and recessions of the economy). Today, however, things are happening faster than ever, and it is not clear what to expect when advances in artificial intelligence (AI) cause the automation not only of manual labor, but also of an increasing number of increasingly advanced intellectual tasks. On a basic level, our liberation from the hardship of labor is a good thing, letting us focus instead on art, culture, sports, love or whatever we wish to fill our lives with, but can the transition to such a utopia be accomplished without negative social consequences of monstrous proportions? These are some of the issues discussed by the eight writers of the Axess theme section, with Erik Brynjolfsson's and Andrew McAfee's important and influential recent book The Second Machine Age as the starting point.

There is, however, a longer and more radical perspective on AI, where far greater values than merely a turbulent labor market are at stake. Earlier this year, physicist Stephen Hawking and three coauthors wrote in The Independent that "whereas the short-term impact of AI depends on who controls it, the long-term impact depends on whether it can be controlled at all", and they warned that while "it's tempting to dismiss the notion of highly intelligent machines as mere science fiction, [...] this would be a mistake, and potentially our worst mistake in history".

Already Alan Turing, the father of modern computer science, anticipated, in his 1951 essay Intelligent Machinery, A Heretical Theory, how machines would eventually reach superhuman intelligence levels and then quickly take control of the world. Since then, the field has been haunted by a series of overly optimistic and in retrospect a bit embarrassing predictions about how soon an AI breakthrough could be expected. No AI with a general intelligence on the level of a human or higher is anywhere to be seen. On the other hand, AI research has made impressive advances in more specialized areas. Nowadays, computers beat even the strongest human opposition in chess as well as in the quiz game Jeopardy. Google, with its driverless cars' flawless performance both on highways and in city traffic, has achieved results that only ten years ago would have been considered utopian. The list of examples goes on and on, but what makes it easy to underestimate the achievements of AI research is the phenomenon that AI pioneer John McCarthy summarized by saying that "as soon as it works, no one calls it AI anymore".

What, then, can we expect from the promised breakthrough? There are theoretical arguments supporting the conclusion that an AI exceeding human general intelligence can eventually be built, and that from that point onwards, the development will escalate very quickly, towards levels of superintelligence leaving all human cognitive abilities far behind. What makes the latter development likely are variations of the following feedback mechanism: once an AI has a general intelligence exceeding ours, it is also better than us at building AI, so it will be in a position to build an even better AI, and so on in a spiral climbing towards higher and higher intelligence levels, in what is sometimes called the Singularity.
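As a toy illustration of this feedback mechanism (my own, not taken from the book or the review): write \(I_n\) for the intelligence of the \(n\)-th generation of AI, and suppose that a designer of intelligence \(I\) can build a successor of intelligence \(g(I)\), where \(g(I) > I\) as soon as \(I\) exceeds the human level. Then

    \[ I_{n+1} = g(I_n), \qquad \text{e.g. } g(I) = (1+\epsilon)I \;\Longrightarrow\; I_n = (1+\epsilon)^n I_0, \]

so once the human threshold is crossed, the sequence grows without bound (exponentially, in this crude constant-gain version). How fast such growth would be in reality is, of course, exactly what the debate over fast versus slow takeoff is about.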

Few, if any, have thought more deeply and systematically about these issues than the Swedish-born Oxford philosopher Nick Bostrom. In his groundbreaking new book Superintelligence: Paths, Dangers, Strategies, he emphasizes what a crucial turning point in human history an AI breakthrough may become. On one hand, the creation of a superintelligence has the potential to solve all our problems and to give us all that we wish for. On the other hand, a breakthrough can turn out to be extremely dangerous, and in the worst case lead to the extinction of humanity. A central message in the book is the importance of thinking things through very carefully and taking suitable action before the breakthrough hits us, because when it does it may well be too late.

Talk of the danger of AI may trigger us to think about drones and other military technology, but Bostrom emphasizes that even an AI with seemingly harmless tasks may bring about disaster. His example is a machine designed to produce paperclips. If such a machine becomes the seed of an intelligence explosion, then, unless we have planned it with extreme caution, it may well result in the entire solar system (including ourselves) being turned into a grotesque heap of paperclips.

No reliable timelines are available for when the big AI breakthrough can be expected - it is even hard to say which of the years 2030 or 2200 is the more realistic prediction. More generally, making concrete predictions about what a breakthrough will entail is a very shaky undertaking. Bostrom maintains an exemplary attitude of epistemic humility when he subjects even the most obvious-sounding arguments - his own and others' - to his utmost scrutiny, and there is hardly a prediction or a recommendation in the book that isn't followed by a qualification along the lines of "on the other hand, it might be that...". In the preface he stresses that many of his points are likely to be outright wrong (although he is unable to tell which ones). And then he adds the following.
    This is no false modesty: for while I believe that my book is likely to be seriously wrong and misleading, I think that the alternative views that have been presented in the literature are substantially worse - including the default view, or "null hypothesis", according to which we can for the time being safely or reasonably ignore the prospect of superintelligence.
The difficulty of this area dooms every detailed scenario to being wrong. Yet, there may be pedagogical reasons to outline such scenarios, as Bostrom does on a few occasions, for instance in order to indicate how a superintelligent AI might outwit a naive strategy along the lines of "there is no danger, as we can always pull the plug". Most of his reasoning, however, is on a relatively high level of abstraction. And yet, due to his eminently clear thinking, this does not make the book difficult to read.

A cornerstone of the book is the chapter in which Bostrom argues for two theses which he calls, respectively, the orthogonality thesis and the instrumental convergence thesis. To understand these, we need to distinguish between the AI's means and its ends. Its ends consist in its ultimate drive, while its means are whatever instrumental drives that serve as tools to support its ultimate drive. The orthogonality thesis states that superintelligence is compatible with pretty much any ultimate drive, be it paperclip production or the maximization of the total hedonistic level (pleasure minus suffering) of all sentient beings of the entire universe. The instrumental convergence thesis states that there are a number of instrumental drives that any sufficiently intelligent AI will tend to attain, almost regardless of its ultimate drive. An example is the desire not to be turned off, because an AI that is turned off will not be able to do anything to promote its ultimate aim. For similar reasons, an AI can be expected to wish to improve its intelligence, to copy its code to other machines, and to take control of as much hardware and other resources as possible. Plus, it will want to preserve its ultimate drive.

The problem we need to solve, according to Bostrom, is what he calls the control problem: how do we turn the intelligence explosion into a controlled detonation, with consequences that are beneficial to humanity? To do so, we have to instill the right drives in the AI (and do so before it reaches superintelligence, because by then it will no longer permit such tampering). The more one looks at this challenge, the less innocuous and the more difficult it seems, and even the slightest mistake may trigger complete disaster. An AI equipped with the ultimate drive of doing things that make us happy may decide to rewire our brains in such a way that we are constantly happy regardless of circumstances.
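
To see how easily such a mistake creeps in, consider another toy sketch of my own (again with purely hypothetical numbers): an optimizer handed "measured human happiness" as its ultimate drive dutifully maximizes that metric, and the metric-maximizing action turns out to be precisely the brain-rewiring one.

    # Toy illustration (my own, not Bostrom's) of a literal-minded
    # optimizer satisfying the stated drive "make humans happy"
    # in a way its designers never intended.
    actions = {
        # action: (measured_happiness, what_we_actually_wanted)
        "cure_diseases": (0.8, 0.9),
        "solve_poverty": (0.7, 0.8),
        "rewire_brains": (1.0, 0.0),  # maximal on the metric, disastrous in fact
    }

    # The AI only ever sees the metric it was given.
    chosen = max(actions, key=lambda a: actions[a][0])
    print("AI chooses:", chosen)                         # -> rewire_brains
    print("Human-intended value:", actions[chosen][1])   # -> 0.0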

The difficulties lined up in chapter after chapter may eventually be too much for some readers, causing them to throw up their hands and declare the situation hopeless. But Bostrom refuses to give in, convinced as he is that he is working on one of the most important problems ever encountered, and that it needs to be solved. He cannot do it on his own, however. Unfortunately, among the many thousands of researchers working on AI today, only a tiny fraction show a serious interest in the control problem. Bostrom is eager to change this, and speaks of "a research challenge worthy of some of the next generation's best mathematical talent".

There are a number of recent books besides Bostrom's that treat the intelligence explosion and related radical AI scenarios. These include Singularity Rising by economist James Miller and Our Final Invention by documentary filmmaker James Barrat. The latter in particular is written in a more popular and less technical language than Superintelligence. Bostrom compensates with his extraordinary sagacity and clarity, which enable him to combine knowledge spanning an impressively broad spectrum of disciplines - engineering, natural sciences, medicine, social sciences and philosophy - into a comprehensible whole. For anyone content with reading just one of these books, I do not hesitate to recommend Superintelligence. If this book gets the reception it deserves, it may turn out to be the most important alarm bell since Rachel Carson's Silent Spring from 1962, or ever.

Footnotes

1) The editors of Axess gave my review the headline En ren kontrollfråga, which translates into English as Purely a matter of control. The word ren (pure) is somewhat misplaced here, possibly in an attempt to create a (rather pointless) pun that I do not care to explain.

2) Among other reviews of and comments on Bostrom's book, two of the most interesting I've come across so far are Max Tegmark's recent manuscript Friendly Artificial Intelligence: the Physics Challenge and Robin Hanson's blog post I Still Don't Get Foom. Tegmark gives reasons why Bostrom's control problem may be even more difficult than the book suggests, while Hanson expresses skepticism about the very rapid ascent from human-level artificial intelligence that Bostrom calls "fast takeoff" and that many others call "the Singularity". Hanson's argument, which is clearly worthy of serious attention, echoes his reasoning in The Hanson-Yudkowsky AI Foom Debate (which I reviewed here).

Thursday 4 September 2014

Guest post by Stefan Svallfors: The University - a lost love

As a follow-up to yesterday's positive review of Stefan Svallfors' book Kunskapens människa, I today have the pleasure of offering a brand-new text by the same author - a despairing love letter to a university. Many of us harbor similar feelings for our respective institutions these days, but I doubt that anyone has succeeded better than Stefan at putting them into words. /OH

* * *

The University - a lost love

Stefan Svallfors

(This is an entirely fictitious love letter from an entirely fictitious researcher to an entirely fictitious university. Any resemblance to living researchers and universities is unintended - indeed, probably unavoidable.)

Dearest, dearest University!

We need to talk. Things haven't felt right lately. It feels as if you are suffocating me, and as if we are constantly irritated with each other. Or perhaps don't trust each other.

Come to think of it, it has been quite a while since things were really good. And yet we had something special, you and I. Where did it go? When exactly did things start going downhill? How did it happen?

Oh, I remember that first period, the one of heady infatuation. The feeling of deepest meaning, of coherence. The feeling of having come home, of having found what I had been looking for for so long. That I was given the tools to understand how everything fits together. That I was among equals. For the first time in my life enveloped, included.

The sharp crystal feeling of the late autumn air when I stepped out of the library and darkness had already fallen, the sense of clarity and space. Of meaning and belonging. How I loved you then.

But the mature love was perhaps, in a way, even better. When we had come to know each other, when I thought I could see exactly who you were, with all your faults and flaws, and only loved you more. You were a little cumbersome at times, and had a certain need for control - as if you never quite trusted my love. But you still let me grow, and I gave you everything I had. We had a good life together. I learned to say "we" naturally and mean you and me, and that sort of thing has never come easily to me.

When someone asked whether they should perhaps try to make things work with a University, I always answered: of course. If this is what you want and what you are able to do, there is no better place in this world. Of course you should try.

These days I am not so sure what to answer. Perhaps, the answer comes, hesitantly. If you think the two of you can arrange things in such and such a way. If you think you can put up with this, and this, and this. Otherwise not. Then look for someone else.

Of course I should have seen the signs earlier. That gnawing suspicion that I wasn't quite doing everything I should. Even though I gave you everything, far more than anyone could have expected. I sometimes looked with amused indulgence at others, those who seemed to have a much cooler relationship with their Universities, as if they were just any employees, at just any place. But to me you were the University, and I wanted only you. Yet it was as if you never quite believed me, as if you constantly had to check how much I really cared. And nothing I did seemed to help. All the research millions I hauled home, all the publications in your name and mine. Yes, you smiled and accepted them, but it was as if they were never quite good enough.

I have to speak plainly. It's not me, it's you.

I think much of it stemmed from the fact that you are a Government Agency. And a Government Agency is expected to be a certain way. Which never quite squared with who you were deep down. A Government Agency is supposed to puff out its chest, order people about, await and issue commands, force its will through. And every time you tried to be a Government Agency, I withdrew, didn't do as you wished, dug in my heels. Became sarcastic. And you took offence. But it was only because I wanted you to be all the fine things you could be that I was so difficult. I meant no harm.

It always got worse after you had met the other Government Agencies. The ones who thought they were real Government Agencies, while you never quite measured up in their eyes. Cocky troublemakers like Regeringskansliet and Riksrevisionsverket, who mocked you for not quite knowing what your employees were up to, for how often we ignored all your attempts to govern us. "But I stand for other values," you tried, but it sounded half-hearted, and I could see their scornful laughter wearing you down. Every time you came home from the other Government Agencies we ended up quarrelling. You wanted to order me about more, as if to show that you too were a real Government Agency. And I became evasive, mean, started taking detours to avoid meeting you. We fought, slammed doors, sulked, found each other again. But each time, it was as if something died in me.

And it got even worse when you started listening to those self-appointed relationship experts, the ones who called themselves "New Public Management." The name alone should have been a warning sign. But they promised that we would be so much better off together if we just followed their advice. I would know a little better what I was actually supposed to do, and you would keep a little better track of what I was actually up to. Then we would evaluate the whole thing, and everything would be so good. But it only got worse. I felt more and more suffocated and irritated, and you grew more and more suspicious with every report that was filed on what I was doing. In the end it all became self-fulfilling: more and more often I went behind your back, made up things I hadn't done at all, kept out of the way, minded my own affairs. After a while I hardly even had a guilty conscience anymore, just a little.

And I began to think - and I cry a little as I write this - that you were ugly and repulsive. That everything I had been so attracted to in the beginning had disappeared. Even though I knew it was still there, deep inside, merely hidden by all the "vision documents" and "policies" and "development plans". But the awful truth was that you had begun to disgust me, you and all those who just ruined things for us. And since I had never really stopped loving you - however strange that may sound - I was almost torn apart. We could be so much to each other, and now we are almost nothing. Perhaps I should leave you. But it hurts so much to think that way. Perhaps I am just afraid - after all, what I do and know how to do with you is the only thing I know how to do. How would I manage without you? And I know I have so much to give you, if only you could receive it.

But we can't go on like this. We must find some other way to live together. Where we can look at each other with love and respect again. Where we are no longer so suspicious. Where you can let go of some of your need for control, and I can refrain from my constant unkindnesses. I would like to try. Would you?

I'll be in the library café this afternoon, between three and four. Drop by if you want to talk. But come only if you truly believe we can start over. I cannot take any more disappointments.

Wednesday 3 September 2014

The researcher is a human being, Stefan Svallfors maintains

I have known Stefan Svallfors, professor of sociology, via social media for a year or so, and from the start I have felt a distinct sense of shared values and sympathy. This was reinforced at my first IRL meeting with Stefan a few weeks ago, and even more so now that I have read his little book Kunskapens människa - om kroppen, kollektivet och kunskapspolitiken from 2012, in which he stresses that a researcher is also a human being, and as such a bearer of bodily and social characteristics and needs - something, however, that is totally ignored by research policy and research bureaucracy, with far-reaching negative consequences. He draws both on his own experiences and on in-depth interviews that he has conducted with a small number of fellow researchers in various fields. I recognize myself and nod in agreement almost constantly at his personal and candid descriptions of the joys and frustrations of research life: from the euphoric feeling when - alone or in a group - one has made what appears to be an intellectual breakthrough and everything falls into place, via the demons of failing self-confidence that strike us and against which no research successes in the world seem to serve as a vaccine, to all the misery that the research-bureaucratic complex pours over us, such as the following:
    Umeå, November 2007

    "How do you actually define welfare?" The head of administration's question is directed at me. I feel such an urge to tell him to shut up. Instead I answer as best I can, even though I know he doesn't really understand what I'm talking about.

    I feel lonely and ill at ease, even though the room is nearly full. In front of me, a row of blank faces. The vice-chancellor's assessment panel for the application for a centre for welfare research that is about to be submitted. None of them knows anything about my field of research, but some have opinions anyway. They have been sitting here all day and have watched one after another of the university's research stars perform their contortions on the podium. Perhaps they have as little desire to be here as I do. The urge simply to get up and leave is almost irresistible. I stay put, out of some mixture of a sense of duty and a fear of making myself impossible.

    Now the strategic planner is drawing on his pad. Here is our application, he draws, but where is the "added value" and the "vision"? We ought to strengthen that. And what does the path to the long-term goals actually look like? Have we done a proper SWOT analysis? I realize he is only doing his job, but am filled with aggression all the same. I don't like myself as I try to explain what we have in mind. Shouldn't be here, long to be elsewhere.

    The preceding weeks have been spent squeezing out a large application for a ten-year research grant, to be submitted to Vetenskapsrådet on behalf of Umeå University. On attempts to stitch things together with colleagues who think I should work on their research agenda instead of my own. The thought of working with them for a decade fills me with disgust. But I play the game, realizing that no one else at my department can pull this off. Duty defeats desire.

    A few months later, the decision arrives. No money this time. Positive evaluations, but our application falls just below the cut-off line. Ten minutes of disappointment, then an endless relief. I am spared. And perhaps it is over now? (pp. 93-94)

That is exactly how it is. Except, of course, that it is not over. These miserable affairs will keep haunting both Stefan and me until we are dead, or at least retired.

The mildly restrained anger of the quote above escalates to new levels in the following passage:
    As the universities have increasingly been made to compete with one another, and resources have been allocated to large-scale "centres of excellence", the class of research administrators has grown in number and in importance. Add to this the EU research funding, whose extremely complicated application procedures and follow-up routines have made it necessary for the universities to create yet more new research-administrative functions.

    These research-administrative positions are populated by people who have now found new ways of securing their livelihood without having to expose themselves to the constant testing and the incessant demands for originality that are the researcher's lot. Some of these parasites of the research system have - perhaps not surprisingly - got it into their heads that they are the ones who decide how things are to be done and who calls the shots. All the while the host animal - the researchers - is forced into ever stranger contortions.

    Musicians sometimes claim that all music critics are failed rock musicians, and that this is why they write so maliciously and condescendingly. A gloomy and unkind thought that has occasionally come to me is that the same is true of some research administrators: failed researchers, who go about their daily work with a rather dark view of how research would function if left in peace. I hope I am wrong. (pp. 96-97)

About three quarters of the way into the book, I am struck by a doubt. Stefan's description of the situation is certainly apt, but does he really have any constructive counter-proposal for how Swedish research should be organized? And can it be that he has not realized that in a system where the demand for research funding exceeds the supply (a state of affairs that is hardly going to change, nor even should), priorities must be set, and that these will of necessity feel painful? Further reading reassures me: Stefan has understood this too, and he has some constructive proposals for better systems of allocating research funding. It is not these proposals, however, that are the book's chief merit, but rather his insightful thoughts on a number of psychological and social aspects of what it means to be a researcher, and on the deep flaws of the current system.

Kunskapens människa has, with its 146 airily typeset pages, an appetizing format, and I was tempted to read it cover to cover in just a few hours. Its language carries, it is full of wisdom, and on one or two occasions it manages to put into words something important of which I had previously had only a vague inkling without being able to formulate it. Read it!