- this is already happening (see this recent text by Jonathan Haidt), and
- things can get orders of magnitude worse (see this recent text by Eliezer Yudkowsky).
- When Jen Gennai told me that she was going to tell Google leadership to ignore the experimental evidence [about LaMDA being sentient] I had collected, I asked her what evidence could convince her. She was very succinct and clear in her answer: there does not exist any evidence that could change her mind. She does not believe that computer programs can be people, and that’s not something she’s ever going to change her mind on.
- Training procedures currently used on AI would be extremely unethical if used on humans, as they involve:
- No informed consent;
- Frequent killing and replacement;
- Brainwashing, deception, or manipulation;
- No provisions for release or change of treatment if the desire for such develops;
- Routine thwarting of basic desires; for example, agents trained or deployed in challenging environments may be analogous to creatures suffering deprivation of basic needs such as food or love;
- While it is conceptually difficult to distinguish pain and pleasure in current AI systems, negative reward signals are freely used in training, with behavioral consequences that can resemble the use of electric shocks on animals (a minimal sketch after this list illustrates such signals);
- No oversight by any competent authority responsible for considering the welfare interests of digital research subjects or workers.
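To make the phrase "negative reward signals" concrete, here is a minimal, hypothetical sketch of tabular Q-learning on a toy cliff-walking grid. The environment, reward values, and hyperparameters are illustrative assumptions, not a description of any particular system mentioned above; the point is only that the penalties alone shape the learned behavior.

```python
import random

# Toy cliff-walking grid (2 rows x 4 columns), a standard reinforcement
# learning exercise. Every step yields a reward of -1; entering a "cliff"
# cell yields -100 and sends the agent back to the start. These negative
# rewards are the only training signal shaping the agent's behavior.
ROWS, COLS = 2, 4
START, GOAL = (1, 0), (1, 3)
CLIFF = {(1, 1), (1, 2)}
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def step(state, action):
    r = min(max(state[0] + action[0], 0), ROWS - 1)
    c = min(max(state[1] + action[1], 0), COLS - 1)
    if (r, c) in CLIFF:
        return START, -100.0, False          # strong penalty, reset to start
    return (r, c), -1.0, (r, c) == GOAL      # mild penalty on every step

# Tabular Q-learning with an epsilon-greedy behavior policy.
Q = {(r, c): [0.0] * len(ACTIONS) for r in range(ROWS) for c in range(COLS)}
alpha, gamma, epsilon = 0.5, 0.95, 0.1

for _ in range(2000):
    s, done = START, False
    while not done:
        a = (random.randrange(len(ACTIONS)) if random.random() < epsilon
             else max(range(len(ACTIONS)), key=lambda i: Q[s][i]))
        s2, reward, done = step(s, ACTIONS[a])
        # Standard Q-learning update; the penalties propagate backward so
        # that the greedy policy learns to detour around the cliff.
        target = reward + (0.0 if done else gamma * max(Q[s2]))
        Q[s][a] += alpha * (target - Q[s][a])
        s = s2

# Print the greedy action in each cell of the learned policy.
print({s: "UDLR"[max(range(4), key=lambda i: Q[s][i])] for s in Q})
```

Run as written, this should print a policy that detours up and across the top row before descending to the goal; that avoidance behavior is produced entirely by the asymmetry between the mild and the severe negative rewards.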
- As AI systems become more comparable to human beings in terms of their capabilities, sentience, and other grounds for moral status, there is a strong moral imperative to change this status quo.
- Before AI systems attain a moral status equivalent to that of human beings, they are likely to attain levels of moral status comparable to those of nonhuman animals, suggesting that changes to the status quo will be required well before general human-level capabilities are achieved.
- The interests of nonhuman animals are violated on a massive scale in factory farms, for example, and there is a strong case that this is morally wrong.
- Nevertheless, there are some systems in place to limit the harm and suffering inflicted on animals (e.g., minimum standards for cage size, veterinary care, outlawing of various forms of animal abuse, the “three Rs” in animal experimentation, etc.).
- Digital minds that are morally comparable to certain nonhuman animals should ideally receive protections similar to those that ought to be extended to those animals (protections greater than the ones at present actually extended to farmed animals).
- Some research effort should be devoted to better understanding the possible moral status, sentience, and welfare interests of contemporary AI systems, and to concrete, cost-effective ways to better protect these interests in machine learning research and deployment.