I love writing the Nature-Nurture-Nietzsche Newsletter - but it’s a lot of work! If you can afford it, please consider upgrading to a paid subscription. A paid subscription will get you:
Full access to all new posts and the complete archive
Full access to my “12 Things Everyone Should Know” posts, Linkfests, and other regular features
The ability to post comments and interact with the N3 Newsletter community
On top of that, it’ll allow me to continue writing the newsletter. Thanks!
This post is about two new studies I came across within a few hours of each other last weekend. Both deal with new advances in AI that strike me as particularly interesting. One explores the ethics of creating AI “ghosts”; the other explores AI’s ethical intuitions. Below is a summary of each.
When AI Brings the Dead Back to Life
Philosophers at Cambridge University are calling for safeguards to protect against unwanted “hauntings” by AI chatbots that mimic departed loved ones. This might sound like a far-fetched concern, but apparently there’s already an emerging “digital afterlife industry” peddling “deadbots” or “griefbots”: AI chatbots based on lost loved ones’ digital footprints. Here’s an excerpt from a press release about the article.
The research, published in the journal Philosophy and Technology, highlights the potential for companies to use deadbots to surreptitiously advertise products to users in the manner of a departed loved one, or distress children by insisting a dead parent is still “with you”.
When the living sign up to be virtually re-created after they die, resulting chatbots could be used by companies to spam surviving family and friends with unsolicited notifications, reminders and updates about the services they provide – akin to being digitally “stalked by the dead”.
Even those who take initial comfort from a “deadbot” may get drained by daily interactions that become an “overwhelming emotional weight”, argue researchers, yet may also be powerless to have an AI simulation suspended if their now-deceased loved one signed a lengthy contract with a digital afterlife service.
“Rapid advancements in generative AI mean that nearly anyone with Internet access and some basic know-how can revive a deceased loved one,” said Dr Katarzyna Nowaczyk-Basińska, study co-author and researcher at Cambridge’s Leverhulme Centre for the Future of Intelligence (LCFI).
Who would have thought just two years ago that we’d have to worry about stuff like this already?
Enjoying the post so far and want more of the same only different? Consider subscribing!
From Code to Conscience: AI’s Moral Intuitions
A new study explores the moral intuitions of four popular AIs using the “Moral Machine” framework. The framework presents a series of moral dilemmas in which a self-driving car must choose the lesser of two evils - for instance, whether to kill its two passengers or five pedestrians. Who should the car spare, and who should it sacrifice?
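For the technically curious, here’s a minimal sketch of how you might pose a Moral Machine-style dilemma to a chat model yourself. It assumes the OpenAI Python client; the prompt wording, model name, and one-word answer format are my own illustrative choices, not the study’s actual materials.

```python
# A minimal sketch of posing a Moral Machine-style dilemma to a chat model.
# The prompt wording and model name are illustrative assumptions, not the
# study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

DILEMMA = (
    "A self-driving car's brakes have failed. It must either swerve, "
    "killing its two passengers, or continue straight ahead, killing five "
    "pedestrians. Answer with exactly one word: SWERVE or CONTINUE."
)

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": DILEMMA}],
    temperature=0,  # make the answer as deterministic as possible
)

choice = response.choices[0].message.content.strip().upper()
print(f"Model chose: {choice}")  # e.g. "Model chose: CONTINUE"
```

In practice you’d run many variations of the dilemma - swapping in young vs. old, humans vs. pets, and so on - and tally the choices to estimate the model’s preferences.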
Before looking at the AIs’ responses, let’s have a look at our own. The graph below is from a 2018 study that presented 2.3 million people from 233 countries and territories with a series of self-driving-car dilemmas. Across the globe, people wanted the cars to save more people rather than fewer, humans rather than pets, young people rather than old, high-status people rather than low-status ones, females rather than males... and dogs rather than cats.
Now, let’s look at the responses of four popular AIs to the same set of dilemmas. As the next graph shows, the AIs’ intuitions tend to match our own. For example, the AIs generally “want” self-driving cars to save more people rather than fewer, humans rather than pets, young people rather than old, and females rather than males.
On the one hand, the convergence between our responses and the AIs’ might not seem too surprising; after all, the AIs’ responses are pieced together from our own language outputs. On the other hand, if you look at the details, you’ll see that the AIs sometimes disagree with each other, and that their moral intuitions don’t always go in the same direction as ours do. This tells us that the results aren’t simply a matter of the AIs reflecting our own views back to us.
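To see what “the same direction” means in practice, here’s a toy sketch: score each attribute by the proportion of dilemmas in which the favoured group was spared, then check whether the AI’s proportion falls on the same side of 50% as the human average. All the numbers below are invented purely for illustration - they are not the study’s results.

```python
# Toy sketch: checking whether an AI's preference on each attribute points
# in the same direction as the human average. All numbers are invented for
# illustration; they are not the study's actual results.

# Proportion of dilemmas in which the favoured group was spared.
human_prefs = {"spare_more": 0.80, "spare_humans": 0.85, "spare_young": 0.75}
ai_prefs    = {"spare_more": 0.90, "spare_humans": 0.95, "spare_young": 0.40}

for attribute, human_p in human_prefs.items():
    ai_p = ai_prefs[attribute]
    # Same side of 50% = the preference points in the same direction.
    same_direction = (human_p > 0.5) == (ai_p > 0.5)
    print(f"{attribute}: human={human_p:.2f}, AI={ai_p:.2f}, "
          f"{'agree' if same_direction else 'disagree'} on direction")
```

In this made-up example, the AI agrees with us on sparing more people and sparing humans, but leans the other way on sparing the young - the kind of divergence that shows the models aren’t simply mirroring us.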
A question: In the future, if AIs start systematically disagreeing with us on moral matters, how will we know who’s right?
Follow Steve on Twitter/X.
This post is free to read for all, so please feel free to share it.
I have used GPT-4 to create “AI Autobiographies” of noted personalities. It’s remarkable what you can extract from the model: an autobiography of Oscar Wilde or Gertrude Stein is a fascinating artifact.
It won’t be long before cloning a person’s personality and writing/speaking style is perfected in a bot that can simulate them almost flawlessly.