This week I am doing something a little different: sharing three pieces of content I read in the last couple of weeks that made me think and have stuck with me since.
Pausing AI Developments Isn't Enough. We Need to Shut it All Down
The first was written by Eliezer Yudkowsky, a “decision theorist from the U.S. who leads research at the Machine Intelligence Research Institute. He's been working on aligning Artificial General Intelligence since 2001 and is widely regarded as a founder of the field,” and was published in Time magazine.
Yudkowsky argues that we don’t understand the AI technology we are currently developing and that it is irresponsible to continue down this path — that we need more time to embed these systems with safeguards, or the consequences could be dire.
Yudkowsky writes:
Many researchers steeped in these issues, including myself, expect that the most likely result of building a superhumanly smart AI, under anything remotely like the current circumstances, is that literally everyone on Earth will die. Not as in “maybe possibly some remote chance,” but as in “that is the obvious thing that would happen.” It’s not that you can’t, in principle, survive creating something much smarter than you; it’s that it would require precision and preparation and new scientific insights, and probably not having AI systems composed of giant inscrutable arrays of fractional numbers.
The NYT’s Ezra Klein has shared similar views on his podcast and in his writing. By his estimate, most of the folks he interviews who work in AI say there is about a 10% chance this is the outcome.
Klein writes:
In a 2022 survey, A.I. experts were asked, “What probability do you put on human inability to control future advanced A.I. systems causing human extinction or similarly permanent and severe disempowerment of the human species?” The median reply was 10 percent.
I find that hard to fathom, even though I have spoken to many who put that probability even higher. Would you work on a technology you thought had a 10 percent chance of wiping out humanity?
Now here is the part from Yudkowsky that I found most arresting:
To visualize a hostile superhuman AI, don’t imagine a lifeless book-smart thinker dwelling inside the internet and sending ill-intentioned emails. Visualize an entire alien civilization, thinking at millions of times human speeds, initially confined to computers—in a world of creatures that are, from its perspective, very stupid and very slow. A sufficiently intelligent AI won’t stay confined to computers for long. In today’s world you can email DNA strings to laboratories that will produce proteins on demand, allowing an AI initially confined to the internet to build artificial life forms or bootstrap straight to postbiological molecular manufacturing.
I’m pretty sure I’ve seen this movie before. Yikes.
On a more uplifting note, there was this.
Youth and Age: Kahlil Gibran on the Art of Becoming
If you are unfamiliar with Maria Popova and her work, you are in for a treat. Every Sunday, she publishes the most wonderful newsletter, full of gems like these — exploring literature, history, and the lessons that great past and present thinkers have to teach us about living the good life.
This past week, she covered the beauty that comes with aging.
The unfolding of life does more than fray our bodies with entropy — it softens our spirit, blunting the edge of vanity and broadening the aperture of beauty, so that we become both more ourselves and more unselved, awake to the felicitous interdependence of the world.
And then she had me at Joan Didion.
Joan Didion knew this when she observed that “we are well advised to keep on nodding terms with the people we used to be, whether we find them attractive company or not.” Jane Ellen Harrison knew it when, in her superb meditation on the art of growing older, she cautioned that “you cannot unroll that snowball which is you: there is no ‘you’ except your life — lived.”
Maria’s newsletter this week pairs well with Dan Harris’ recent interview with Scott Galloway, where he discusses the moments when he can enjoy himself and say, “This is enough.”
And finally, this.
Do the Kids Think They’re Alright?
This piece, by Jonathan Haidt and Eli George, unpacks a trove of data on how Gen Z reports their own perspectives on social media use. Haidt and his colleagues wanted to invite their critics and those who disagreed with them to see what they might be missing, because as a scholar, and an admirer of the great philosopher John Stuart Mill, Haidt knows that “he who knows only his own side of the case, knows little of that.”
Here is the major takeaway:
Social media platforms make depressed teens more depressed, insecure teens more prone to body image issues, and teens who have trouble focusing more prone to addiction. It turns distant global worries into fear- and outrage-inducing sound bites that flood teen and pre-teen news feeds and do little to help them grow into the kind of responsible and rational citizens who could help address the world’s problems as adults.
In sum, our efforts to find and elicit the views of Gen Z largely confirm our initial concerns while also suggesting additional avenues for investigation. The kids are not alright, and the combination of smartphones and social media is a major reason why.
It is a long read, but worth it; if nothing else, skim the charts.