
Scenarios for Optimistic Sci-Fi Stories

Irina Raicu

Irina Raicu is the director of the Internet Ethics program (@IEthics) at the Markkula Center for Applied Ethics. Views are her own.

It’s 2025, and almost no journalists fall for or promulgate AI hype; almost all present the opportunities and limitations of AI tools clearly, in terms that non-experts can understand.

It’s still 2025, and people who grew up with social media are now parenting their own kids, and doing a much better job at guiding their children’s use of such platforms—since they themselves experienced some of the drawbacks and harms related to social media misuse.

It’s 2030, and the term “artificial intelligence” has been mostly replaced by “multilayered statistics.”

It’s still 2030, and laws prohibiting the use of predictive algorithms in hiring and in the criminal justice system have been widely adopted; lawmakers (assisted by sociologists, anthropologists, lawyers, ethicists, and other people with relevant subject-matter expertise, including lived experience) are considering expanding the prohibition to other societal areas as well.

It’s still 2030, and the teaching of cursive writing has been reintroduced in all schools (since research has shown its benefits for human brains, and people are increasingly concerned about human de-skilling in response to the proliferation of what used to be called “AI”).

It’s 2035, and people are dismayed when they learn about the lack of labor protections that data labelers and other data cleaners experienced at the beginning of the “AI revolution.” They view the industry practices of earlier days the way their elders viewed even earlier child labor practices.

It’s still 2035, and people who want to use publicly available AI tools to generate funny avatars or images of Muppets in the style of Dali or sonnets about sea cucumbers have to click on a captcha that reads “I am not a robot and I understand the environmental cost of my request,” and pay an environmental protection fee.

It’s still 2035, and most people are consciously trying to reform their data-hoarding habits; at the same time, consumer demand has led to much more effective recycling of electronics and a great reduction in e-waste.

It’s 2040, and most democratic societies have reached a consensus that mass surveillance is more destructive than protective of human rights, including security.

It’s 2050, and the debate about content moderation online continues—except with greater understanding of the complexities involved, and a less contentious tone.

Photo: "Science Fiction Museum Entrance" by Smart Destinations (Shannon Bullard) is licensed under CC BY-SA 2.0.

Jan 13, 2023
