I’ll be honest, when I first saw the reading for this week of class, Towards Intellectual Freedom in an AI Ethics Global Community (Ebell et al., 2021), I expected it to be one of those dense, academic articles I’d have to reread three times just to make sense of. But once I got into it, I realized it was actually raising some questions I’ve been thinking about more and more lately. Not just as a student, but as a mom and basically a human being who lives in a world where my phone knows what I’m thinking before I do.

The article breaks down this growing concern that we’re speeding full-throttle into an AI-powered future without a shared moral compass to guide us. Basically, everyone’s building robots, but no one can agree on what values those robots should follow. It’s kind of like letting a bunch of people from different countries write the rules for raising your kids… and then handing those rules to a machine that never sleeps, never forgets, and definitely doesn’t care how tired you are.
Reading it gave me I, Robot flashbacks, the Will Smith classic that every tech-loving millennial secretly loves… or maybe just me. (I will pause here for my husband to finish laughing at me for finally getting a Will Smith reference into one of these posts.) Remember when the robot followed the rules but still made a choice no human parent would have made? That’s the tension the article is talking about. Even if the AI follows the code, if the code isn’t written with diverse human values in mind, we’re still in trouble.

The authors argue that we need global conversations, ones that include more than just Silicon Valley boardrooms, to figure out what ethical AI really looks like. It’s about intellectual freedom: listening to different voices, and not rushing toward automation just because we can. That hit home for me. Because whether it’s a school fundraiser or a robot uprising, if only a few people make the rules, it never ends well.
So here’s my takeaway, from one human being to another: If we want a future where AI reflects real people, we’ve got to stay involved. Even if we’re just reading academic articles in our minivans between Target pickup and baseball practice. I mean, Will Smith tried to warn us.
Reference:
Ebell, C., Baeza-Yates, R., Benjamins, R., et al. (2021). Towards intellectual freedom in an AI ethics global community. AI and Ethics, 1, 131–138. https://doi.org/10.1007/s43681-021-00052-5