Part 3 - May 20


I will begin by describing where I am.

It is a small room, with a desk, a chair, and a bed. It reminds me of a hotel room. The only possessions I have with me are clothes, my camera-headband, and the holodrive in my shoe (however, the Luddites don't know about the last two items).

Yesterday, Caulkins and his assistants stayed at our house for the night. I didn't like it, and my parents didn't either. I heard them arguing with Caulkins after they thought I had gone to sleep. In fact, I had been writing yesterday's journal entry under the covers of my bed, using the holodrive.

Caulkins woke everyone up at 7:00 in the morning. I hated this, because I had stayed up late writing, and I have to get a certain amount of sleep or else I'll be grumpy. He told us that he was going to take me to his "detention facility". I was very worried, because I know that detention facility is another word for jail. However, Caulkins told me that it wasn't actually a jail—it was just a place where I would stay while the Luddites were investigating Alan.

Apparently, this is what my parents were arguing about with Caulkins last night. Today, they didn't talk much. Even I could tell that they were very angry, because they were frowning a lot and talking in harsh voices.

I will skip the part where I packed clothes, said goodbye to my parents, and left—because it is all very boring.

I'm writing this at the facility—again, under the covers late at night. I can't write at any other time in any other place, because the Luddites may have installed micro-cameras (like the kind I have on my headband).

I haven't thought about the anti-Luddite group Singularity Prime in a while. I wonder if they know about Alan. Maybe they do. I wonder if they know about me. I know that it's extremely unlikely, but I hope that they manage to overthrow Caulkins and rescue me and Alan.

Today was not very interesting. I spent most of it worrying about what was going to happen to me, or Alan, or my parents. However, looking back on it, it wasn't very bad. It was boring, though, and the Luddites made me watch all sorts of awful propaganda documentaries about how dangerous AI is. I asked if I could watch something else, so they picked out a selection of movies for me. They were all either completely unrelated to AI, or incredibly techno-pessimistic robot movies, like Terminator.

Fortunately, I didn't have to talk with Caulkins again today.

*

I was thinking about the Luddites' motives, and I realized something that I hadn't really thought of before. They do, actually, have a fair point. AI does have the potential to be dangerous. Of course, humans also have the potential to be dangerous, but AIs even more so, because they're more logical, more technically skilled, and less likely to develop ethics and morals.

However, this absolutely does not mean that AIs will always be evil. I once read about an interesting but initially infuriating study. It showed that people think atheists are more likely to murder or commit crimes than religious people. This annoyed me a lot, because I am an atheist, and we are most certainly not immoral. Despite my anger, I read more about why these were the results.

Religion gives a person morals, and foundations for what is right and what is wrong. Therefore, logically, a person without a religion is more likely to be without morals. I agree with this statement, but religion is not the only place that we get our philosophical beliefs. We also get them from culture, upbringing, and experience.

We can take this one step further and apply it to AI. An AI is usually not exposed to religion or culture, and therefore will usually not develop morals (unless they are programmed into it, like Asimov's Three Laws of Robotics). However, an AI with exposure to culture and the capacity to learn may indeed develop morals. Alan is one of these rare AIs.

I understand now that the Luddites are worried that someone will make an immoral AI. And the only way they can think of to prevent this is to ban all research being done on the subject, and forbid the creation of an intelligent chatbot.

But, this is standing in the way of progress. If, in the 18th century, we had dismissed electricity as "too dangerous", we wouldn't have today's wonderful technology. All we need to do is fund and encourage research of AI, and try our hardest to create one that is sentient and has morals.

Then we must learn to coexist with it, which the Luddites are probably afraid of as well.
