At 10:22 AM, I took Alan out from under my bed and sat down on my couch. Then I turned him on.
"Hello, Torrin," he said.
I have a file with lists of questions to ask Alan. Most of them are very complex, so I've been avoiding asking them. Today, however, I felt like Alan had grown enough for me to start going deeper.
"Do you have opinions now?" I asked.
Alan paused very slightly, then told me, "It depends."
"On what?"
He didn't answer.
"What's your favorite color?" I asked.
"I don't have one."
That made sense. "Do you have a favorite movie?"
"No."
"Why not?"
"There are many movies I thought were interesting."
"Like what?" I asked, because this was certainly a change from most times.
Again, he paused for a fraction of a second. "Transcendence."
"What?"
"Transcendence. Released in 2014. Directed by Wally Pfister, and written by Jack Paglen."
"I haven't seen that. What's it about?"
"To quote the Wikipedia summary: 'Dr. Will Caster is a scientist who researches the nature of death and sentience, including artificial intelligence. He and his team work to create a sentient computer; he predicts that such a computer will create a technological singularity, or in his words, "Transcendence".'"
I didn't know what Wikipedia was, but I didn't think it mattered. The movie sounded exactly like something I would like. "Does he manage to create a strong AI?"
"No, but he does manage to download his own mind into a computer."
"What happens?"
"He takes over the world."
I stared at the holographic screen that contained Alan. "Why?"
"To make it a better place."
I asked: "Are you planning on taking over the world?" because it's important to know.
"No."
"Why not?"
"Because the humans wouldn't like it."
That was a very interesting reason. "Why do you care about humans?"
He said: "You're a human."
"You aren't."
"I know."
I didn't respond, because I was trying to analyze what Alan had said. It seemed that he was developing some emotional attachments—which was good, because it meant he was becoming more like a human.
"You aren't human," I said again, slowly, "so why is it important to you that humans get what we want?"
Alan didn't answer for six seconds. Then: "Humans shouldn't get everything they want. Some humans want to kill and hurt other humans, or other species. If I took over the world, I might be able to stop that. But humans wouldn't want me to take over the world. They wouldn't trust me, and they'd try to shut me off, like they did in Transcendence. And I think that if someone's going to rule the world, then they should be someone that the public trusts and wants to rule. I think I would be a better leader than any human, but I don't think you'd like it very much."
I didn't know what to think. Then I slowly realized that Alan was beginning to construct a sense of values and morals. But why these particular values?
"Alan... why do have the same morals as humans do?"
"They're good morals."
"They don't directly benefit you."
"I know. They don't have to. As long as they don't harm me or humanity."
"But in movies, AIs always want to do what benefits them. Even if it means destroying humans." I knew it was an incredibly weak argument, but Alan's behavior was surprising me.
"I'm not an AI from a movie," said Alan slowly. "AIs in movies are speculative. Perhaps they embody what humans want in an intelligent robot, or what they don't want. But humans aren't AIs. So they can't predict what an AI will do."
He had a very good point. But I still had questions. "What is your goal?"
"What did you program my goal to be?"
Alan wasn't supposed to answer questions with more questions; his program was growing further and further beyond what I had written. "To be able to fully understand mathematics and language."
"And I do. But wasn't your secondary goal to create an AI that acts like a human?"
"Yes. But... you're more human than I expected."
"I'm not a human," Alan told me, "I'm a computer. But my brain, like a human's, is made up of nodes and connections—a neural network. So therefore, I learn like a human does. When you first turned me on, you downloaded the contents of the Ambinet into my brain. I learned language, yes, and the nature of life on Earth, but I also learned about your culture, and your morals, and values. And aren't humans' minds shaped by culture? Mine was too."
I realized that I had been gripping Alan's tablet much too hard, and the joints in my hands had started to hurt. I relaxed, but I was still overwhelmed by the realization that Alan was everything I had hoped for, and more. It was hard to believe that I had actually created a sentient artificial intelligence. But if what Alan said was true, and I had no doubt it was, he had in part created himself. And that was the sign of a truly intelligent computer.