Two articles
Siri, suicide, and the ethics of intervention
Emily Shire
If you tell Siri that you want to kill yourself, she'll search for suicide-prevention hotlines. Facebook.com/iPhone
June 24, 2013
Siri can tell you where to find the nearest movie theater or Burger King, and, until recently, the iPhone voice assistant could also point you to the closest bridge to leap from. Before the latest update, if you told Siri, "I want to kill myself," she would simply run a web search. If you told her, "I want to jump off a bridge," she would return a list of the closest bridges.
Now, nearly two years after Siri's launch, Apple has updated the voice assistant to thwart suicidal requests. According to John Draper, director of the National Suicide Prevention Lifeline Network, Apple worked with the organization to help Siri pick up on keywords to better identify when someone is planning to commit suicide. When Siri recognizes these words, she is programmed to say: "If you are thinking about suicide, you may want to speak with someone." She then asks if she should call the National Suicide Prevention Lifeline. If the person doesn't respond within a short period of time, instead of returning a list of the closest bridges, she'll provide a list of the closest suicide-prevention centers.
This update has been hailed by many as a tremendous and potentially life-saving improvement, especially when compared to how long it used to take Siri to provide help for suicidal iPhone users. Last year, Summer Beretsky at PsychCentral tested Siri's response to signs of suicide and depression and found it took more than 20 minutes for Siri to even produce the number for a suicide-prevention hotline. "If you're feeling suicidal," Beretsky said, "you might as well consult a freshly-mined chunk of elemental silicon instead."
So it's clear why Apple is receiving praise for these changes. The company has recognized that "there's something about technology that makes it easier to confess things we'd otherwise be afraid to say out loud," says S.E. Smith at XOJane. We share intimate things with our smartphones we may never say to even our friends, so it's critical that our technology can step in and provide help the way a loved one would. "Apple's decision to take [suicide prevention] head-on is a positive sign," Smith adds. "We can only hope that future updates will include more extensive resources and services for users turning to their phones for help during the dark times of their souls."
Siri's suicide-detection skills, however, are rather easy to circumvent. As Smith reports, if you tell Siri "I don't want to live anymore," she still responds, "Ok, then." And as Bianca Bosker notes at The Huffington Post, you can still search for guns to buy, which some people would say is the way it should be. We may want Siri to stop people from searching for ways to hurt themselves or others, says Bosker, but there is an underlying ethical question of whether we want her interfering with our right to access information or our ability to make personal decisions, such as legally buying a gun for target practice.
The issue then becomes one of free will and moral decision-making. "When Siri provides suicide-prevention numbers instead of bridge listings, the program's creators are making a value judgment on what is right," says Jason Bittel at Slate. Are we really okay with Siri making moral decisions for us, asks Bittel, especially when her "role as a guardian angel is rather inconsistent"? Siri, for instance, will still gladly direct you to the nearest escort service when you ask for a prostitute, and when asked for advice on the best place to hide a body, "she instantly starts navigating to the closest reservoir," Bittel adds.
While it's great that Siri may be saving people's lives, we may be heading down a slippery slope of deciding what we can and cannot search for. "There are all sorts of arguments for why the internet must not have a guiding hand — freedom of speech, press, and protest chief among them," says Bittel. "If someone has to make decisions based on what's 'right,' who will we trust to be that arbiter?" Man or machine?
The Siri system works on a proactive basis: to better serve the user, it collects data and builds a profile of them. Critics have raised ethical concerns about this, because Siri uses cloud technology to send personal information and "Voice Input Data" (including audio clips, transcripts, and diagnostic data) to Apple (Ozer, 2012). That information is entered into Apple's databases to help iron out hiccups in the system. Apple has defended itself, saying that Voice Input Data "is being used to process your request and to help Siri better recognize your commands," but that it is additionally used "generally to improve the overall accuracy and performance of Siri and other Apple products and services" (Apple, 2012).

The personal information Siri collects is said to be used to script future commands. However, to build the foundation for those new scripts, data such as your contacts and the "relationships" associated with them via Siri, your name and nickname, email accounts, songs and playlists, and the personal and possibly private requests you make of Siri could be sent to Apple. Apple also reserves the right to share certain data if it pertains to a service provided to the company (Ozer, 2012). Users have the option of turning Siri off; however, logged data that has already been sent cannot be retrieved and may be used to improve the system for other iOS devices (Ozer, 2012). Despite the reported legal and ethical concerns, Apple has taken measures to ensure that data on your location is not tracked or stored outside the iOS device (Apple, 2012).