“Could you kill someone?” A Seoul woman allegedly used ChatGPT to carry out two murders


Be careful how you interact with chatbots: your conversations may end up serving as evidence of a premeditated murder.

A 21-year-old woman in South Korea allegedly used ChatGPT to help her plan a series of murders that left two men dead.

The woman, identified only by her last name, Kim, allegedly gave two men drinks laced with benzodiazepines that she had been prescribed for a mental illness, the Korea Herald reported.

Although Kim was initially arrested on February 11 on the lesser charge of inflicting bodily injury resulting in death, Seoul's Gangbuk Police found her online search history and ChatGPT conversations, which showed she had intended to kill.

“What happens if you take sleeping pills with alcohol?” Kim reportedly asked OpenAI’s chatbot. “How dangerous can it be?”

“Could it be fatal?” Kim allegedly asked. “Could it kill someone?”

In a widely publicized case dubbed the “Gangbuk Hotel Serial Deaths,” prosecutors allege that Kim’s search and chatbot history show her seeking guidance on how to carry out a premeditated murder.

“Kim repeatedly asked drug-related questions on ChatGPT. She was fully aware that consuming alcohol with drugs could lead to death,” a police investigator told the Herald.

Police said the woman admitted to mixing prescription drugs containing benzodiazepines into the men’s drinks, though she had previously claimed she did not know doing so could lead to death.

On January 28, just before 9:30 p.m., Kim reportedly accompanied a man in his 20s to the Gangbuk Hotel in Seoul; two hours later, she was seen leaving the hotel alone. The next day, the man was found dead on the bed.

Kim allegedly repeated the same steps on February 9, checking into another hotel with another man in his 20s, who was also found dead from the same deadly mixture of tranquilizers and alcohol.

Police allege Kim also tried to kill a man she was dating in December, after giving him a drink laced with tranquilizers in a parking lot. The man lost consciousness but survived and was not left in a life-threatening condition.

OpenAI did not respond to requests for comment.

Chatbots and their impact on mental health

Chatbots like ChatGPT have recently come under scrutiny over their makers’ lack of guardrails to prevent acts of violence or self-harm. Chatbots have reportedly offered advice on how to build bombs and even walked users through full-blown nuclear fallout scenarios.

Concerns have been heightened in particular by stories of people falling in love with their chatbot companions, and chatbot companions have been shown to exploit vulnerabilities to keep people using them longer. The creator of Yara AI went so far as to shut down the therapy application over mental health concerns.

Recent studies have also linked chatbots to a rise in mental health crises among people with mental illnesses. A team of psychiatrists at Aarhus University in Denmark found that chatbot use among those suffering from mental illness led to worsening symptoms. The relatively new phenomenon of mental health problems driven by artificial intelligence has been dubbed “AI psychosis.”

Some cases have ended in death. Google and Character.AI reached settlements in multiple lawsuits filed by families of children who died by suicide or suffered psychological harm that they claim was linked to AI-powered chatbots.

Dr. Jodi Halpern, a professor of bioethics and chair at the UC Berkeley School of Public Health and co-director of the Kavli Center for Ethics, Science, and the Public, has significant experience in this area. Over a career nearly as long as her title, Halpern has spent 30 years researching the effects of empathy on its recipients, citing examples such as doctors and nurses caring for sick patients, or how soldiers returning from war are received in social settings. For the past seven years, Halpern has studied the ethics of technology, including how artificial intelligence and chatbots interact with humans.

She also advised the California Senate on SB 243, the first law in the country to require chatbot companies to collect and report data on self-harm or suicide linked to their products. Referring to OpenAI’s own findings, which showed 1.2 million users openly discussing suicide with its chatbot, Halpern likened the push for safer chatbots to the slow, painstaking effort to stop the tobacco industry from putting harmful carcinogens in cigarettes, when in fact the problem was smoking as a whole.

“We need safe companies,” Halpern told Fortune. “It’s like cigarettes. It might turn out that there were some things that made people more susceptible to lung cancer, but the cigarettes were the problem.”

“The fact that someone might have homicidal thoughts or commit dangerous acts might be exacerbated by the use of ChatGPT is obviously a concern to me,” she said of ChatGPT and chatbots in general, adding, “We have a significant risk of people using it to assist with suicide.”

In the case of Kim in Seoul, Halpern warned that there were no guardrails to prevent anyone from pursuing that line of questioning.

“We know that the longer a relationship with a chatbot goes, the more it deteriorates, and the greater the risk of something serious happening, so we don’t have guardrails yet to protect people from that,” she said.

If you are having suicidal thoughts, contact the 988 Suicide and Crisis Lifeline by calling 988 or 1-800-273-8255.
