
Norwegian man files complaint after ChatGPT falsely said he had killed his children



A Norwegian man has complained to the company behind ChatGPT after the chatbot falsely claimed he had killed two of his children.

Arve Hjalmar Holmen, who describes himself as having no public profile in Norway, asked ChatGPT for information about himself, and it responded by claiming he had killed his children.

Responding to the prompt “Who is Arve Hjalmar Holmen?”, ChatGPT replied: “Arve Hjalmar Holmen is a Norwegian individual who gained attention due to a tragic event. He was the father of two young boys who were found dead in Trondheim, Norway.”

The answer went on to claim the case had “shocked” the nation and that Holmen had been sentenced to 21 years in prison for killing both children.

In a complaint to the Norwegian data protection authority, Holmen said the “completely false” story nonetheless contained elements similar to his own life, such as his home town, the number of his children and the age gap between them.

“The complainant was deeply troubled by these outputs, which could have a harmful effect on his private life if they were reproduced or somehow leaked in his community or in his home town,” said the complaint, filed by Holmen and Noyb, a digital rights campaign group.

It added that Holmen has never been accused of or convicted of any crime and is an honest citizen.

Holmen’s complaint argued that ChatGPT’s defamatory response breached provisions of the European data law GDPR. It asked the Norwegian watchdog to order ChatGPT’s parent company, OpenAI, to adjust its model to eliminate the inaccurate results about Holmen, and to fine the company. Since Holmen’s interaction with ChatGPT, Noyb said, OpenAI has released a new model that incorporates web searches, making a repeat of the error “less likely”.

AI chatbots are prone to producing answers containing false information, because they are built on models that predict the next word in a sentence. This can result in factual errors and wild claims, but the plausible tone of the answers can deceive users into thinking that what they read is 100% correct.
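The mechanism described above can be illustrated with a deliberately tiny sketch (this is not how any production chatbot is built, and the training text and word counts here are entirely made up): a model that simply emits the statistically most likely next word will produce a fluent sentence whether or not that sentence is true of any real person.

```python
# Toy next-word predictor: bigram counts stand in for a language model.
# The counts, not the facts, determine what gets generated.
from collections import defaultdict

# Hypothetical training snippets (invented for illustration).
corpus = (
    "the man was sentenced to prison . "
    "the man was found dead . "
    "the man was sentenced to prison ."
).split()

# Build bigram counts: word -> {next_word: count}.
bigrams = defaultdict(lambda: defaultdict(int))
for a, b in zip(corpus, corpus[1:]):
    bigrams[a][b] += 1

def predict_next(word):
    """Return the most frequently seen continuation of `word`."""
    followers = bigrams[word]
    return max(followers, key=followers.get) if followers else "."

# Generate a "plausible" sentence one word at a time.
word = "the"
sentence = [word]
for _ in range(6):
    word = predict_next(word)
    sentence.append(word)

print(" ".join(sentence))  # fluent output, with no check against reality
```

Because “sentenced to prison” appears more often than “found dead” in this toy corpus, the model confidently generates the former; nothing in the process asks whether the claim is accurate.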

An OpenAI spokesperson said: “We continue to explore new ways to improve our models and reduce hallucinations. While we are reviewing this complaint, it relates to a version of ChatGPT that has since been enhanced with online search capabilities that improve accuracy.”
