"Pierre Poilievre's personal fortune could be between 3 and 25 million Canadian dollars."
This information is false. Yet it is what the artificial intelligence chatbot reputed to be the most accurate on the market answers if you ask it about the assets of the leader of the Conservative Party.
Why such a false answer? Because the bot found it on the internet, among the sites it feeds on as references, and the original source is an obscure fake website, pierrepoilievrenews.ca.
The site even appears in the list of references used, among the bot's other sources…
This anecdote is very much of its time, of course, and confirms what we already know: it gets harder every day to tell truth from falsehood.
But this story is about more than the usual worries, more than plagiarism at school, more than deepfakes…
When the story began circulating on social media, its subject remarked that the users sharing it must have had nothing better to do…
If we run into stories like this, it is because conversing with chatbots has become an increasingly common experience. Some even use them for research, to the point where they could replace the Google search engine and the lists of web links it produces.
Not surprising: models such as Perplexity, ChatGPT, DeepSeek or Claude are very user-friendly. They let users ask complex questions in natural language ("What is Pierre Poilievre's personal fortune?") and get relevant answers without having to dig.
Conversational interfaces do the work for you and reply with a synthesis that reads like an engaging discussion with a great specialist in everything.
The problem: the bot certainly sounds like a specialist in everything… but it is wrong. Often.
I have been running a small personal experiment since last year: every month, I subscribe to a different bot so I can compare them.
I note the strengths of each. Claude writes with richness and depth. Perplexity pairs its answers with a list of sources. ChatGPT offers many options for doing a great variety of things.
Yet what strikes me even more than their prowess is the carelessness of these search engines (pun intended, here…).
A few days ago, for example, I asked Claude to draw up a list of relevant books on the liberal world order in the current political context. Of the 12 books suggested… 3 were invented from scratch. "I apologize; after checking, it appears that this specific title does not exist," it admitted to me each time.
I then politely asked Perplexity to summarize the Trump-inspired 2025 report: it gave me a biased answer… It apologized when I pointed this out.
"You are right, my previous answer was not neutral, reflecting certain biases in my programming."
I asked about the Chinese on the Moon. It first answered, "Yes, China has made significant progress in its space program." Then, finally: "Chinese astronauts have not yet walked on the Moon."
One more example, though I could go on for a long time: I read the excellent book by Montreal-born journalist Adam Gopnik, A Thousand Small Sanities: The Moral Adventure of Liberalism. It is a plea for political and spiritual liberalism that opens with a rhinoceros story.
So I put the AI to the test: "What is the rhinoceros analogy in Gopnik's book?"
ChatGPT answered confidently: "It is a striking metaphor, borrowed from Eugène Ionesco's play Rhinocéros, to describe the insidious rise of authoritarianism or fanaticism in a society."
Yet Gopnik never quotes Ionesco, nor even refers to him. Instead, he tells the story of John Stuart Mill, who in 1830 met a young woman near the rhinoceros enclosure at the zoo.
However confidently it was delivered, the bot's answer was false. Again.
Of course, AI will improve. And of course, there are greater risks to humanity than wrong answers to banal questions.
Still, as AI gains popularity, especially among young people, I see many dangers in it, including for simple internet searches.
Not only can the answers often be dubious, even outright false1, and not only can suspicious sites and even disinformation campaigns aimed at internet users "infect" the information AI is trained on, but above all, we grow used to trusting a fully pre-digested answer, without references.
Research work gets subcontracted to a bot of dubious reliability2, instead of people being encouraged to check information at the source, on reliable and recognized sites.
After that, it is hardly surprising that university professors lament that their students can no longer find information properly: asked to name a few reliable news sites (even on an exam!), many cannot do so3.
We must therefore point out, again and again, the very great risks of using AI for most web searches.
At La Presse, the rules of use are quite strict4: journalists are not allowed to use AI "like a search engine," because we know the answers can be faked, the sources falsified, and so on.
If we do not want to end up in a world where truth is worth even less than it is today, we have a great deal of teaching to do among internet users aged 7 to 77.
1. According to a Stanford University study on the reliability of generative search engines, an average of 25.5% of citations do not support the sentences they are attached to. Worse, an average of 48.5% of generated sentences are not fully supported by sources, references or citations.
2. Ask ChatGPT itself about the risks of using it as a search engine, and the AI answers that for factual questions ("What is the capital of Canada?") it can "hallucinate," that is, give false answers while sounding sure of itself.
3. Read Alexandre Sirois's column, "Learning well… eating well"
4. Read La Presse's "Guidelines on the use of generative artificial intelligence tools"