A new scientific study on how we use artificial intelligence warns that it can dull our critical thinking. The research, conducted by a team from Microsoft and Carnegie Mellon University, found that people apply less cognitive effort to their work when they lean on AI tools. In other words: if we use AI the wrong way, we can get dumber.
“AI can help us synthesize information, elevate our thinking, and promote critical engagement, prompting us to see our assumptions more clearly and look beyond them,” says Lev Tankelevitch, a senior researcher at Microsoft Research.
But to reap these benefits, Tankelevitch says, AI needs to be treated as a thought partner rather than a substitute for human judgment. Much of that comes down to designing a user experience that promotes critical reflection rather than passive trust: interfaces that invite users to check and question AI-generated content and that make the AI’s reasoning process visible.
From ‘Task Execution’ to ‘Task Management’
The study found that knowledge workers who place high confidence in AI apply less cognitive effort to their work. “Higher confidence in AI is associated with less critical thinking,” the researchers write. As one study participant put it, “I knew the AI could do it without difficulty, so I just never thought about it.” It wasn’t that they didn’t care; the effort simply no longer seemed necessary.
This mindset has big implications for the future of work. Tankelevitch tells me that AI is shifting knowledge workers from “task execution” to “task management.” Instead of producing content by hand, professionals oversee AI-generated output, making decisions about its accuracy and how to integrate it. “Instead of simply accepting the first draft, users should actively manage, guide, and refine it,” Tankelevitch says.
Rather than passively accepting output, the research emphasizes, knowledge workers can strengthen their decision-making by staying actively involved. “The research also points to the value of experts who bring their own knowledge to bear while working with AI,” Tankelevitch notes. “AI works best when guided by human expertise; that is what produces better decisions and stronger results.”
The research also found that many knowledge workers struggle to critique AI output because they lack the domain knowledge needed to assess its accuracy. “Even when users accept that AI can be wrong, they often don’t have the expertise to correct it,” Tankelevitch explains. The problem is especially acute in technical areas such as code, data analysis, or financial statements, where verifying the AI’s work requires deep subject-matter knowledge.
The cognitive offloading paradox
Trusting AI too much can lead to a problem called cognitive offloading. The phenomenon is not new: people have long handed mental work off to tools, from calculators to GPS devices. Nor is cognitive offloading inherently negative. Done properly, it lets users focus on higher-order thinking instead of mundane, repetitive tasks, Tankelevitch points out.
Generative AI, however, takes offloading to a new level: it produces complex text, code, and analysis, along with new kinds of potential mistakes and problems. Many people accept AI output blindly (and often that output is poor or simply wrong). This is especially true for tasks people consider unimportant. “When people see a task as low stakes, they don’t critically evaluate the output,” Tankelevitch says.
UX role
As AI developers build AI user experiences, they need to take these findings into account. Chat UX should be structured to encourage users to think through and verify the content the AI produces rather than accept it at face value.
Redesigning AI interfaces to support this new “task management” workflow, with critical engagement at the center, is key to reducing the risks of cognitive offloading. “Reasoning models make the AI’s process more transparent, helping users learn and question the underlying concepts,” Tankelevitch says. “Transparency matters. Users shouldn’t just see what the AI says; they should understand why it says it.”
You have probably seen this on an AI platform like Perplexity. Its interface lays out a clear, logical chain of the reasoning steps and actions the AI took to reach its results. If redesigned AI interfaces also surfaced confidence ratings or alternative perspectives where appropriate, AI tools could steer users away from blind trust and toward active evaluation of the output. Another UX intervention is to have the AI prompt users with follow-up questions, turning passive reading into active questioning. The end product of this open collaboration between human and AI is better, especially when each side complements the other’s strengths.
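To make these UX ideas concrete, here is a minimal sketch, in TypeScript, of the kind of data a chat interface would need in order to show its reasoning, a confidence rating, and follow-up questions alongside the answer. Everything in it is hypothetical and illustrative; none of the names come from the study or from any real product’s API.

```typescript
// Hypothetical shape of an AI answer payload in a chat UX designed to invite
// scrutiny rather than passive acceptance. All names are illustrative only.

interface ReasoningStep {
  description: string;  // e.g. "Compared the claim against the cited sources"
  sources?: string[];   // links or citations backing this step, if any
}

interface AssistantAnswer {
  answer: string;                          // the generated content itself
  reasoning: ReasoningStep[];              // visible chain of steps behind the answer
  confidence: "low" | "medium" | "high";   // a hedged self-assessment, not a guarantee
  alternatives?: string[];                 // competing interpretations or perspectives
  followUpQuestions: string[];             // prompts nudging the user to verify or probe
}

// A minimal renderer: lead with the cues that invite evaluation (confidence,
// reasoning, questions) so checking the output is the path of least resistance.
function renderAnswer(a: AssistantAnswer): string {
  const lines = [
    `Confidence: ${a.confidence}`,
    a.answer,
    "How this was produced:",
    ...a.reasoning.map((s, i) => `  ${i + 1}. ${s.description}`),
    "Questions worth checking:",
    ...a.followUpQuestions.map((q) => `  - ${q}`),
  ];
  return lines.join("\n");
}
```

The design choice this sketch tries to capture is simply to make scrutiny the default: the confidence level and the reasoning chain sit next to the answer, so evaluating the output takes less effort than ignoring it.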
Some will get dumber
The research raises important questions about AI’s long-term impact on human cognition. If knowledge workers become passive consumers of AI-generated content, their critical thinking skills may atrophy. But if AI is designed and used as an interactive, thought-provoking tool, it can sharpen human intelligence rather than dull it.
Tankelevitch argues that this is not just theory; it has been demonstrated in the field. Studies in education, for example, indicate that AI can boost learning when used the right way. “An early study in Nigeria showed that AI tutoring could help students achieve two years of learning progress in just six weeks,” he says. “Another study found that students working with teachers supported by AI were more likely to master key topics.” The key, Tankelevitch tells me, was that teachers remained in the lead: educators guided the interactions, provided context, and encouraged the critical thinking that mattered.
AI has also been shown to boost experts’ problem-solving in scientific research, where it is used to explore complex hypotheses. “Researchers who use AI for discovery still rely on human intuition and critical judgment to validate what it finds,” Tankelevitch notes. “The most successful AI applications keep human oversight at the center.”
Given the current state of generative AI, the technology’s impact on human intelligence will depend not on the AI itself but on how we choose to use it. UX designers can certainly nudge us toward good behavior, but it is on us to do the work properly. AI can strengthen or undermine critical thinking, depending on whether we engage critically with its output. The future of AI-assisted work will be determined not by the sophistication of the technology but by the people using it. My bet: as in every technological revolution in the history of civilization, some people will get a lot dumber and others will get a lot smarter.