Since the beginning of the year, the designers of ChatGPT at the Californian company OpenAI, of Bing at Microsoft, and of Bard at Google have all admitted that their creations have a propensity to state falsehoods. Or, in the language of computer scientists, to "hallucinate".


And the reason is not at all mysterious: these robots are first and foremost word predictors. From their huge database, they predict the most likely sequence of words in a fraction of a second. This allows them, in most cases, to get things right, but they do not "understand" what a truth or a falsehood is: they operate only in terms of probabilities.
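To illustrate the principle in miniature: the toy sketch below is not any vendor's actual model, just a probability lookup over word pairs. Real chatbots use neural networks over tokens, but the point is the same; the model samples a likely continuation, with no built-in notion of true or false.

```python
import random

# Hypothetical probabilities "learned" from a text corpus (made-up numbers)
next_word_probs = {
    ("the", "capital"): {"of": 0.9, "city": 0.1},
    ("capital", "of"): {"France": 0.6, "Canada": 0.3, "Atlantis": 0.1},
}

def predict_next(prev_two):
    """Sample the next word from the learned distribution for the previous two words."""
    dist = next_word_probs[prev_two]
    words, probs = zip(*dist.items())
    return random.choices(words, weights=probs)[0]

# A plausible-sounding but possibly false continuation can come out of this
print(predict_next(("capital", "of")))
```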


In a study pre-published in March by Microsoft researchers themselves, to accompany the release of the new version of the robot, we could read that it had "difficulty knowing when it should be confident and when it should just guess". As a result, it sometimes "invented facts that are not in its training data".


In other words, it unknowingly spreads disinformation. This can pose a serious problem if part of the public starts using the tool with blind faith in its answers...


The search for solutions

There is a lot of ongoing research testing different approaches to reduce, but not eliminate, the error rate. In one of them, for example, pre-published on May 23, researchers from MIT and Google report having experimented with a "debate" between two versions of ChatGPT, in the hope that by comparing their answers they would reduce the number of errors. An OpenAI working paper published in March discussed what computer scientists call reinforcement learning from human feedback, in which humans react to the answers given by the AI.
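For readers curious about what such a "debate" looks like in practice, here is a rough sketch of the general shape of the idea, not the MIT/Google implementation. The ask_model helper is an assumption: any function that sends a prompt to a chatbot and returns its text.

```python
from typing import Callable

def debate(question: str, ask_model: Callable[[str], str], rounds: int = 2) -> str:
    """Run a simple 'debate' loop: two answers criticize each other, then converge."""
    answer_a = ask_model(question)
    answer_b = ask_model(question)
    for _ in range(rounds):
        # Each side sees the other's answer and may revise its own.
        answer_a = ask_model(
            f"Question: {question}\nAnother assistant answered: {answer_b}\n"
            "Point out any mistakes and give your best final answer."
        )
        answer_b = ask_model(
            f"Question: {question}\nAnother assistant answered: {answer_a}\n"
            "Point out any mistakes and give your best final answer."
        )
    # Ask for a consensus answer, or an admission of uncertainty, at the end.
    return ask_model(
        f"Question: {question}\nAnswer 1: {answer_a}\nAnswer 2: {answer_b}\n"
        "State the answer both agree on, or say you are unsure."
    )
```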


Some experts even doubt that a solution is possible. Interviewed by the CBS network's 60 Minutes news program in April, Google CEO Sundar Pichai said that "no one in the field has yet solved the problem of hallucinations... Whether it's even possible to solve it is the subject of intense debate. I think we will make progress." Interviewed a month later by The Washington Post, AI expert Geoffrey Hinton, who recently resigned from Google so he could, he says, speak more openly about his concerns about this new technology, predicted that we "will never get rid" of the specific problem of "hallucinations".


For Sébastien Gambs, of the computer science department at the University of Quebec in Montreal, whose research fields include the ethics of AI, the solution does not in fact lie with computer scientists, but with policy and public education: we will have to "raise people's awareness of the fact that a conversational agent is not a reliable and verified source of information". And the task, he admits, will not be easy, because of "the quality of the text produced". Until recently, "we could see that it was not a human who had written. Today, we cannot tell the difference". The solution, for him, will therefore lie in developing people's critical thinking and their "awareness of false information".


Safeguards for chatbots

One could in theory install guardrails; a rough sketch of how some of them might be combined appears after the list:


for example, forcing the robot to provide references, which would at least make it possible to judge whether the source of its information is credible;

adding computer code that would require it to insert a warning in its text when it does not have full confidence in its information; in other words, to the lawyer's fictitious references, it would have added "I am not sure of my examples";

programming it so that it relies on more than one source when it comes to nominative information (name, date of birth, legal or scientific reference, etc.), though this effort would have little value if it settles for two equally unreliable sources;

programming it so that it relies only on reliable sources, but this would require regulations that achieve international consensus on how to "categorize" the sources in question.
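As promised above, here is an illustrative sketch combining two of these guardrails around a hypothetical chatbot client. The answer_with_sources and source_is_credible functions are assumptions, not a real API; the wrapper simply appends citations and a warning when confidence or sourcing is weak.

```python
from typing import Callable, List, Tuple

def guarded_answer(
    question: str,
    answer_with_sources: Callable[[str], Tuple[str, float, List[str]]],
    source_is_credible: Callable[[str], bool],
    min_confidence: float = 0.8,
    min_sources: int = 2,
) -> str:
    """Return the model's answer with its sources, plus a warning if confidence or sourcing is weak."""
    answer, confidence, sources = answer_with_sources(question)
    credible = [s for s in sources if source_is_credible(s)]

    notes = []
    if confidence < min_confidence:
        notes.append("Warning: I am not sure of this answer.")
    if len(credible) < min_sources:
        notes.append(f"Warning: only {len(credible)} credible source(s) found.")

    citation = "Sources: " + "; ".join(sources) if sources else "No sources provided."
    return "\n".join([answer, citation] + notes)
```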
