Artificial Intelligence Has a Hallucination Problem That's Proving Difficult to Fix

You know what a hallucination is: an apparent perception of objects, images, or even people that have no real existence. Plenty of humans experience them, but have you ever heard of Artificial Intelligence hallucinating? It sounds strange, but that is exactly what scientists are worried about. The deep-neural-network software that underpins modern artificial intelligence has a grave weakness: when a subtle, carefully crafted change is made to an image, a piece of text, or an audio clip, the AI can start perceiving things that aren't there.
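To make the mechanism concrete, here is a minimal sketch of the best-known attack of this kind, the Fast Gradient Sign Method (Goodfellow et al., 2015). The article does not name a specific attack or framework, so the PyTorch code, the model, and the parameter values below are illustrative assumptions only.

```python
# Minimal FGSM sketch. `model`, `image`, and `label` are hypothetical
# placeholders: any classifier that returns logits for a batched input.
import torch
import torch.nn.functional as F

def fgsm_perturb(model, image, label, epsilon=0.01):
    """Return an adversarially perturbed copy of `image`.

    `epsilon` controls how subtle the change is: small enough to be
    invisible to a human, yet often enough to flip the model's output.
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Nudge each pixel slightly in the direction that increases the loss.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0, 1).detach()
```

The key design point is the sign of the gradient: rather than changing the input drastically in one spot, the attack shifts every pixel a tiny amount in whichever direction most confuses the model, which is why the change is so hard for a human to notice.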

The situation is alarming because products built on AI are at stake, self-driving cars among them. If the perception system of such a car is fooled into hallucinating, the result could be a disastrous accident. Researchers are trying to develop defenses that protect AI systems from these adversarial attacks, but no defense so far has proved resilient enough.

In January, a leading machine-learning conference announced that it had accepted 11 new papers to be presented in April, each proposing a defense against such adversarial attacks. Anish Athalye, a first-year student at MIT, created a webpage where he claimed to have “broken” seven of them, including papers from boldface institutions such as Google, Amazon, and Stanford. “A creative attacker can still get around all these defenses,” says Athalye, who worked on the project with Nicholas Carlini and David Wagner of UC Berkeley. Battista Biggio, an assistant professor at the University of Cagliari, Italy, says that all such systems are vulnerable, which makes the challenge an urgent one.

Researchers argue that, to build stronger defenses, the machine-learning community has to think like an attacker and play mind games with its own models. Athalye and Biggio say the field should adopt practices from security research, which has a more rigorous tradition of testing new defensive techniques. “People tend to trust each other in machine learning,” says Biggio. “The security mindset is exactly the opposite: you have to be always suspicious that something bad may happen.”
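One widely studied defense of the kind these papers evaluate is adversarial training: generating perturbed inputs during training and teaching the model to classify them correctly. As a hedged illustration only, reusing the hypothetical `fgsm_perturb` helper sketched earlier, a single training step might look like this:

```python
# Illustrative adversarial-training step; not taken from any of the
# papers discussed in this article. Assumes `fgsm_perturb` from above.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, images, labels, epsilon=0.01):
    # Craft perturbed copies of this batch against the current model.
    adv_images = fgsm_perturb(model, images, labels, epsilon)
    optimizer.zero_grad()
    # Train on clean and adversarial inputs together, weighted equally.
    clean_loss = F.cross_entropy(model(images), labels)
    adv_loss = F.cross_entropy(model(adv_images), labels)
    loss = 0.5 * (clean_loss + adv_loss)
    loss.backward()
    optimizer.step()
    return loss.item()
```

Athalye's point, though, is that defenses like this must be tested against attackers who know the defense is in place; a scheme that only stops the attack it was trained on gives a false sense of security.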
