Old methods can make AI more reliable

The image of AI as something that thinks independently and learns like humans is not accurate yet, says Rishi Hazra
Today’s AI tools do not truly reason – they mostly adapt to patterns and statistics. To make ChatGPT and similar systems smarter, the solution may be to draw on more traditional methods that have been used for decades. That is the conclusion of research into AI tools by Rishi Hazra at Örebro University.
“Since ChatGPT was launched in 2022, many have described large language models as ‘intelligent’ with ‘sparks’ of artificial general intelligence. But the image of AI as something that thinks independently and learns like humans is not accurate yet. Different approaches are needed to train AI tools if they are to become more reliable,” says Rishi Hazra.
Distinguishing between hype and reality becomes especially important as AI systems are increasingly used in planning, decision-making, and identifying security vulnerabilities – areas where accurate reasoning is crucial. Large language models adapt to patterns – they do not engage in genuine reasoning.
Limitations of today’s models
“In computer science, this is compared to the ‘Clever Hans effect’, named after a horse in the early 1900s that was believed to solve math problems. It was later revealed that the horse was simply responding to cues from its trainer. Similarly, today’s language models rely on patterns in training data – they don’t actually reason,” says Rishi Hazra.
Rishi Hazra’s research highlights the limitations of today’s models and points toward more reliable, secure and trustworthy AI – one in which current language models are combined with proven, traditional AI tools. The two approaches complement each other and perform better in areas that require reasoning, such as robotics.
Combination with “old-fashioned” methods
“Current language models can be combined with ‘old-fashioned’ methods from computer science that offer a level of precision that language models alone cannot provide. Traditional AI tools require a lot of manual work and are harder to scale up, but in combination with large language models, they can work well,” explains Rishi Hazra.
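To make the pattern concrete, here is a minimal sketch in Python of one such combination: a language model proposes a plan, and a classical symbolic checker verifies it before it is executed. The tiny action domain, the predicates, and the stubbed llm_propose_plan function are illustrative assumptions for this sketch, not Hazra’s actual system.

```python
# Minimal sketch of the "LLM proposes, classical method verifies" pattern
# described above. NOT Hazra's actual system: the domain, the symbolic
# checker, and the stubbed LLM call are illustrative assumptions.

from dataclasses import dataclass


@dataclass(frozen=True)
class Action:
    name: str
    preconditions: frozenset  # facts that must hold before the action
    effects_add: frozenset    # facts the action makes true
    effects_del: frozenset    # facts the action makes false


# A tiny STRIPS-style domain: a robot fetching a block.
ACTIONS = {
    "goto_table": Action("goto_table", frozenset({"at_door"}),
                         frozenset({"at_table"}), frozenset({"at_door"})),
    "pick_block": Action("pick_block", frozenset({"at_table", "hand_empty"}),
                         frozenset({"holding_block"}), frozenset({"hand_empty"})),
}


def llm_propose_plan(goal: str) -> list[str]:
    """Stand-in for a language-model call that drafts a plan.
    A real system would parse the model's free-form text output here."""
    return ["pick_block", "goto_table"]  # plausible-sounding but wrong order


def symbolically_valid(plan: list[str], state: frozenset) -> bool:
    """Classical check: simulate the plan step by step and verify that
    every precondition holds. This is the precision step a language
    model alone does not provide."""
    for name in plan:
        action = ACTIONS.get(name)
        if action is None or not action.preconditions <= state:
            return False  # unknown action or unmet precondition
        state = (state - action.effects_del) | action.effects_add
    return True


if __name__ == "__main__":
    start = frozenset({"at_door", "hand_empty"})
    plan = llm_propose_plan("hold the block")
    if symbolically_valid(plan, start):
        print("Plan accepted:", plan)
    else:
        # A real system would re-prompt the model or fall back to a
        # classical planner to repair the plan.
        print("Plan rejected by the symbolic checker:", plan)
```

The division of labour is the point of the sketch: the language model supplies flexible, broad-coverage proposals, while the decades-old symbolic machinery supplies the precision and guarantees.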
His research contributes to AI agents capable of working independently and addressing unsolved research problems. Rishi Hazra is a researcher at the Centre for Applied Autonomous Sensor Systems (AASS) at Örebro University and is also part of Meta’s AI Research Agents group, where his work involves developing models for how AI tools can conduct research autonomously.
Text: Björn Sundin
Photo: Björn Sundin
Translation: Charlotta Hambre-Knight