Apple engineers assert the AI apps of today are essentially still just fakes that lack 'true' intelligence

Oct 17, 2024, 13:21 IST
Business Insider India
Given how much easier they can make our day-to-day lives, it is easy to get swept up in the impressive achievements of apps like ChatGPT, Midjourney, and DALL-E. These AI applications, powered by large language models (LLMs), can generate text, create images, and hold conversations that mimic human interaction. But according to a recent study by Apple researchers, the intelligence displayed by these AI systems is far from genuine.

The illusion of intelligence

The Apple research team, led by a trio of AI experts, conducted a series of tests on various popular LLMs. They found that while these models appear intelligent on the surface, they often become unreliable when asked to perform tasks that require true logical reasoning. This discrepancy, they argue, reveals that LLMs don’t actually “understand” the questions they’re answering. Instead, they simply recognise familiar patterns and respond based on statistical probabilities, rather than meaningful comprehension.

To illustrate this point, the researchers used a simple analogy: imagine a child asking their parent how many apples are in a bag. The child also mentions that some of the apples are too small to eat. Both the child and the parent understand that the size of the apples has no bearing on their quantity. However, when AI models are presented with similarly structured questions — questions with information that can be considered unnecessary (like the size of the apples) — they frequently become confused, offering nonsensical or incorrect answers.
In their study, the researchers asked the AI models hundreds of questions, including some with non-pertinent details. They found that even minor changes in phrasing or context were enough to throw off the models. These AIs, despite their advanced programming, struggled to filter out irrelevant information, suggesting that they lack the capacity for genuine logical reasoning.
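To make that kind of test concrete, here is a minimal Python sketch of how such question pairs might be built. It illustrates the general idea rather than the researchers’ actual benchmark code: the `make_variants` helper and the exact wording of the questions are invented for this example, and `ask_model` is a hypothetical placeholder for a call to whichever LLM is being evaluated.

```python
# Illustrative sketch only: builds a baseline arithmetic question and a variant
# padded with an irrelevant clause, so a model's answers can be compared.

def make_variants(total: int, small: int) -> tuple[str, str, int]:
    """Return (baseline question, perturbed question, correct answer)."""
    baseline = f"A bag contains {total} apples. How many apples are in the bag?"
    perturbed = (
        f"A bag contains {total} apples. {small} of them are too small to eat. "
        "How many apples are in the bag?"
    )
    # The extra clause changes nothing: the correct answer is still `total`.
    return baseline, perturbed, total


def ask_model(question: str) -> int:
    """Hypothetical LLM call; a real evaluation would plug in an API client here."""
    raise NotImplementedError


if __name__ == "__main__":
    baseline, perturbed, answer = make_variants(total=7, small=3)
    print(baseline, "->", answer)
    print(perturbed, "->", answer, "(a reliable reasoner gives the same answer)")
    # model_answer = ask_model(perturbed)  # would then be checked against `answer`
```

According to the study, perturbations of roughly this kind were enough to noticeably degrade the models’ accuracy, even though the underlying arithmetic never changed.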

How do modern AI apps work?

Understanding why AI models like ChatGPT struggle with logic and nuance requires a closer look at how these systems function. At their core, LLMs are based on complex machine learning algorithms that analyse vast amounts of text data to learn language patterns. Once trained, these models use probabilities to generate responses based on what they’ve seen in their training data. However, they don’t “understand” language the way humans do.

When an LLM receives a prompt, it breaks down the text into tokens and predicts the next word in a sequence based on statistical likelihood. This approach allows the AI to generate coherent, contextually relevant responses but doesn’t equip it to handle questions that require deeper, contextual reasoning. In essence, AI models are mimicking language without truly grasping the meaning behind it.
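A small Python sketch of that mechanic is shown below, using the openly available GPT-2 model through the Hugging Face transformers library purely as a stand-in; the Apple study concerns much larger models, and the prompt here is only an example.

```python
# Minimal demonstration of next-token prediction: the model scores every token
# in its vocabulary, and the "answer" is whichever continuation ranks highest.
# Requires: pip install torch transformers
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "There are five apples in the bag, and three of them are"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits  # shape: (1, sequence_length, vocab_size)

# Probability distribution over the token that would come next after the prompt.
next_token_probs = torch.softmax(logits[0, -1], dim=-1)
top = torch.topk(next_token_probs, k=5)

for prob, token_id in zip(top.values, top.indices):
    print(f"{tokenizer.decode([int(token_id)])!r:>12}  p={prob.item():.3f}")
```

The researchers’ point is that everything built on top of this step, however fluent it sounds, rests on probability rankings like these rather than on an internal model of the problem being described.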

To elaborate on the earlier example, if asked, “How many apples are in a bag if three are too small to eat?” a human would immediately recognise that the size of the apples has no impact on the total count. An AI, on the other hand, may get tripped up by the additional detail and offer a convoluted response because it cannot separate what’s important from what’s not.

Apple’s researchers suggest that this limitation is a fundamental flaw in how AI models are designed. While the technology has advanced rapidly, the underlying mechanics remain rooted in pattern recognition rather than true comprehension. As a result, today’s AI applications are better suited for tasks that involve straightforward pattern matching, like language translation or text prediction, but struggle when asked to exhibit human-like intelligence.

Overestimating AI capabilities carries risks

Because these modern AI models can produce remarkably human-like text, many users tend to attribute greater intelligence and understanding to them than they actually possess. This misconception can lead to unintended consequences, such as relying on AI for critical decision-making in areas where accuracy and logical reasoning are paramount, a growing concern as more and more companies fold AI into their business decisions.
The researchers also caution that the “illusion” of intelligence can make it difficult for users to recognise when an AI is providing incorrect information. In their tests, Apple’s team noted that LLMs frequently gave answers that sounded correct on the surface but fell apart upon closer inspection. These findings highlight the importance of scrutinising AI-generated content and using these tools as supplements to human expertise rather than replacements for it.

As AI continues to evolve, researchers will need to explore new approaches that go beyond pattern recognition. Achieving genuine AI intelligence will likely require breakthroughs in areas like cognitive computing and neuromorphic engineering, fields that aim to create systems capable of human-like understanding and reasoning.

For now, the takeaway from Apple’s research is clear: despite their impressive capabilities, today’s AI apps are tools, not thinkers. As Apple researchers pointed out, these models are designed to simulate understanding but lack the cognitive depth to handle complex reasoning. Recognising these limitations is crucial as we integrate AI into more aspects of our lives.

The findings of this research have been published on a preprint server and can be accessed here.