Apple AI tool transcribed the word 'racist' as 'Trump'

2025-02-27 05:10:00

Abstract: Apple is fixing its speech-to-text tool after "racist" was transcribed as "Trump." An expert suspects tampering rather than phonetic overlap. The BBC could not reproduce the error.

Apple has said it is working to fix its speech-to-text tool after some social media users found that when they dictated the word "racist" to their iPhone, the system transcribed it as "Trump."

The tech giant suggested that the issue with its dictation service may have been caused by the system's difficulty distinguishing words containing the letter "r." An Apple spokesperson said: "We are aware of an issue with the speech recognition model for dictation and are rolling out a fix today."

However, a speech recognition expert told the BBC that this explanation "simply doesn't hold water." Peter Bell, a professor of speech technology at the University of Edinburgh, believes it is more likely that someone tampered with the software underlying the tool.

Videos circulating online show people saying the word "racist" to the dictation tool. In some cases the system transcribed it correctly; in others it was rendered as "Trump" before quickly reverting to the correct word. The BBC has not been able to reproduce the error, suggesting that Apple's fix may already have taken effect.

Professor Bell said Apple's explanation of phonetic overlap is implausible because the two words are not similar enough to confuse an artificial intelligence (AI) system. Speech-to-text models are trained on recordings of real speech paired with accurate transcriptions, and they also learn to use words in context—for example, distinguishing "cup" in the phrase "a cup of tea" from the similar-sounding "cut."
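To make that point concrete, here is a minimal sketch of how a decoder can combine acoustic evidence with contextual (language-model) evidence to pick the right word. Every probability below is invented for illustration, and the log-linear scoring is a textbook simplification—real systems like Apple's use large neural models trained on vast speech corpora—but it shows why context makes a "cup"/"cut" mix-up easy to avoid:

```python
# Toy illustration: combining acoustic and language-model scores.
# All numbers are made up for this example; real speech recognizers
# use neural models trained on huge amounts of transcribed speech.
import math

# Acoustic model: how well each candidate word matches the audio.
# Suppose the audio is slightly ambiguous between "cup" and "cut".
acoustic_scores = {"cup": 0.48, "cut": 0.52}

# Language model: how likely each word is in the context "a ... of tea".
# "a cup of tea" is far more common than "a cut of tea".
lm_scores = {"cup": 0.90, "cut": 0.10}

def combined_score(word: str, lm_weight: float = 1.0) -> float:
    """Log-linear combination of acoustic and contextual evidence."""
    return math.log(acoustic_scores[word]) + lm_weight * math.log(lm_scores[word])

best = max(acoustic_scores, key=combined_score)
print(best)  # -> "cup": context outweighs the slight acoustic edge for "cut"
```

Because "racist" and "Trump" share almost no sounds and appear in very different contexts, neither the acoustic term nor the contextual term in a scoring scheme like this should ever favor the substitution—which is why Bell finds the overlap explanation unconvincing.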

Professor Bell believes it is unlikely that Apple's problem is a genuine error in its data: the company's English-language model would be trained on hundreds of thousands of hours of speech, which should give it a high degree of accuracy. For "less resourced languages," this could be an AI training issue, but in this case, he said, "This may indicate that someone has gained access to the process."

A former employee who worked on Apple's AI assistant, Siri, told The New York Times: "This smells like a serious prank." Last month, Apple had to withdraw another AI-powered feature after complaints from the BBC and other news organizations: it paused its AI summaries of news headlines after they produced false notifications, including one claiming that tennis player Rafael Nadal was "coming out."

The company announced yesterday that it will invest $500 billion (£395 billion) in the United States over the next four years, including building a large data center in Texas to support Apple Intelligence. CEO Tim Cook also said the company may have to change its policies on diversity, equity, and inclusion (DEI) after President Donald Trump called for an end to DEI programs.