Last updated on 20 November 2025
TalkBack, part of the Android Accessibility Suite, is the screen reader developed by Google and comes pre-installed on most Android phones and tablets. Google Text-to-Speech, distributed by Google as part of the Speech Recognition and Synthesis app, is the text-to-speech engine built into most Android devices, including Samsung devices, which also ship Samsung's own engine.
Since the text-to-speech engine is vital to a screen reader (the screen reader passes text to the engine, which produces the spoken audio), tight integration between TalkBack, the built-in screen reader, and Google TTS should mean a more seamless user experience. However, a growing number of TalkBack features work only when Google TTS is selected as the engine, and the list expands with each new TalkBack version. This leaves behind users who don't rely on Google TTS, or who use it through intermediary tools like AutoTTS.
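For readers curious about the plumbing, here is a minimal sketch of how any Android app, a screen reader included, hands text to whichever engine the user has selected. It uses the standard Android `TextToSpeech` API; the class name, comments, and utterance ID are my own illustration, not TalkBack's actual code:

```kotlin
import android.content.Context
import android.speech.tts.TextToSpeech

class Speaker(context: Context) {
    // With no engine specified, the system routes speech to the user's
    // default engine, which may be Google TTS, Samsung TTS, or any
    // third-party engine. An app can also pin a specific engine by
    // passing its package name as a third constructor argument.
    private val tts = TextToSpeech(context) { status ->
        if (status == TextToSpeech.SUCCESS) {
            // Engine is ready; text can now be forwarded for speaking.
        }
    }

    fun speak(text: String) {
        // The same call works regardless of which engine is installed.
        tts.speak(text, TextToSpeech.QUEUE_FLUSH, null, "utterance-1")
    }
}
```

The point of the sketch is that the engine behind this API is interchangeable by design, which is exactly why engine-specific behavior in TalkBack stands out.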
This article discusses the current TalkBack features that only work with Google TTS and explains why this deep integration causes more harm than benefit.
Current Forms of Exclusive Collaboration
Proofreading was the first TalkBack feature to work only with the Google TTS engine, and more features have joined it as new versions of TalkBack are released.
Spelling Out Suggestions When Using Proofreading
When navigating spelling suggestions through the custom actions on a spelling mistake, each suggestion is spelled out letter by letter. This is very useful, since some words share the same pronunciation despite being spelled differently. However, users who don't use Google TTS as their default text-to-speech engine are deprived of the spellings and only hear each suggested word pronounced.
Punctuation Reading Level Granularity
It took Google a long time to introduce a form of granular punctuation reading, with available options being All, Most, and Some. Similar to spelling suggestions, this setting does not work as expected when using any TTS engine other than Google TTS. If the level is set to All, you might notice additional pauses when using certain TTS engines, but the actual punctuation is not read aloud.
Grouping Repeated Symbols
With TalkBack 16.0, Google added the option to group any sequence of four or more repeated symbols or emojis together instead of reading them individually, which could be tedious, especially in social media posts or messages where people might literally throw 20 smiling or angry faces in the middle of a message. As useful as it is, it is disappointing that this feature works only with Google TTS. If you use another TTS engine, you end up hearing the same emoji 20 times before you can continue reading the rest of the text.
Resuming from Where TalkBack Stopped with the Pause Gesture
By default, tapping the screen with two fingers while TalkBack is reading pauses speech (similar to pressing the Shift key in a Windows screen reader like NVDA), and the same gesture resumes it. With Google TTS, TalkBack resumes from the word it stopped at; with any other TTS engine, TalkBack repeats the whole passage from the beginning.
Adding the Emoji Label When Reading Emojis
While not as important as the previously mentioned features, this is still an example of Google TTS exclusivity. In TalkBack 16.1, TalkBack says the word “emoji” after reading an emoji. This label is missing when using other text-to-speech engines.
Exclusive Voice Quality That Works Only When Using TalkBack
The different voice quality tiers in Google Text-to-Speech have always been a source of confusion, and Google has not taken the time or effort to clarify them. A few years ago, Google added higher-quality versions of the available voices across many languages, yet TalkBack users couldn't benefit from these voices and were left with the older ones unless they opted for tools like AutoTTS.
A few months ago, a shift in strategy started to emerge. Some TalkBack users began noticing a difference in the voices they heard while using TalkBack. The scope of users noticing this was very limited, and many were unsure exactly what was happening or how to get the new voices on their devices. To add to the confusion, the new voices were temporarily pulled, and users returned to hearing the old voices.
After this halted rollout, it resumed, and this time all TalkBack users should be able to access the new voices, which are available only for English (United States) at the time of writing. Whether the new voices are better than what was already available is subjective, but the important point is that only TalkBack can use these voices. Other screen readers and apps that use text-to-speech engines are limited to the older voices that TalkBack previously couldn’t access.
These older voices are also known to lose their improved quality once the speech rate exceeds a certain level, something not observed with the new voices in TalkBack. To make matters worse, users have no choice over which voice quality to use, as Google TTS decides everything based on the app in use.
Why the Trend of Exclusive Features Is Concerning
On the surface, better integration between two Google apps might seem like a positive development. In fact, I am among those who repeatedly criticize Google for miscommunication between teams, which often results in unjustified accessibility issues and missing features in Google apps. In the case discussed in this article, this integration threatens freedom of choice, causes particular harm to multilingual users, and contradicts the generally inclusive and diverse nature of the built-in screen reader.
Freedom of Choice
Google TTS is not the only text-to-speech engine available for Android. There are third-party engines, both free and paid. For blind users who rely on speech, it is often very important to find a TTS engine that satisfies their hearing preferences or works well with their specific hearing impairments. TTS engines vary in voices, quality, how they perform at higher speeds, and responsiveness.
I am not revealing a secret when I say that Google TTS, despite ongoing improvements, is not among the most responsive TTS engines. It can struggle when encountering large chunks of text, sometimes taking a while to recover and return to its normal state on certain devices. Moreover, making certain features or announcements work only with Google TTS means that users who prefer other TTS voices are left either unable to use those features or forced to switch to Google TTS.
Multilingual Users
Although Google TTS has a form of automatic language detection, it is far from useful or reliable for most users. For example, I need to read and interact with texts in two languages daily, and Google TTS’s language detection feature cannot meet my needs. If I rely on it, I might miss texts that Google TTS ignores completely, or have to wait while typing for it to recognize that I switched languages—especially since the language I use alongside English cannot be pronounced by the Google TTS English voice.
For users like me, third-party tools such as AutoTTS are the go-to choice. These tools not only provide better language detection but also allow more than one TTS engine to be used at the same time; for example, I can set one engine for English and another for my second language. However, even when a Google TTS voice is used through a tool like AutoTTS, TalkBack talks to AutoTTS rather than directly to Google TTS, so the TalkBack features that depend on Google TTS are not triggered, even though the audio ultimately comes from a Google TTS voice.
In this situation, a multilingual user who cannot rely on Google TTS is forced to choose between losing exclusive features or facing a serious disruption to their Android experience.
The Nature of the Default Screen Reader
Although TalkBack is developed by Google, it is designed to work on a variety of devices produced by different manufacturers. While nothing prevents Google from giving its TTS engine special treatment, the logic behind a screen reader built for everyone is different. A blind user has the right to expect that the screen reader—the basic tool needed for daily navigation—does not discriminate against TTS engines. It should deliver speech to all engines equally so they can each convert text into spoken audio.
We should also remember that many people associate Android with openness. An open platform should not create barriers within one of its essential components for a specific group of users who rely on this tool to fully benefit from their devices.
The Controlled Environment Excuse
Because both Google TTS and TalkBack are developed by the same company, Google has full control over the development and modifications of both. This control allows the company to ensure that spoken information is delivered as intended, that certain voices are responsive enough to work well with TalkBack, and that new features can be tested thoroughly in a managed environment. Testing all the different third-party engines and tools individually is not a practical option.
However, this does not justify reserving all features and announcements exclusively for Google TTS or making certain voice qualities work only with TalkBack and no other tools. Developers should meet basic standards to ensure, at least in theory, an optimal experience across all engines. If issues arise, it becomes the responsibility of the TTS engine developers to address them. A simple warning that results may vary is far preferable to depriving all engines of certain texts or features.
Final Remarks
The TalkBack screen reader plays an important role in the lives of many blind Android users, who may or may not prefer the Google TTS engine. Passing certain announcements only to Google TTS, or making specific TTS voices work only with TalkBack, could leave many users at a disadvantage simply because they are unwilling or unable to use Google TTS as their system engine.
Blind users already face enough accessibility challenges, so adding artificial hurdles is neither expected nor welcomed.
As a blind user who has spent years with Android and various TTS engines, I find it alarming to see what began as a welcome handshake between a screen reader and a TTS engine evolve into a much deeper exclusive coupling, leaving all other engines and TTS-related tools aside.
This may sound overly skeptical, but with this continuing trend, isn't there a serious possibility that TalkBack could eventually refuse to work with anything except Google TTS?

While TalkBack's responsiveness has improved immensely in recent versions, the same cannot be said of Google Speech Recognition and Synthesis. This is something Google needs to rework as a matter of urgency.
I have since checked Samsung text-to-speech and found that it does read the word "emoji" when an emoji is inserted, as is the case with TalkBack 16.1.
Thank you for sharing your test result. However, the problem with Samsung TTS is that, starting with One UI 7, it cannot be used with screen readers other than Samsung TalkBack, and Samsung TalkBack has not yet been updated to version 16.1.
There are new higher quality voices for some Indian languages also.