Google has released a new dictation application for iOS that processes speech entirely on a user’s device. The app, called Google AI Edge Eloquent, represents a significant move into the competitive field of AI-powered transcription. It launched quietly on April 7, 2026, with an Android version referenced but not yet available.
How Google’s Offline Dictation App Works
The core function of Google AI Edge Eloquent is local speech-to-text conversion. Users download the app and then download its automatic speech recognition models, which are based on Gemma, Google's family of open AI models. Once installed, the app transcribes speech without needing an internet connection.
This local processing is a key differentiator. Many voice assistants and transcription services rely on cloud servers. Google’s app keeps data on the device. According to the App Store description, this approach is designed for privacy and reliability.
The app provides a live transcription view. When a user pauses, the software automatically edits the text. It removes filler words like “um” and “ah.” It also attempts to polish mid-sentence corrections. The goal is to produce clean, usable prose from natural speech.
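Stripping fillers like "um" and "ah" is the simplest part of this cleanup. The sketch below is not Google's implementation, just a minimal regex-based illustration of the idea, assuming a plain-text transcript as input; rewriting mid-sentence self-corrections, as the app attempts, requires a language model rather than pattern matching.

```python
import re

# Match an optional preceding comma, the filler word, and trailing punctuation.
FILLERS = re.compile(r",?\s*\b(?:um+|uh+|ah+|er+)\b[,.]?", re.IGNORECASE)

def clean_transcript(text: str) -> str:
    """Remove common filler words and tidy the spacing left behind."""
    cleaned = FILLERS.sub("", text)
    cleaned = re.sub(r"\s{2,}", " ", cleaned)         # collapse double spaces
    cleaned = re.sub(r"\s+([,.?!])", r"\1", cleaned)  # no space before punctuation
    return cleaned.strip()

print(clean_transcript("So, um, the meeting is, uh, moved to Friday."))
# → So the meeting is moved to Friday.
```

A regex pass like this handles the mechanical cases; polishing a correction such as "Tuesday, I mean Friday" into "Friday" is where the on-device model earns its keep.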
Features and Competitive Sector
Google’s entry challenges established apps like Wispr Flow, SuperWhisper, and Willow. These apps have gained users as speech recognition accuracy has improved. Industry watchers note that Google’s vast resources in AI research give it a potential advantage.
The Eloquent app includes several text transformation tools. Below each transcript, options appear for “Key points,” “Formal,” “Short,” and “Long” edits. These use AI to reformat the transcribed content for different purposes.
Users can also import specific terms from their Gmail account. This feature is optional. It allows the app to better recognize personal jargon, names, and keywords. A custom word list is also available for manual additions.
For analytics, the app tracks transcription history. It shows words dictated in the last session, words-per-minute speed, and total word counts. A search function lets users find past transcripts.
The Cloud Mode Toggle
While built for offline use, the app includes a cloud mode. When this is enabled, it uses Google’s cloud-based Gemini models for text cleanup. This suggests a hybrid approach. Simple transcription happens on-device, while more complex language tasks can optionally use more powerful cloud models. Users maintain control over when data leaves their phone.
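The hybrid pattern described above, local transcription always, cloud polish only on opt-in, is a common mobile design. The sketch below is hypothetical: the function names, `Settings` object, and toggle are illustrative stand-ins, not Google's API. It shows how an app might route the text-cleanup step based on a user-controlled setting.

```python
from dataclasses import dataclass

@dataclass
class Settings:
    cloud_mode: bool = False  # user-controlled toggle; off by default

def polish_on_device(text: str) -> str:
    # Placeholder for a lightweight local cleanup pass.
    return " ".join(text.split())

def polish_in_cloud(text: str) -> str:
    # Placeholder for a call to a larger hosted model.
    # In a design like this, it is the only path where text leaves the phone.
    return " ".join(text.split())

def polish(text: str, settings: Settings) -> str:
    """Route cleanup locally unless the user has opted into cloud mode."""
    if settings.cloud_mode:
        return polish_in_cloud(text)
    return polish_on_device(text)
```

The design choice worth noting is the default: privacy-preserving local processing is the baseline, and the network path is an explicit opt-in rather than a fallback the app takes silently.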
Implications for Privacy and the Market
The launch of an offline-first AI app is notable. Data privacy concerns have grown among consumers and regulators. Processing data locally addresses some of these worries. It also allows functionality in areas with poor connectivity.
For investors, the takeaway is a potential shift in how AI is deployed. Large tech companies are exploring ways to run sophisticated models directly on consumer hardware. This reduces server costs and latency, and it creates a new selling point for powerful smartphones.
The app’s description referenced “streamlined Android integration” as a future feature, including the ability to set the app as a default keyboard for system-wide access. A floating button for quick transcription, similar to Wispr Flow’s Android feature, was also noted. However, the App Store listing was later updated, removing the Android reference and stating only that an iOS keyboard is “coming soon.” This suggests development plans may be in flux.
Analysis: Google’s Strategic Play
This appears to be an experimental release. Google has not announced the app widely. TechCrunch first reported its appearance. The company often tests new concepts through smaller, standalone apps before integrating successful features into its core products.
The use of the Gemma model is significant. Gemma is Google’s family of open models. Deploying it in a consumer app is a real-world test of its capabilities, and success here could lead to wider adoption of Gemma across Google’s ecosystem.
This test could signal future improvements to Google’s voice typing features on Android. The built-in voice input on Android devices is widely used but requires an internet connection for full accuracy. A successful local model could change that.
Industry analysts point to the growing demand for hands-free content creation. From journalists and students to professionals with accessibility needs, the market for reliable dictation is expanding. Google is positioning itself to capture a segment of that market with a privacy-focused tool.
Conclusion
Google’s new AI dictation app, Eloquent, is a quiet but important step into offline speech recognition. By processing audio locally, it offers privacy and constant availability. Its text-polishing features aim to bridge the gap between casual speech and written text. While currently an iOS-only experiment, its development will be closely watched. A successful launch could reshape expectations for voice-enabled applications and influence how AI is built into our daily tools.
FAQs
Q1: Is the Google AI Edge Eloquent app free?
The app is free to download from the iOS App Store. There is no indication of subscription fees or in-app purchases in its current description.
Q2: Does the app work completely without the internet?
Yes, for core dictation. Once the speech recognition models are downloaded, transcription happens on the device. An optional “cloud mode” uses the internet for advanced text polishing.
Q3: What AI model does the app use?
The app uses Google’s Gemma-based automatic speech recognition models for local transcription. When cloud mode is on, it uses the larger, cloud-based Gemini models.
Q4: Is there an Android version available?
Not yet. The initial App Store description referenced Android integration, but that was later removed. The listing now only mentions an iOS keyboard is “coming soon.” An Android release appears to be planned but not immediately available.
Q5: How does this app differ from Google’s existing voice typing?
Google’s standard voice typing in apps like Docs or Gboard often requires an internet connection for full functionality and accuracy. Eloquent is designed as a standalone, offline-first application with specific features for editing and polishing spoken-word transcripts.
This article was produced with AI assistance and reviewed by our editorial team for accuracy and quality.
