Google Gemini Launches Task Automation on Android, Letting AI Handle Uber Rides and Food Deliveries
Google's AI, Gemini, now automates tasks on Android, handling things like ordering an Uber or a DoorDash delivery. The feature is available on Pixel 10 and Galaxy S26 series, running in a secure virtual environment with user oversight.
Google Gemini Automates Everyday Tasks on Android
Google has announced a significant update for its AI assistant, Gemini, introducing task automation capabilities on the Android operating system. This new feature allows Gemini to handle multi-step tasks within various apps, such as ordering a rideshare or food delivery, marking a major step towards a more integrated and intelligent mobile experience. [1] [2]
How It Works: AI in the Driver's Seat
The task automation feature works by launching the relevant app in a secure, virtual window on the user's device. Gemini then navigates the app's interface step-by-step to complete the requested task. Users can monitor the process in real time and can stop the automation or take over manually at any point. Gemini will also pause and prompt the user for input when it reaches a decision point, such as choosing between product options or substituting an item that is out of stock. For security, however, the final submission of an order or booking must always be confirmed by the user. [2]
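The flow described above (the agent acts on routine steps, the user decides at choice points, and the user must confirm the final submission) can be sketched conceptually. This is purely an illustration of the control pattern; the `Step` type, `run_automation` function, and callbacks are hypothetical and do not represent Google's actual API:

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class Step:
    kind: str                         # "action" (agent handles), "choice" (user picks), "final" (needs confirmation)
    description: str
    options: Optional[List[str]] = None

def run_automation(steps: List[Step],
                   ask_user: Callable[[Step], str],
                   confirm_final: Callable[[Step], bool]) -> List[str]:
    """Walk an app flow step by step, pausing for the user at choice
    points and requiring explicit confirmation before final submission."""
    log = []
    for step in steps:
        if step.kind == "action":
            # Routine UI navigation the agent performs on its own.
            log.append(f"agent: {step.description}")
        elif step.kind == "choice":
            # Decision point: hand control to the user.
            picked = ask_user(step)
            log.append(f"user chose: {picked}")
        elif step.kind == "final":
            # Final submission always requires explicit user consent.
            if not confirm_final(step):
                log.append("aborted: user declined final confirmation")
                return log
            log.append(f"submitted: {step.description}")
    return log

# Example run: the user picks the first option and approves the order.
steps = [
    Step("action", "open app"),
    Step("choice", "pick size", options=["small", "large"]),
    Step("final", "place order"),
]
log = run_automation(steps, lambda s: s.options[0], lambda s: True)
# log == ["agent: open app", "user chose: small", "submitted: place order"]
```

The key design point mirrored here is that the agent never reaches the terminal "submit" state without an explicit yes from the user, which is the safeguard the article attributes to the feature.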
Supported Apps and Devices
The feature is initially launching in beta with support for a selection of popular apps in categories such as rideshare and food delivery.
This functionality will first be available on the Pixel 10, Pixel 10 Pro, and the Samsung Galaxy S26 series. The initial rollout is limited to the United States and Korea. [1]
Security and User Control
Google has emphasized the security measures built into this new system. Key aspects include running apps inside a secure, virtual environment, real-time user monitoring with the ability to stop or take over at any point, and a requirement that the user confirm the final submission of any order or booking.
The Vision for an "Intelligence System"
According to Google executives, this development is part of a broader vision to transform Android from a simple operating system into an "intelligence system." The underlying technology that allows AI to automate tasks is expected to be a core part of Android 17, suggesting a wider implementation in the future. This move positions Google to compete more directly with other AI-driven assistants and automation tools, aiming to make daily life more convenient for users. [2]
---
References
[1] [Gemini can now automate some multi-step tasks on Android | TechCrunch](https://techcrunch.com/2026/02/25/gemini-can-now-automate-some-multi-step-tasks-on-android/)
[2] [Google Gemini can book an Uber or order food for you with new agentic AI features | The Verge](https://www.theverge.com/tech/884210/google-gemini-samsung-s26-pixel-10-uber)