Apple is opening Siri up to outside AI models, but there’s only one that makes sense to me


Apple promised us a smarter, more capable Siri at WWDC 2024. The pitch was compelling: a Siri that understands your personal context, digs through your messages and emails, performs actions inside your apps, and evolves into a true assistant. 

Two years later, that dream remains a dream. But here’s the thing that might change the course of Apple’s assistant: according to reports, Siri is no longer tied to a single AI brain. Apple is building it to be flexible, capable of routing requests to whichever external model does the job best.

That raises an obvious question: if Siri can use any AI, which one should it use? Right now, the default external model is ChatGPT. But I’d argue that Gemini is the more logical choice, and here’s why.
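As a thought experiment, the routing idea itself is simple enough to sketch. The snippet below is purely hypothetical (Apple has published no such API, and the backend names and keyword classifier are my own stand-ins), but it shows what “routing requests to whichever model does the job best” could look like: classify the intent of a request, then dispatch it to a registered backend.

```python
# Hypothetical sketch of per-request model routing -- not a real Apple API.
from typing import Callable, Dict

# Stand-ins for real external model clients.
def ask_gemini(query: str) -> str:
    return f"[Gemini] {query}"

def ask_chatgpt(query: str) -> str:
    return f"[ChatGPT] {query}"

# Each intent maps to whichever backend handles it best.
BACKENDS: Dict[str, Callable[[str], str]] = {
    "search": ask_gemini,    # search-style queries -> Google's index strengths
    "general": ask_chatgpt,  # everything else -> the current default
}

def classify(query: str) -> str:
    """Naive intent classifier: keyword matching stands in for a real model."""
    search_words = ("weather", "near me", "look up", "restaurants")
    return "search" if any(w in query.lower() for w in search_words) else "general"

def route(query: str) -> str:
    """Dispatch the query to the backend registered for its intent."""
    return BACKENDS[classify(query)](query)

print(route("look up the weather in Cupertino"))  # handled by the search backend
```

In a real assistant the classifier would itself be a small model and the backends would be API clients, but the shape of the dispatch table is the point: the default engine is just one entry that can be swapped.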

Siri is a search engine

Think about how you actually use Siri on a daily basis. You ask for the day’s weather. You ask for the closest eateries. You ask it to look things up on the web. A significant portion of Siri usage involves search or search-like queries, and no company on the planet does search better than Google.

Google has spent decades building the most powerful search engine, and that expertise now flows directly into Gemini. When you ask Gemini something, it does not just pull from a language model. It extracts data from Google’s real-time web index, Google Maps, Google Shopping, and more. 

Using that to power Siri’s search capability would take it to heights no other LLM provider can match.

Apple promised personal intelligence, but Gemini is delivering it

One of the biggest talking points from Apple’s WWDC 2024 announcement was personal intelligence. Apple showed Siri surfacing contextual information from across your apps, answering questions like “when is my mom’s flight landing?” or “show me photos of Stacy in her pink coat from New York.” 

It was genuinely impressive in demo form. In practice, though, if I ask Siri to show me a photo of myself wearing a black t-shirt, it pulls up random photos from the web of people wearing black t-shirts. I am not exaggerating when I say that Siri’s personal intelligence feature has been a colossal failure.

Meanwhile, Gemini quietly rolled out its own Personal Intelligence feature. It taps into your Gmail, Calendar, Google Photos, Drive, and more to reason across your personal data and answer complex, life-context questions. It’s not perfect, but at least it’s working.

That is almost word-for-word what Apple was demoing as a future Siri capability, except Gemini is doing it today. If Apple wants to accelerate the delivery of those features to users, Gemini might be the shortcut it needs.

Gemini already does what Siri promised

Apple Intelligence deploys a compact, capable AI model across system apps, combining on-device processing for privacy with cloud-based computing for more demanding tasks. The on-device processing and privacy aspects are what set Apple apart from the competition. But Apple is no longer alone there.

Gemini Nano is already doing this on Pixel and Samsung Galaxy devices. It powers offline summarization, smart replies, and contextual features, all without needing an internet connection. On Pixel 9 and newer, Gemini Nano is multimodal and can process images, sounds, and spoken language directly on the device.

Apple is building toward what Google has already shipped. Rather than reinventing that wheel, building on-device Siri features on top of Gemini’s existing Nano deployment would save Apple a lot of headaches and money.

Gemini’s creative toolkit is packed

Here’s where it gets genuinely exciting. Gemini is not just a text model. It comes with an entire creative ecosystem that Apple could tap into.

Veo handles video generation at up to 1080p, with cinematic styles and clips longer than a minute. Lyria, from Google DeepMind, handles music and audio generation. For images, Nano Banana (Google’s image generation model) recently received a major upgrade, with improved text rendering, subject consistency, and support for any aspect ratio.

Apple recently launched its own Creator Studio, giving users access to creative tools for a fixed monthly subscription. If the company is serious about competing with the likes of Adobe, it needs to offer generative capabilities. Guess what: Gemini already has all of those capabilities, and it would make perfect sense to fold them into Apple’s creative suite.

The partnership already exists

This point isn’t discussed enough. Google reportedly pays Apple around 20 billion dollars every year to remain the default search engine in Safari. That is one of the most valuable distribution deals in the history of tech. The relationship between Apple and Google is deep, long-standing, and financially enormous for both companies.

Extending that relationship from “Google powers Safari search” to “Gemini powers Siri’s AI features” is not a dramatic leap. It is a natural evolution of a partnership that already underpins half of what happens when you open a browser on your iPhone.

So which model would I stick with?

Claude is excellent for long-context reading and nuanced reasoning. ChatGPT has a massive ecosystem and strong coding and agent tooling. Both work great as user-chosen specialists; I use Claude on my own computer.

But as the default engine under Siri’s hood? They are not the right pick. Gemini operates at the OS level on mobile, understands search and personal context, exists in an on-device Nano form, and sits at the center of the most important commercial relationship Apple has with any tech company.

The pieces are all there. It is not a question of whether Gemini could power a smarter Siri. It is a question of whether Google and Apple can hash out a mutually beneficial deal. And if the rumors are anything to go by, things might be heading in this direction already.





Google Maps has a long list of hidden (and sometimes just underrated) features that help you navigate seamlessly. But I was never a big fan of using Google Maps for walking, until I started using the right set of features to navigate better.

Add layers to your map

See more information on the screen

Layers are an incredibly useful yet underrated feature that works across all modes of transport. They add more detail to your map beyond the default view, so you can plan your journey better.

To use layers, open the Google Maps app (Android, iPhone) and tap the layers icon in the upper right (below your profile picture and the nearby attractions options). You can switch your map type from default to satellite or terrain, and overlay details such as traffic, transit, biking, Street View (perfect for walking), and 3D buildings (called “raised buildings” on iPhone). To turn off a map detail, go back to Layers and tap it again.

In particular, adding the Street View and 3D buildings layers can help you gauge the terrain and learn more about the landscape, so you can avoid tricky paths and discover shortcuts.

Set up Live View

Just hold up your phone

A feature that can help you navigate on foot is Google Maps’ Live View. It uses augmented reality (AR) to show real-time navigation: beyond the directions on your map, you see instructions overlaid on your camera’s live view of the street. This is especially useful when traveling or exploring new areas, since it gives you walking guidance that goes beyond a 2D map.

To use Live View, search for a location on Google Maps, then tap “Directions.” Once the route appears, tap “Walk,” then tap “Live View” in the navigation options. You will be prompted to point your camera at things like buildings, stores, and signs around you, so Google Maps can analyze your surroundings and give you accurate directions.

Download maps offline

Google Maps without an internet connection

Whether you’re on a hiking trip in a low-connectivity area or want offline maps for your favorite walking destinations, having specific map routes downloaded can be a great help. Google Maps lets you download maps to your device while you’re connected to Wi-Fi or mobile data, and use them when your device is offline.

For Android, open Google Maps and search for a specific place or location. In the place sheet, swipe right, then tap More > Download offline map > Download. For iPhone, search for a location on Google Maps, tap the name or address of the place at the bottom of your screen, then tap More > Download offline map > Download.

After you download an area, use Google Maps as you normally would. If you go offline, your offline maps will guide you to your destination as long as the entire route is within the offline map.

Enable Detailed Voice Guidance

Get better instructions

Voice guidance is a basic yet powerful navigation tool that comes in handy on walks through unfamiliar areas and helps keep your journey on the right path. To ensure guidance audio is enabled, go to your Google Maps profile (upper right corner), then tap Settings > Navigation > Sound and Voice. Here, tap “Unmute” under “Guidance Audio.”

Apart from this, you can also use Google Assistant to help you along your journey, asking questions about your destination, nearby sights, detours, additional stops, etc. To use this feature on iPhone, map a walking route to a destination, then tap the mic icon in the upper-right corner. For Android, you can also say “Hey Google” after mapping your destination to activate the assistant.

Voice guidance is handy for both new and old places, like when you’re running errands and need to navigate hands-free.

Add multiple stops

Keep your trip going

If you walk regularly to run errands, Google Maps has a simple yet effective feature for planning a better route. With the multiple stops feature, you can add several stops between your current location and final destination to cut wasted time and unnecessary detours.

To add multiple stops on Google Maps, search for a destination, then tap “Directions.” Select the walking option, tap the three dots at the top (next to “Your Location”), and tap “Edit Stops.” You can now add a stop by searching for it and tapping “Add Stop,” and reorder stops as you like. Repeat this process until your route is complete, then tap “Start” to begin your journey.

You can add up to ten stops to a single route on both mobile and desktop, and the feature works for walking, driving, and cycling, but not public transport or flights. I find it an essential tool for trips to walkable cities, especially when I’m planning a route I’m unfamiliar with.


More to discover

A new feature to keep an eye out for, especially if you use Google Maps for walking and cycling, is Google’s Gemini boost, which will let you navigate hands-free and get real-time information about your journey. The feature has been rolling out to both Android and iOS users.


