In the competitive landscape of AI agents, where businesses are closing investment deals every day to build and expand their AI infrastructure and software, the companies that seem to be leading the race are OpenAI, Anthropic, Microsoft, NVIDIA, Google, and Amazon.

But despite the success of its family of large language models (LLMs), one big tech company that seems to be struggling to stay relevant is Meta.

Meta’s AI strategy is currently split between openness, scale, and control. The tech company has postponed the launch of its next step, the new AI model code-named ‘Avocado’, due to performance concerns, while opening the door to a debate within the industry about open source and profitability.

Meta’s AI strategy: Meta AI, Llama and ‘Avocado’

The big tech company led by Mark Zuckerberg has been accelerating its push into artificial intelligence with the introduction of Meta AI.

Originally launched as a chatbot integrated into WhatsApp, Instagram, Facebook, and Messenger in September 2023, the product took a significant step forward in April 2025 with the debut of a dedicated standalone app, unveiled at Meta’s LlamaCon developer conference, bringing with it a Discover Feed, voice capabilities, and deeper personalisation features.

It was designed as a consumer-facing interface for generative AI, allowing users to generate content, hold conversations, and interact through a Discover Feed, while also serving as a surface for ads within Meta’s ecosystem.

Behind the scenes, Meta AI is powered by Llama, Meta’s family of LLMs. Llama was initially launched as a tool to help researchers and others who could not access large amounts of infrastructure to study AI models, positioning itself as a way of democratizing access within the industry.

Over time, Meta has released four generations of Llama models as open-source multimodal AI systems. Additionally, Meta launched a limited preview of the Llama API (Application Programming Interface) to enable developers to connect to and use its Llama models.

But beyond Llama and Meta AI, reports suggest that Meta has been working on its next generation of AI models: ‘Avocado’. Although there has been no official statement about it, a Meta spokesperson told Reuters that the company has been working on this new frontier AI model, which differs from its previous ones.

While Llama is characterized by being open source, ‘Avocado’ would instead be proprietary, making it impossible for outside developers to freely download its weights and related software components.

Avocado isn’t the main story; what it reveals about Meta is

Meta’s change of heart undermines the differentiation factor that Zuckerberg was so proud to embrace back in 2024, the core idea behind the deployment of Llama: that open source would close the gap in AI development by allowing developers to improve it and create smaller versions.

One year after that first memo, Mark Zuckerberg shared a second one highlighting how he expected Meta to remain a leader in open source but, due to safety concerns, to be more careful about “what we choose to open source”, suggesting a reconsideration of his initial approach.

This decision can be read as Meta’s shift from a clear AI strategy to a reactive one. The first trigger was the rise of DeepSeek as a major competitor in the AI landscape: its R1 model, along with a family of smaller distilled variants built on Llama and Qwen architectures, demonstrated that open-source components could be leveraged to build highly competitive systems.

This put Meta at a significant disadvantage, since its own open-source models provided crucial leverage to a competitor. Additionally, a closed-source AI model offers economic relief for the massive investment Meta is making to expand its AI capabilities.

For instance, in June 2025, Meta invested $14.3 billion in data-labelling company Scale AI in exchange for a 49% stake, bringing Scale AI’s founder Alexandr Wang on board to lead the newly formed Meta Superintelligence Labs, the division now tasked with developing ‘Avocado’.

Disruption: Meta under pressure

However, recent developments suggest that Meta’s AI strategy may be entering a phase of uncertainty. While the company initially positioned itself as a leader in open-source models with Llama, the rollout of the latest generation has faced notable challenges.

The early reception of the new model was mixed, with some developers reporting underperformance compared to competing systems, and both reception and adoption fell short of past models.

Additionally, the release of Llama 4’s flagship model, ‘Behemoth’, expected to be a much larger “teacher model”, has been repeatedly postponed as engineers struggle to improve its capabilities.

The new frontier model ‘Avocado’ was expected to launch in March 2026, but it also seems to be struggling to come to life. A person familiar with the matter told Reuters that the release has been postponed to May or June.

The reason behind the delay, the sources said, is that ‘Avocado’ was falling short of Google’s Gemini 2.5 and Gemini 3, as well as other competitors’ models, in internal tests of reasoning, coding, and writing.

Moreover, people with knowledge of the matter said Meta’s leadership is discussing temporarily licensing Gemini from Google to power ‘Avocado’ and other of the company’s AI products, although no decisions have been made.

Implications

Taken together, this chain of events suggests not isolated issues but open questions about Meta’s long-term AI strategy. It seems that Meta is no longer executing a single, consistent AI strategy, but exploring multiple directions at once.

At the core of this shift in Meta’s AI strategy is the growing tension between openness and control. Llama’s success established Meta as a key player in the open-source AI ecosystem, enabling broad adoption and growth beyond the company.

The downside was the difficulty of maintaining a competitive edge, since competitors such as DeepSeek took advantage of the open-source models to leverage their own. The move to ‘Avocado’ suggests a strategic pivot in that sense, but it weakens Meta’s differentiation as the company offering open AI for everyone.

At the same time, the decision to move from open- to closed-source models offers a way to offset the expense of AI development. That expense stems from Mark Zuckerberg’s intense investment strategy to position Meta as one of the leaders in the competitive AI landscape, including the $600 billion committed to US AI infrastructure, data centres, energy projects, and workforce programmes by 2028.

An expenditure of that scale makes the execution challenges all the more concerning. The delays, the mixed reception of its models, and the reports of ‘Avocado’ underperforming the competition indicate that Meta is falling behind in frontier AI development and needs to close that gap soon.

The most significant signal lies in Meta’s possible dependency on Google: temporarily licensing models such as Gemini would mean outsourcing the technology that sustains Meta’s AI products, marking a fundamental shift from building core capabilities to acting as a distribution layer.

The final question is no longer whether Meta can build competitive AI models, but whether it can define a consistent, coherent strategy for them. Without that clarity, even its strategic positioning may not be enough to secure a leading position in the AI race.




Google Maps has a long list of hidden (and sometimes, just underrated) features that help you navigate seamlessly. But I was not a big fan of using Google Maps for walking: that is, until I started using the right set of features that helped me navigate better.

Add layers to your map

See more information on the screen

Layers are an incredibly useful yet underrated feature that can be utilized for all modes of transport. These help add more details to your map beyond the default view, so you can plan your journey better.

To use layers, open your Google Maps app (Android, iPhone). Tap the layer icon on the upper right side (under your profile picture and nearby attractions options). You can switch your map type from default to satellite or terrain, and overlay your map with details, such as traffic, transit, biking, street view (perfect for walking), and 3D (Android)/raised buildings (iPhone) (for buildings). To turn off map details, go back to Layers and tap again on the details you want to disable.

In particular, adding a street view and 3D/raised buildings layer can help you gauge the terrain and get more information about the landscape, so you can avoid tricky paths and discover shortcuts.

Set up Live View

Just hold up your phone

A feature that can help you set out on walks with good navigation is Google Maps’ Live View. This lets you use augmented reality (AR) technology to see real-time navigation: beyond the directions you see on your map, you are able to see directions in your live view through your camera, overlaying instructions with your real view. This feature is very useful for travel and new areas, since it gives you navigational insights for walking that go beyond a 2D map.

To use Live View, search for a location on Google Maps, then tap “Directions.” Once the route appears, tap “Walk,” then tap “Live View” in the navigation options. You will be prompted to point your camera at things like buildings, stores, and signs around you, so Google Maps can analyze your surroundings and give you accurate directions.

Download maps offline

Google Maps without an internet connection

Whether you’re on a hiking trip in a low-connectivity area or want offline maps for your favorite walking destinations, having specific map routes downloaded can be a great help. Google Maps lets you download maps to your device while you’re connected to Wi-Fi or mobile data, and use them when your device is offline.

For Android, open Google Maps and search for a specific place or location. In the place sheet, swipe right, then tap More > Download offline map > Download. For iPhone, search for a location on Google Maps, then, at the bottom of your screen, tap the name or address of the place. Tap More > Download offline map > Download.

After you download an area, use Google Maps as you normally would. If you go offline, your offline maps will guide you to your destination as long as the entire route is within the offline map.

Enable Detailed Voice Guidance

Get better instructions

Voice guidance is a basic yet powerful navigation tool that can come in handy during walks in unfamiliar locations and can be used to ensure your journey is on the right path. To ensure guidance audio is enabled, go to your Google Maps profile (upper right corner), then tap Settings > Navigation > Sound and Voice. Here, tap “Unmute” on “Guidance Audio.”

Apart from this, you can also use Google Assistant to help you along your journey, asking questions about your destination, nearby sights, detours, additional stops, etc. To use this feature on iPhone, map a walking route to a destination, then tap the mic icon in the upper-right corner. For Android, you can also say “Hey Google” after mapping your destination to activate the assistant.

Voice guidance is handy for both new and old places, like when you’re running errands and need to navigate hands-free.

Add multiple stops

Keep your trip going

If you walk regularly to run errands, Google Maps has a simple yet effective feature that can help you plan your route in a better way. With Maps’ multiple stop feature, you can add several stops between your current and final destination to minimize any wasted time and unnecessary detours.

To add multiple stops on Google Maps, search for a destination, then tap “Directions.” Select the walking option, then tap the three dots on top (next to “Your Location”), and tap “Edit Stops.” You can now add a stop by searching for it and tapping “Add Stop,” and reorder the stops at your convenience. Repeat this process by tapping “Add Stops” until your route is complete, then tap “Start” to begin your journey.

You can add up to ten stops in a single route on both mobile and desktop, and use the journey for multiple modes (walking, driving, and cycling) except public transport and flights. I find this Google Maps feature to be an essential tool for travel to walkable cities, especially when I’m planning a route I am unfamiliar with.
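If you plan routes ahead of time, the same multi-stop idea can be expressed as a shareable link using Google’s documented Maps URLs scheme, where intermediate stops go in a `waypoints` parameter joined by the `|` character. Below is a minimal Python sketch; the place names are purely illustrative, and the helper function name is my own:

```python
from urllib.parse import urlencode

def walking_route_url(origin, destination, stops):
    """Build a shareable Google Maps walking-directions link.

    Uses the Google Maps URLs scheme (api=1), where multiple
    waypoints are joined with the '|' character; urlencode
    percent-encodes it, which Maps accepts.
    """
    params = {
        "api": 1,
        "origin": origin,
        "destination": destination,
        "travelmode": "walking",
    }
    if stops:
        params["waypoints"] = "|".join(stops)
    return "https://www.google.com/maps/dir/?" + urlencode(params)

# Example: a walking route with two stops along the way
url = walking_route_url(
    origin="Trafalgar Square, London",
    destination="Tower Bridge, London",
    stops=["St Paul's Cathedral", "Borough Market"],
)
print(url)
```

Opening the resulting link on a phone hands the route, stops included, straight to the Google Maps app, which is handy for sharing a planned walk with a travel companion.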


More to discover

A new feature to keep an eye out for, especially if you use Google Maps for walking and cycling, is Google’s Gemini boost, which will allow you to navigate hands-free and get real-time information about your journey. This feature has been rolling out for both Android and iOS users.


