LG Electronics and Nvidia are in talks on robotics, AI data centres


The discussions, triggered by a visit from Nvidia’s Madison Huang, would deepen LG’s physical AI ambitions and give Nvidia another major consumer electronics partner at a moment when physical AI is moving from lab to factory floor.


LG Electronics confirmed on Wednesday that it has been in discussions with Nvidia over potential cooperation in three areas: robotics, AI data centres, and mobility.

The announcement, reported by Reuters, came after Madison Huang, Nvidia’s senior director for physical AI platforms and the eldest daughter of CEO Jensen Huang, visited LG Electronics’ headquarters in Yeouido, Seoul, as part of a tour of several major South Korean technology companies. LG CEO Ryu Jae-cheol attended the meeting in person.

No formal agreement has been announced. The talks are at an exploratory stage, and no specific products, investment amounts, or timelines have been confirmed. But the three areas under discussion map precisely onto both companies’ most publicised strategic priorities, and the breadth of the conversation signals this is more than a courtesy call.

What each side brings to the table


For LG, the strategic logic is straightforward. The company is one of the world’s largest home appliance manufacturers, but its growth thesis has shifted decisively towards AI-powered physical systems.

At CES 2026 in January, LG unveiled CLOiD, a home robot with two articulated arms, seven degrees of freedom per arm, and five individually actuated fingers per hand. CLOiD is the physical expression of what the company calls its ‘Zero Labor Home’ vision, in which connected robots and appliances automate the manual and cognitive load of household tasks.

LG’s broader CES presentation framed its AI strategy around three pillars: device excellence, an orchestrated smart home ecosystem, and expansion into AI-defined vehicles and AI data centre HVAC solutions.

The CLOiD robot runs on LG’s own ‘Affectionate Intelligence’ platform, which handles contextual awareness, natural interaction, and continuous learning from the home environment.

What it does not have is Nvidia’s Isaac robotics stack: the simulation environment, the pre-trained manipulation models, the Omniverse-based digital twin infrastructure, and the GPU compute optimised for real-time physical AI inference that Nvidia has been building out over the past two years.

Integrating Nvidia’s physical AI platform with CLOiD would give LG what every other serious robotics company is currently racing to access: a proven development-to-deployment pipeline that can compress the time between prototype and production.

For Nvidia, the attraction is consumer scale. Its existing robotics partnerships, including the Siemens factory trial, in which Humanoid’s HMND 01 Alpha robot, running on Nvidia’s physical AI stack, completed eight hours of live logistics operations at a factory in Erlangen, are concentrated in industrial and enterprise settings.

LG would represent a different category entirely: a company with mass-market distribution, a global installed base of connected home appliances through its ThinQ ecosystem, and specific plans to put a robot in people’s homes.

If Nvidia’s Isaac platform becomes the AI stack inside CLOiD, it gains access to one of the most data-rich training environments imaginable: real homes, real tasks, real variability.

The robotics thread is the most visible, but the data centre and mobility conversations are arguably of greater near-term commercial significance.

On data centres: LG’s CES presentation explicitly positioned the company as a provider of high-efficiency HVAC and thermal management solutions for AI data centres, a product category that is exploding in relevance as the power density of GPU clusters makes conventional cooling infrastructure inadequate.

Nvidia’s data centre business, which accounted for the overwhelming majority of its record revenues over the past two years, sits at the centre of the world’s most important AI infrastructure deployments.

A partnership on data centre thermal management would position LG as a hardware supplier inside Nvidia’s ecosystem at the infrastructure level, complementing the AI compute layer rather than competing with it.

On mobility: both companies have well-established automotive AI programmes that are logical fits for collaboration. Nvidia’s DRIVE platform is among the most widely deployed AI computing systems in autonomous and semi-autonomous vehicles.

LG’s automotive components division, which produces in-vehicle infotainment, camera systems, EV components, and what it calls ‘AI-powered in-vehicle solutions’ including gaze-tracking, adaptive displays, and multimodal generative AI platforms, is one of the company’s fastest-growing segments.

The two companies are already operating in adjacent layers of the same vehicle; a formal collaboration would potentially integrate LG’s in-cabin AI experience layer with Nvidia’s DRIVE compute platform.

Wednesday’s announcement is the latest signal that the physical AI race is accelerating beyond the controlled trials of the past two years into commercial partnership structures. Physical AI refers to AI deployed in robots and autonomous systems operating in the real world, as distinct from software models running in the cloud.

For example, Sereact raised $110 million to scale AI that makes any robot adaptable, underscoring how capital is flowing into the intelligence layer of the robotics stack. The Siemens–Nvidia factory deployment demonstrated that physical AI can run in live production environments; the LG talks suggest it is now extending into the consumer home.

For Nvidia, the expansion of physical AI partnerships beyond purely industrial settings into consumer electronics is strategically significant. The company’s Omniverse and Isaac platforms are designed to be the universal development infrastructure for physical AI, in the same way its GPU architecture became the universal infrastructure for cloud AI.

Every major robotics company that adopts the Nvidia stack strengthens that position. LG, with its scale in home appliances and its explicit commitment to bringing robots into the home, is a materially different kind of partner than a German factory or a logistics warehouse, and potentially a much larger one.








The Samsung Keyboard supports glide typing, voice dictation, multiple languages, and deep customization through Good Lock. On paper, it’s a very capable and perfectly functional keyboard. However, it’s only when I started using it that I realized great features don’t necessarily translate to a great user experience. Here’s every problem I faced with the Samsung Keyboard, and why I’m permanently sticking with Gboard as my main Android keyboard.

I have been using Gboard and the Samsung Keyboard on a recently bought Galaxy S24, which I got at a massive discount.

Google’s voice typing doesn’t cut me off mid-sentence

Fewer corrections, fewer cutoffs, faster dictation

I might be a professional writer, but I hate typing—whether it’s on a physical keyboard or a virtual one. I type slower than I think, which I suspect is true for most people. That becomes a problem when I have multiple ideas in my head and need to get them down fast. It’s happened far too often: I start typing one idea and forget the other. Since jacking my brain into a computer isn’t an option (yet), I’ve been leaning more and more on voice typing as the fastest way to capture my thoughts.

Now, both Samsung Keyboard and Gboard support voice typing, but I’ve noticed that Gboard with Google’s voice engine is just better at transcription accuracy. It picks up on accents flawlessly and manages to output the right words. In my experience, it also seems to have a more up-to-date dictionary. When I mention a proper noun—something recently trending like a video game or a movie name—Samsung’s voice typing fails to catch it, but Google nails it.

That said, you can choose Google as your preferred voice typing engine inside Samsung Keyboard, but it’s a buggy experience. I’ve noticed that the transcription gets cut off while I’m in the middle of talking—even when I haven’t taken a long pause. This can be a real problem when I’m transcribing hands-free.

Gboard offers a more accurate glide typing experience

Google accurately maps my swipe gestures to the right words

Voice typing isn’t always possible, especially when you’re in a crowded place and want to be respectful (or secretive). At times like these, I settle for glide (or swipe) typing. It’s generally much faster than tapping on the keyboard—provided the prediction engine maps your gestures to the right word. If it doesn’t, you have to delete that word, draw that gesture again, or worse—type it out manually.

Now, both Samsung Keyboard and Gboard support glide typing, but I’ve noticed Gboard is far more accurate. That said, when I researched this online, I found a 50-50 divide—some people say Gboard is more accurate, others say Samsung is. I do have a theory on why this happens.

Before my Galaxy S24, I used a Pixel 6a, before that a Xiaomi, and before that a Nokia 6.1 Plus. All of my past smartphones came with Gboard by default. I believe Gboard learned my typing patterns over time—what word correlates to what gesture, which corrections I accept, and which ones I reject. After a decade of building up that prediction model, Gboard knows what I mean when my thumb traces a particular shape. Samsung Keyboard, on the other hand, is starting from zero on this Galaxy S24—leading to all the prediction errors. At least that’s my working theory.

There’s also the argument for muscle memory. While glide typing, you need to hit all the correct keycaps for the prediction engine to work. If you’re even off by a slight amount, the prediction model might think you meant to hit “S” instead of “W.” Now, because of my years of typing on Gboard, it’s likely that my muscle memory is optimized for its specific layout and has trouble adapting to Samsung’s.


I mix three languages in one message, and Gboard just gets it

Predictive multilingual typing doesn’t get any better than this

I’m trilingual—I speak English, Hindi, and Bengali. When I’m messaging my friends and family, we’re basically code-mixing—jumping between languages in the same sentence using the Latin alphabet. Now, my friends and I have noticed that Gboard handles code-mixing much more seamlessly than Samsung Keyboard.

If you just have the English dictionary enabled, neither keyboard can guess that you’re trying to transliterate a different language into English. It’ll always try to autocorrect everything, which breaks the flow. The only way to fix this is by downloading a transliteration dictionary like Hinglish (Hindi + English) or Bangla (Latin). Both Samsung Keyboard and Gboard support these dictionaries, but the problem with Samsung Keyboard is that it can only use one dictionary at a time.

Let’s say I’m writing something in Latinized Bangla and suddenly drop a Hindi phrase. Samsung Keyboard will attempt to autocorrect those Hindi words. Gboard is more context-aware. Since my Hinglish keyboard is already installed, I don’t have to manually switch to it. Gboard can detect that I’m using a Hindi word even with the English or Bangla keyboard enabled, and it won’t try to autocorrect what I’m writing. This also works flawlessly with glide typing, which is a huge quality-of-life improvement over Samsung Keyboard.

This isn’t just an India-specific thing either. Code-mixing is how billions of people type every day—Spanglish in the US, Taglish in the Philippines, Franglais across parts of Europe and Africa.

Gboard looks good without me spending an hour on it

I don’t have time for manual customization

Samsung Keyboard is hands down the more customizable option, especially if you combine it with the Keys Cafe module inside Good Lock. You get granular control over almost every aspect of the keyboard—key colors, keycaps, gesture animations, and a whole lot more. For some users, this is heaven; I just find it overcomplicated and a massive time sink.

I don’t have the patience to sit and adjust every visual detail of my keyboard. Sure, it gets stale after a while, and you’d want to freshen it up, but I don’t want to spend the better part of an hour tweaking a virtual keyboard. This is where Gboard wins (at least for me) by doing less.

Android 16 brings Material 3 Expressive, which automatically themes your system apps using your wallpaper’s color scheme. With Gboard, all you have to do is change the wallpaper, and the keyboard updates to match—no Good Lock, no manual color picking. It’s a cleaner, more seamless way to keep your phone looking good without putting in the extra legwork.


The keyboard you don’t think about is the one that’s working

I didn’t switch to Gboard because Samsung Keyboard was broken. I switched because Gboard made typing feel effortless. If you’re a Samsung user who’s never tried it, it’s a free download and a five-second switch. You might not go back either.



