
While much of the AI race has focused on chatbots and consumer tools, Google is now making a major push into something potentially far larger: giving industrial robots the ability to think, adapt, and operate more like humans inside real-world manufacturing environments.
The company is expanding its robotics ambitions by integrating its Gemini AI models into industrial automation systems through a growing network of partnerships with some of the biggest names in robotics and manufacturing. Rather than building robots itself at scale, Google appears increasingly focused on becoming the intelligence layer powering the next generation of machines.
One of the most significant developments came through a partnership with FANUC, the world’s largest industrial robot manufacturer. The collaboration allows FANUC systems to use Gemini Enterprise AI to process natural language instructions and better understand unpredictable environments — a major shift from the rigid, pre-programmed behavior that has traditionally defined factory robotics. Google is also working alongside Boston Dynamics to integrate Gemini models into the company’s Atlas humanoid robot platform, while DeepMind has partnered with Agile Robots to explore advanced AI-driven manufacturing systems.
At the center of Google’s strategy is a growing focus on what the company calls “Physical AI” — systems designed not just to generate text or images, but to interact with and understand the physical world. Its Gemini Robotics-ER models are being developed to improve spatial reasoning, motion planning, safety awareness, and real-time decision-making inside industrial settings. Combined with emerging vision-language-action systems, the technology could allow robots to see, understand, and respond to their surroundings with far greater flexibility than traditional automation systems.
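To make the vision-language-action idea concrete, here is a minimal sketch of the perceive/plan/act pattern such systems follow. Every name in it is hypothetical and purely illustrative; none of it is a real Gemini or Google API, and the keyword rule stands in for what would actually be a learned model mapping images and language to motor commands.

```python
# Hypothetical sketch of a vision-language-action (VLA) control loop.
# All names are illustrative; none of this is a real Gemini API.

from dataclasses import dataclass, field

@dataclass
class Observation:
    camera_frame: str                      # stand-in for a camera image
    joint_angles: list = field(default_factory=list)

def plan(obs: Observation, instruction: str) -> str:
    # In a real VLA model, (image, language) -> action comes from a
    # learned network; a simple keyword rule stands in for it here.
    if "pick" in instruction.lower():
        return "move_arm_and_close_gripper"
    if "stop" in instruction.lower():
        return "halt"
    return "idle"

def act(action: str) -> str:
    # Stand-in for dispatching a command to the robot controller.
    return f"executing: {action}"

obs = Observation(camera_frame="frame_001", joint_angles=[0.0, 1.2, -0.5])
print(act(plan(obs, "Pick up the housing and place it on the tray")))
```

The point of the pattern, and what distinguishes it from traditional automation, is that the instruction arrives as natural language at run time rather than as a pre-programmed motion sequence.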
The broader shift may fundamentally reshape manufacturing itself. Instead of relying on expensive hardware redesigns every time a factory changes processes, companies are increasingly moving toward software-defined robotics — machines that can adapt through AI updates rather than mechanical rebuilding. Google’s Intrinsic platform is also developing systems that allow multiple robots to coordinate tasks together using AI-optimized motion planning, potentially opening the door to smarter, more autonomous production lines.
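The coordination problem described above can be illustrated with a toy example: given several robots and several tasks, choose an assignment that minimizes total travel distance. This is only a stand-in for the kind of AI-optimized motion planning mentioned; the names and the brute-force search are assumptions for illustration, not Intrinsic's actual platform.

```python
# Toy multi-robot task coordination: assign each robot one task so that
# total travel distance is minimized. A simplified stand-in for
# AI-optimized motion planning; not Intrinsic's real API.

from itertools import permutations
from math import dist

robots = {"r1": (0, 0), "r2": (5, 5)}        # robot name -> position
tasks  = {"weld": (1, 0), "inspect": (5, 4)} # task name -> position

def best_assignment(robots, tasks):
    best, best_cost = None, float("inf")
    # Brute force over all task orderings (fine for small fleets).
    for order in permutations(tasks):
        cost = sum(dist(pos, tasks[t])
                   for pos, t in zip(robots.values(), order))
        if cost < best_cost:
            best, best_cost = dict(zip(robots, order)), cost
    return best, best_cost

assignment, cost = best_assignment(robots, tasks)
print(assignment)  # {'r1': 'weld', 'r2': 'inspect'}
```

The key property this illustrates is software-defined behavior: re-routing the fleet after a process change means re-running the planner, not rewiring or re-programming each machine.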
For years, robotics has struggled to move beyond repetitive factory work performed inside carefully controlled environments. Google's growing push into AI-powered automation signals a much larger ambition: creating machines capable of handling dynamic, unpredictable tasks across industries ranging from electronics manufacturing to logistics and advanced assembly. If successful, the next major AI revolution may happen not on screens but on factory floors.























































