
Google has unveiled two new custom AI chips designed to handle the growing demands of advanced artificial intelligence, in one of the company's biggest infrastructure announcements of the year. The new processors, called TPU 8t and TPU 8i, were introduced at Google Cloud Next and are aimed at powering the next generation of AI systems.
The company says the chips were built for what it calls the "agentic era" — a shift toward AI tools that can reason, make decisions, and complete tasks with less human input. One chip is optimized for training large AI models; the other is focused on inference, the real-time work of responding to prompts and serving AI applications at scale.
The move strengthens Google’s long-running effort to reduce dependence on outside suppliers such as Nvidia, whose graphics processors have become the gold standard for AI workloads. Rather than replacing those partnerships entirely, Google appears to be building a hybrid strategy that combines in-house chips with access to third-party hardware through its cloud platform.
For businesses, the announcement signals that the race to build AI is increasingly about who controls the computing power behind the scenes — and who can deliver that power fastest, cheapest, and at global scale.
---
Related:
Broadcom Gains Fresh Attention as AI Infrastructure Demand Grows and Meta Deal Adds Momentum