AI News Summary — Week 14.2, 2026

Friday brings us more exciting AI developments! This week we see Microsoft's continued push into AI models, a significant funding round for AI-powered chip design, and Google's new open-source offering. Let's dive in.

Microsoft Launches Three New MAI Models

Microsoft has announced three new foundational AI models developed by its Microsoft AI division, signaling the company's continued push to build its own stack of multimodal AI models — even while maintaining its partnership with OpenAI.

The three new models are:

  • MAI-Transcribe-1: A speech transcription model supporting 25 languages, running 2.5x faster than Azure Fast at $0.36 per hour
  • MAI-Voice-1: An audio generation model producing 60 seconds of audio per second of processing, priced at $22 per million characters
  • MAI-Image-2: A video generation model available for $5 per million input tokens and $33 per million output tokens
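
Taken at face value, the list prices above are straightforward to compare. Here's a back-of-the-envelope sketch (the prices and the 60x real-time figure come from the announcement; the helper names and example workloads are our own):

```python
# Back-of-the-envelope cost helpers for the three MAI models,
# using the list prices quoted above.

def transcribe_cost(audio_hours: float) -> float:
    """MAI-Transcribe-1: $0.36 per hour of audio."""
    return audio_hours * 0.36

def voice_cost(characters: int) -> float:
    """MAI-Voice-1: $22 per million characters."""
    return characters / 1_000_000 * 22

def voice_processing_seconds(audio_seconds: float) -> float:
    """MAI-Voice-1 generates 60 seconds of audio per second of processing."""
    return audio_seconds / 60

def image_cost(input_tokens: int, output_tokens: int) -> float:
    """MAI-Image-2: $5 per million input tokens, $33 per million output tokens."""
    return input_tokens / 1_000_000 * 5 + output_tokens / 1_000_000 * 33

# Example workloads (hypothetical): a 100-hour audio archive,
# a 5,000-character voice script, and a 10k-in / 2k-out generation.
print(transcribe_cost(100))
print(voice_cost(5_000))
print(voice_processing_seconds(600))  # 10-minute clip
print(image_cost(10_000, 2_000))
```

At these rates, a 100-hour transcription job comes in at $36 — the kind of pricing that makes the "commoditizing" framing below hard to argue with.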

These models were developed by Microsoft's MAI Superintelligence team, formed in November 2025 and led by Mustafa Suleyman. Suleyman emphasized Microsoft's "Humanist AI" approach: putting humans at the center and optimizing for how people actually communicate.

A key differentiator? Microsoft claims these models are cheaper than comparable offerings from Google and OpenAI. The company reaffirmed its commitment to the OpenAI partnership while simultaneously building out its own model portfolio.

Cognichip Raises $60M to Use AI for Chip Design

Cognichip, a startup building AI models to assist engineers in designing computer chips, has raised $60 million in funding led by Seligman Ventures, with notable participation from Intel CEO Lip-Bu Tan.

The problem being solved: Advanced chips take 3-5 years from conception to mass production, with the design phase alone taking up to 2 years. The latest Nvidia GPUs contain 104 billion transistors — that's an enormous amount to coordinate.

Cognichip's AI approach aims to reduce chip development costs by more than 75% and cut timelines in half. The company uses its own model trained specifically on chip design data rather than adapting general-purpose LLMs. That required building proprietary datasets, including synthetic data, and licensing data from partners, since chip designers guard their IP closely (unlike software developers, who often share code openly).
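
Those headline claims are concrete enough to sanity-check. A toy illustration (the 75% and 2x figures are Cognichip's claims; the baseline cost and timeline are hypothetical examples):

```python
# Toy illustration of Cognichip's claimed reductions:
# >75% lower development cost, timelines cut in half.
# Baseline figures below are hypothetical examples.

def projected_cost(baseline_cost: float, reduction: float = 0.75) -> float:
    """Development cost after the claimed reduction (75% by default)."""
    return baseline_cost * (1 - reduction)

def projected_timeline(baseline_years: float) -> float:
    """Conception-to-production timeline after the claimed halving."""
    return baseline_years / 2

# A chip that today costs $500M and takes 4 years to reach mass production
print(projected_cost(500e6))    # 125000000.0
print(projected_timeline(4.0))  # 2.0
```

In other words, a 4-year, $500M program would become a 2-year, $125M one — which explains why investors frame this as an infrastructure play.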

The company is competing against established players like Synopsys and Cadence Design Systems, as well as well-funded startups like ChipAgents ($74M) and Ricursive ($300M).

"Current capital into AI infrastructure is the largest I've seen in 40 years," said Umesh Padval of Seligman Ventures. "If it's a super cycle for semiconductors, it's a super cycle for companies like Cognichip."

Google Releases Gemma 4 Open Models

Google has unveiled Gemma 4, its most capable open-source model family to date, designed for advanced reasoning and agentic workflows.

Gemma 4 is being released under a more permissive Apache 2.0 license, positioning it as Google's answer to the growing competition from Chinese open-weights models. The release focuses on making the models more capable for coding tasks and autonomous agent workflows.

This continues Google's strategy of offering both frontier models (Gemini) and accessible open models (Gemma) to cater to different developer needs.

What's Hot This Week

  • Microsoft's multi-model strategy is becoming clear — betting on both OpenAI partnership and its own MAI models
  • AI-powered chip design is getting serious funding — Cognichip's $60M round shows the infrastructure play
  • Open models are heating up — Google's Gemma 4 release is partly a response to Chinese open-weights competition
  • Voice and transcription are commoditizing — Microsoft's $0.36/hour transcription pricing shows how fast this space is maturing

That's a wrap for this week's AI news. Have a great weekend!


Sources: TechCrunch, The Register, Microsoft AI Blog, Google Blog