
AI Revolution 2025: Industry Shifts and Legal Battles Unveiled
"AI is Cheap, But Your Decisions Will Cost You" – The Hidden Crisis in the Age of Instant Answers
In a captivating episode of Decoder, Jon Fortt interviews Cassie Kozyrkov, former Chief Decision Scientist at Google, to explore how generative AI is transforming the decision-making landscape. Kozyrkov, now founder and CEO of Kozyr, explains that while AI tools like ChatGPT offer rapid and inexpensive answers, they shift the burden back to humans to ask the right questions and define their priorities. The ease of access to information means less time to reflect, reassess, and refine choices — a process that was naturally built into the slower data cycles of the past.
Kozyrkov emphasizes that decision-making is more than just logic and data — it's a dance between psychology, neuroscience, and company values. Leaders must now evolve into architects of decision ecosystems, figuring out not just what to decide, but who or what gets to decide. She warns that behind every “objective” AI output lies a thick layer of subjective judgments. If organizations fail to define clear values and priorities, even the smartest AI will only amplify confusion.
In an AI-driven world, strategy is no longer about having the answers — it’s about asking the right questions. As an entrepreneur, you must train your team not to blindly follow the first suggestion from ChatGPT, but to think critically about goals, trade-offs, and what “success” truly means for your mission. Build internal frameworks for decision-making, invest in clarity of purpose, and resist the temptation to outsource core judgments. AI is your intern — not your CEO.
Source: Fortt, J. (2025, July 14). How decision-making will change when AI answers are cheap and (too) easy. Decoder. The Verge. https://www.theverge.com/decoder-podcast-with-nilay-patel/703269/cassie-kozyrkov-interview-ai-google-decision-scientist
Billion-Dollar AI Coup: Ex-OpenAI Rebels Raise $2B to Build the Future—Before OpenAI Does
Thinking Machines Lab, the new AI powerhouse founded by Mira Murati—former CTO of OpenAI—and a team of ex-OpenAI researchers, has burst out of stealth mode with a record-shattering $2 billion seed round. The company is now valued at an eye-popping $12 billion. Investors include tech titans Andreessen Horowitz, Nvidia, Accel, Cisco, and AMD. Murati’s cofounders are industry heavyweights who helped build ChatGPT and led core AI research efforts at OpenAI, making this one of the most elite tech exoduses in recent memory.
The startup is developing multimodal AI that can interact with humans not only through conversation but also through vision and collaboration. With open-source components and a mission to empower researchers and startups, Thinking Machines is positioning itself as a high-impact, high-integrity alternative to Big AI. Their upcoming product launch and research agenda aim to democratize advanced AI tools while maintaining transparency—an implicit jab at the secrecy surrounding OpenAI and other giants.
This is a masterclass in timing, team-building, and mission-driven branding. For founders in tech, the key takeaway is talent + timing = tectonic shift. If your startup operates in a space dominated by giants, your competitive edge won’t just be tech—it’ll be trust, values, and clarity of purpose. Build with openness, rally visionary talent, and solve a specific pain point that Big Tech is too bloated or conflicted to tackle. And if your founding team is made up of defectors from the Death Star? Even better.
Source: Knight, W. (2025, July 16). Thinking Machines Lab recauda una cifra millonaria para su IA y anuncia a sus cofundadores [Thinking Machines Lab raises millions for its AI and announces its cofounders]. WIRED. https://es.wired.com/articulos/thinking-machines-lab-recauda-una-cifra-millonaria-para-su-ia-y-anuncia-a-sus-cofundadores
Rise of the Code Whisperer: Ex-Google Rebels Build AI That Reads Slack and Writes Software
Reflection, a stealth-mode AI startup led by former Google DeepMind and Gemini researchers, has introduced Asimov, a groundbreaking multi-agent system trained not just to generate code, but to understand how real software is built inside organizations. By analyzing codebases, Slack messages, emails, and project updates, Asimov learns from a company's actual workflows—aiming to evolve from assistant to autonomous engineer. Unlike traditional tools, Asimov is designed to read more than it writes, focusing on understanding complex team dynamics and context.
With reinforcement learning and secure architecture, Asimov already outperforms competitors like Anthropic’s Claude Code in developer preference surveys. Reflection’s moonshot? Teaching AI to master coding as a gateway to superintelligence. Instead of building flashy UI-driven agents, they train Asimov to act like an in-house engineer who “gets” your product roadmap. Backed by Sequoia and praised by experts like MIT’s Daniel Jackson (with caveats), Reflection believes AI agents will soon become the organizational brains of the future—writing, fixing, and eventually inventing software systems and infrastructure with minimal human input.
Forget shiny demos—context is king. If you’re building with or for AI, look at what Reflection nailed: understanding the entire environment of a workflow, not just its outputs. Entrepreneurs should stop chasing “code-generation magic” and instead aim to embed their solutions into the invisible layers of how teams actually build. If your AI understands the messy reality—emails, updates, Slack chaos—and creates signal out of noise, you’re not just saving time; you’re building trust. That’s where product-market fit lives in the AI age.
Source: Knight, W. (2025, July 16). Exinvestigadores de Google crean una IA que transforma conversaciones y código en software funcional [Former Google researchers build an AI that turns conversations and code into working software]. WIRED. https://es.wired.com/articulos/exinvestigadores-de-google-crean-una-ia-que-transforma-conversaciones-y-codigo-en-software-funcional
Meta Strikes Again: Steals Two More AI Stars from OpenAI in Billion-Dollar Talent War
Meta has poached two of OpenAI’s rising stars—Jason Wei and Hyung Won Chung—for its ambitious new superintelligence lab, escalating the high-stakes AI talent war. Wei, a reinforcement learning enthusiast known for his work on OpenAI’s o3 and deep research models, and Chung, a specialist in reasoning and agent design, previously collaborated at both Google and OpenAI. Their Slack profiles at OpenAI are now inactive, a silent but clear sign of their defection. Meta, Wei, Chung, and OpenAI have yet to comment publicly, but sources confirm the strategic move.
This is just the latest episode in Meta’s aggressive recruitment spree, with Mark Zuckerberg reportedly offering up to $300 million over four years to elite AI researchers. Meta’s Superintelligence Lab now resembles a who’s-who of former OpenAI talent, while OpenAI retaliates by pulling senior engineers from Tesla, xAI, and Meta itself. Wei cryptically reflected online on reinforcement learning’s life lessons—imitation may work early, but true breakthroughs demand risk-taking and forging your own path. Sounds like Meta’s entire hiring strategy in a sentence.
This isn’t just corporate gossip—it’s a signal. The war for AI talent has gone thermonuclear. If you're building an AI startup, remember: your edge won't be compute—it'll be conviction, culture, and mission. Money alone won't retain top minds; purpose will. Build teams with shared vision and let them co-own the moonshot. Also, pay attention to the meta-game (pun intended): when giants pivot, they signal new openings for niche players. The exodus from Big AI creates space for rebels to innovate faster and freer.
Source: Robison, K. (2025, July 16). Meta se lleva a otros dos destacados investigadores de OpenAI [Meta poaches two more top OpenAI researchers]. WIRED. https://es.wired.com/articulos/meta-se-lleva-a-otros-dos-destacados-investigadores-de-openai
The AI IQ Crisis: How Machines Are Outsmarting the Tests—But Not Really Thinking
As AI systems like ChatGPT, Claude, and Gemini evolve at breakneck speed, one big question looms: How smart are they, really? With nearly 70 large language models (LLMs) in circulation—and hundreds more specialized ones—it’s become increasingly difficult to compare their capabilities or measure progress toward Artificial General Intelligence (AGI). That’s where benchmarks come in: standardized tests like MMLU, HellaSwag, HumanEval, and TruthfulQA attempt to gauge everything from reasoning to programming to honesty. But critics argue that many of these models are simply pattern-recognizing parrots—scoring high not because they understand, but because they’ve seen the answers before.
Newer, more nuanced benchmarks like ARC-AGI and SWE-bench aim to test real-world problem solving and abstract thinking, but even these are being gamed. OpenAI’s claim that its o3 model scored 87.5% on ARC-AGI raised eyebrows—and was later dissected by experts who said the system brute-forced answers instead of showing true reasoning. Human-based evaluations, such as LMArena, are emerging as more meaningful alternatives, on the premise that as machines mimic us ever more closely, only humans can judge how near they really get to thinking like us.
If you’re building AI products, don’t get blinded by benchmark scores alone—they’re the vanity metrics of the AI world. Focus instead on outcomes, trust, and context-specific performance. A model that scores 95% on HellaSwag might still hallucinate when it’s summarizing your client's quarterly earnings. For founders, the real competitive edge lies in human-centered evaluation—designing systems where users, not benchmarks, validate value. Build for depth, not demos.
Source: Signorelli, A. D. (2025, July 14). Qué pruebas usamos para medir lo “inteligente” que es una IA [Which tests we use to measure how “intelligent” an AI is]. WIRED. https://es.wired.com/articulos/que-pruebas-usamos-para-medir-lo-inteligente-que-es-una-ia