Google III (2015 - 2025)
This is the story of how the world’s greatest business faces its greatest test: can they disrupt themselves without losing their $140B annual profit-generating machine in Search?
Google faces the greatest innovator’s dilemma in history. They invented the Transformer — the breakthrough technology powering every modern AI system from ChatGPT to Claude (and, of course, Gemini). They employed nearly all the top AI talent: Ilya Sutskever, Geoff Hinton, Demis Hassabis, Dario Amodei — more or less everyone who leads modern AI worked at Google circa 2014. They built the best dedicated AI infrastructure (TPUs!) and deployed AI at massive scale years before anyone else. And yet... the launch of ChatGPT in November 2022 caught them completely flat-footed. How on earth did the greatest business in history wind up playing catch-up to a nonprofit-turned-startup?
Today we review the complete story of Google’s 20+ year AI journey: from their first tiny language model in 2001 through the creation of Google Brain, the birth of the transformer, the talent exodus to OpenAI (sparked by Elon Musk’s fury over Google’s DeepMind acquisition), and their current all-hands-on-deck response with Gemini. And oh yeah — a little business called Waymo that went from crazy moonshot idea to doing more rides than Lyft in San Francisco, potentially building another Google-sized business within Google.
Kyle’s Rating: 10/10
This episode showcased Acquired’s signature storytelling at its best, weaving together the complex narrative of how Google was both the creator of modern AI (inventing the transformer, building the infrastructure, training virtually every AI researcher) and a company constrained by business model conflicts that prevented it from shipping a ChatGPT-like product years earlier. Ben and David’s analysis provides much-needed nuance to the simplistic “Google lost the AI race” narrative.
Company Overview
Company: Google (Alphabet Inc.)
Founded: 1998
Headquarters: Mountain View, California
Google stands at a critical juncture as both the creator of foundational AI technology (the transformer) and a company whose core search business faces potential disruption from the very AI revolution it helped spawn. The company exemplifies the innovator’s dilemma - having invented the transformer in 2017 that powers today’s AI boom, yet struggling initially to commercialize it while competitors like OpenAI capitalized on Google’s own breakthrough.
Narrative
I: The Early Foundation of AI at Google (2000-2011)
Larry Page always conceived of Google as an AI company, influenced by his father’s then-contrarian PhD work in machine learning. In 2002, Page declared that “artificial intelligence would be the ultimate version of Google.”
The journey began with a 2000-2001 lunch at which George Harik theorized to Noam Shazeer that compressing data is equivalent to understanding it. The pair built PHIL, the language model behind the “Did you mean?” spell-correction feature, which also powered Jeff Dean’s one-week AdSense implementation that generated billions almost overnight.
In 2007, Sebastian Thrun joined after Larry Page acqui-hired his nascent startup. Franz Och had won DARPA’s machine translation challenge, but his system took 12 hours per sentence - Jeff Dean reduced this to 100 milliseconds. Thrun recruited Geoff Hinton, who was pursuing then-heretical deep neural networks. Google X launched in 2009 with the self-driving car project (later Waymo) first, then Google Brain in 2011, led by Andrew Ng and Jeff Dean using the DistBelief distributed training system.
II: Google Brain and the Era of Dominance (2012-2017)
The 2012 “cat paper” changed everything. Using 16,000 CPU cores, the team trained a neural network to identify cats in YouTube videos without supervision. This solved YouTube’s massive problem: people uploaded videos but were terrible at describing them. The breakthrough led to hundreds of billions of dollars in revenue across the industry. As David notes, “The AI era started in 2012” for companies with social feeds. YouTube transformed into “the single biggest property on the internet.” Facebook hired Yann LeCun to replicate this. ByteDance built TikTok on it.
Simultaneously, AlexNet achieved a roughly 40% improvement on the ImageNet benchmark using Nvidia gaming GPUs - two GTX 580s from Best Buy. This set Nvidia on the path to becoming the world’s most valuable company. Google acquired DNNresearch for $44 million after Geoff Hinton ran an auction from his hotel room in Lake Tahoe, with bidders including Baidu, Microsoft, and, surprisingly, DeepMind.
Google’s $550 million DeepMind acquisition in 2014 proved pivotal. The deal became a bidding war: Facebook offered $800 million and Elon Musk offered Tesla stock (worth 70x today), but Demis felt a kinship with Larry Page. Google agreed to keep DeepMind independent in London.
Google ordered 40,000 GPUs from Nvidia for $130 million - 3% of Nvidia’s revenue at a time when its market cap was just $10 billion. But Jeff Dean calculated that running speech recognition for every Android phone would require doubling Google’s data centers. The solution: Tensor Processing Units (TPUs), developed in 15 months in Madison, Wisconsin, and designed to fit into hard drive slots. Google kept them secret while AlphaGo ran on just four TPUs to beat the world champion.
III: The Transformer Revolution and Google’s Missed Opportunity (2017-2022)
In 2017, eight Google Brain researchers published “Attention Is All You Need,” introducing the transformer, an architecture that processes entire text passages simultaneously. Jakob Uszkoreit conceived the attention mechanism, but initial implementations failed. Then Noam Shazeer joined and rewrote the codebase from scratch - what teammates called “wizardry.” The transformer crushed LSTMs. More critically, the team discovered scaling laws: bigger models meant better results, with no apparent ceiling.
Greg Corrado told the hosts the transformer was “so elegant that people’s response was often, this can’t work. It’s too simple.” Google immediately built the BERT and MUM models on it - so it is false to say that Google did nothing with the transformer paper. But they treated it as an incremental improvement rather than a platform shift.
Noam Shazeer saw the potential immediately. He built Meena, an internal ChatGPT predecessor, and pitched leadership: “drop the ten blue links and pivot search to a giant chatbot.” But Google faced three insurmountable problems:
Business model destruction: Google’s search ads generate roughly $400 per US user annually, the backbone of $370 billion in total revenue. Direct AI answers threaten to eliminate this entire ecosystem. As Ben explains, “This isn’t just a UI problem; it’s an existential business model challenge.”
Legal liability: Google spent decades navigating publisher relationships. Even info boxes required extensive legal review. A chatbot synthesizing information would exponentially increase copyright exposure.
Brand trust: Users trust Google’s accuracy absolutely. But LaMDA could be asked “who should die” and would provide names. The reputational risk could destroy decades of trust overnight.
Despite Sundar declaring “we are an AI first company” in 2017, Google treated AI as a sustaining innovation - something that strengthens existing products. They integrated AI everywhere: search ranking, YouTube recommendations, Gmail autocomplete. The notion that AI could disrupt their own business seemed absurd.
This conservatism triggered an exodus. Ilya Sutskever left for OpenAI after Elon Musk and Sam Altman’s 2015 dinner at the Rosewood Hotel. When researchers were asked what would make them leave Google, almost everyone said “nothing” - but Ilya found it “interesting to try,” despite Jeff Dean personally doubling his offer. By 2021, Noam Shazeer had left for Character.AI. Eventually all eight transformer authors departed - what the hosts call “one of the greatest talent and IP losses in corporate history.”
IV: Code Red and Google’s Response (2022-2025)
ChatGPT’s November 30, 2022 launch - right around Thanksgiving, “OpenAI’s favorite time for drama” - shattered Google’s worldview. The product hit 100 million users in two months. As David explains: “Up until December 2022, Google viewed AI as a sustaining innovation. Overnight, it became a disruptive innovation and existential threat.”
Microsoft’s February 2023 Bing announcement made everything worse. Satya Nadella declared “It’s a new day for search” and later boasted, “I want people to know that we made Google dance.” With Microsoft entitled to 49% of OpenAI’s profits after its $13 billion investment, Google’s original enemy had returned with a legitimate threat.
Google’s response was catastrophic. They rushed Bard to launch, but the demo contained a factual error, causing an 8% stock drop - billions in market cap destroyed by one video. The product lacked the RLHF (reinforcement learning from human feedback) polish that made ChatGPT feel magical. Swapping LaMDA for PaLM didn’t help; they remained behind GPT-4.
Then Sundar made two transformative decisions.
First, merging Brain and DeepMind under Demis Hassabis, despite violating the acquisition terms - DeepMind had been promised independence. But running two competing AI labs while facing an existential threat was untenable. Demis received the mandate: “Change the past 10 years of culture around building and shipping AI products.”
Second, standardizing on Gemini - one model for everything. Scaling laws made this necessary: multiple models mean duplicating enormous training costs while diluting quality. This avoided the fragmentation that killed Google+.
With Jeff Dean and Noam Shazeer (returned via the $2.7 billion Character.AI deal) leading, plus Sergey Brin actively contributing, Google compressed years of work into months. They announced Gemini in May 2023, released it in December, then shipped iterations at “Nvidia pace.” Sundar mandated a careful ballet: protect search growth while creating the best AI experience. They launched AI Overviews selectively, tested AI Mode carefully, and kept Gemini separate while experimenting on google.com.
By October 2025, Gemini had reached 450 million users, with Google revenue at all-time highs. Google uniquely possesses all four AI pillars - application, model, chip, and cloud - entering “the most capital-intensive race in business history” while still doing stock buybacks. They’re processing nearly a quadrillion tokens a month on infrastructure no competitor can match.
Yet the fundamental tension remains. As Ben frames it: “Larry and Sergey say they’d rather go bankrupt than lose at AI. Will they really?” When AI provides direct answers instead of ads, Google must choose between their mission and margins. The hosts conclude Google is “doing the best job threading the needle,” but can they sustain both when those goals no longer align?
Notable Facts
The transformer paper is the 7th most cited of the 21st century with 173,000+ citations; all eight authors left Google
Google processes nearly 1 quadrillion tokens per month (a 50x increase from 2024), dwarfing competitors
Waymo shows 91% fewer serious crashes than human drivers with 100M+ autonomous miles
Google owns 2-3 million TPUs, paying ~50% margins through Broadcom vs the ~80% competitors pay Nvidia
Financial & User Metrics
Overall Google Financial Performance:
Revenue: $370 billion over last 12 months
Earnings: $140 billion over last 12 months (the most profit of any tech company globally; among all companies, only Saudi Aramco generates more)
Market cap: $3 trillion (fourth most valuable company behind Nvidia, Microsoft, Apple)
Cash position: $95 billion in cash and marketable securities (down from $140 billion in 2021 due to AI CapEx and shareholder returns)
Per-user economics: Google generates approximately $400 per user per year from free search in the US
AI-Specific Metrics:
Gemini monthly active users: 450 million (though Ben and David question what exactly is counted)
Token processing: 980 trillion tokens per month as of June 2025 (a 50x year-over-year increase)
Infrastructure: 2-3 million TPUs deployed (comparable to Nvidia’s 4 million GPUs shipped in 2024)
AI hardware investment: Initial $130 million for 40,000 GPUs (2014), now billions annually
TPU development: 15 months from conception to deployment, kept secret for over a year
Google Cloud Metrics:
Current revenue: $50+ billion annual run rate
Growth rate: 30% year-over-year (fastest growing of major cloud providers)
Historical growth: 5x revenue in 5 years (from ~$10 billion to $50+ billion)
Profitability: Achieved profitability in 2023
Workforce: ~10,000 people hired into go-to-market organization under Thomas Kurian
Subscription & Consumer AI:
Google One subscribers: 150+ million (growing ~50% year-over-year)
AI premium tier: $20/month (small fraction of total subscribers currently)
Comparison: Netflix has hundreds of millions of subscribers; Spotify has 250+ million
Key Acquisitions & Deals:
DeepMind acquisition: $550 million (2014) - potentially worth $500 billion today per hosts
DNNresearch acquisition: $44 million (2013) after competitive auction
Character.AI licensing deal: $2.7 billion (2024) to bring back Noam Shazeer
Waymo Metrics:
Total autonomous miles: 100+ million with no human driver
Weekly growth: 2 million miles per week
Paid rides: 10+ million total, hundreds of thousands weekly
Fleet size: 2,000 vehicles across 5 cities
Safety improvement: 91% fewer serious crashes than human drivers
Total investment: $10-15 billion over 15+ years (one year of Uber’s current profits)
Market position: Reportedly doing more gross bookings than Lyft in San Francisco (January 2025)
Google Cloud
Google Cloud began in 2008 as App Engine - quintessentially Google: highly opinionated, requiring specific SDKs and languages. The cultural mismatch was stark: Google made self-service consumer products monetized through ads, with no enterprise sales culture. By 2017, after nine years, revenue was just $4 billion.
The turning point came with hiring Thomas Kurian from Oracle in 2018. He hired 10,000 go-to-market people from a base of 150. Revenue exploded to $50+ billion today, growing 30% annually. Two strategic moves proved crucial: launching Kubernetes for multi-cloud positioning, and offering TPUs exclusively through Cloud.
Cloud became strategically essential for AI as both distribution mechanism and the only way to offer TPUs externally. Without Cloud, Google couldn’t compete in chips - Amazon and Microsoft wouldn’t put TPUs in their clouds. The irony: Google was “a cloud company all along” but needed Kurian to teach them enterprise sales.
Waymo
Waymo’s origins trace to the DARPA Grand Challenge, which Sebastian Thrun’s Stanford team won in 2005 using a software-first approach - commodity sensors on a stock Volkswagen versus competitors’ hardware-heavy builds. They used machine learning to combine laser and camera data, identifying safe paths by color-matching terrain.
When Sebastian joined Google in 2007, Larry pushed him to pursue self-driving. Project Chauffeur became Google X’s first project in 2009. The team completed the “Larry 1000” (1,000 miles of difficult California roads) in 18 months, but commercialization took another 15 years due to the long tail of edge cases.
Waymo incorporated deep learning gradually - convolutional neural nets in 2013, transformer insights by 2017. After raising billions and spinning out in 2016, they launched commercial service in Phoenix (2020) then San Francisco (2024), quickly surpassing Lyft’s gross bookings.
Today: 100+ million autonomous miles, 91% fewer serious crashes, potential to save $420 billion annually in accident costs. Total investment of just $10-15 billion - “one year of Uber’s profits.” As Ben notes: “slowly and then all at once” - Waymo proves Google can still execute massive technical challenges, potentially worth hundreds of billions independently.
“Attention Is All You Need”: The Transformer Paper
In 2017, eight Google Brain researchers published a paper that would fundamentally reshape artificial intelligence, though Google itself would initially fail to recognize its revolutionary potential. The transformer architecture emerged from Jakob Uszkoreit’s insight: instead of processing language sequentially, what if a model could examine entire text passages at once - similar to how professional translators read an entire document before translating? This was computationally expensive but perfectly suited to Google’s parallel infrastructure.
At the time, LSTMs (Long Short-Term Memory networks) seemed like the future, having reduced Google Translate’s error rate by 60% in 2016. But LSTMs had a critical flaw: they were computationally expensive and didn’t parallelize well, limiting scalability.
The transformer project initially struggled - the implementation wasn’t beating LSTMs. Then Noam Shazeer joined and completely rewrote the codebase from scratch. In what teammates called wizardry, he made it work. The transformer didn’t just match LSTMs - it crushed them. More importantly, they discovered scaling laws: the bigger the model, the better the results. This would define the entire modern AI era.
Greg Corrado told the hosts the transformer was “so elegant that people’s response was often, this can’t work. It’s too simple.” The architecture was barely a neural network - just attention mechanisms and feed-forward layers. This simplicity reflected a pattern: the most efficient solutions survive, and breakthroughs are often surprisingly simple.
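To make that simplicity concrete, here is a minimal sketch (ours, not Google’s code) of the scaled dot-product attention at the core of the architecture, in NumPy. The key property for this story is that the scores matrix compares every token with every other token in a single matrix multiply — exactly what let transformers exploit Google’s parallel hardware where step-by-step LSTMs could not.

```python
# Minimal self-attention sketch (illustrative only, not Google's implementation).
import numpy as np

def softmax(x, axis=-1):
    # Subtract the row max for numerical stability before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def scaled_dot_product_attention(Q, K, V):
    """Q, K, V: (seq_len, d_k) arrays of query, key, and value vectors."""
    d_k = Q.shape[-1]
    # All token pairs compared at once -- fully parallel, no sequential loop.
    scores = Q @ K.T / np.sqrt(d_k)       # (seq_len, seq_len)
    weights = softmax(scores, axis=-1)    # each row: attention over all tokens
    return weights @ V                    # weighted mix of value vectors

# Toy usage: 4 tokens with 8-dimensional embeddings, self-attention (Q = K = V).
x = np.random.default_rng(0).normal(size=(4, 8))
print(scaled_dot_product_attention(x, x, x).shape)  # (4, 8)
```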
The transformer proved that in virtually every field of AI, you could just add more data and compute to a scalable architecture and performance would improve predictably. No more clever algorithms needed; just scale. As Rich Sutton would later articulate in “The Bitter Lesson” (2019), this was the future.
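For reference, that “predictable improvement” was later quantified as a power law (in Kaplan et al.’s 2020 scaling-laws paper, not in this episode); a simplified form:

```latex
% Simplified scaling law from Kaplan et al. (2020): test loss L falls as a
% power law in parameter count N, with N_c and alpha_N > 0 fitted empirically.
L(N) \approx \left( \frac{N_c}{N} \right)^{\alpha_N}
```

Increasing N buys a predictable reduction in loss, with no ceiling observed over the ranges measured: the empirical heart of “just scale.”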
Google allowed the team to publish openly as “Attention Is All You Need” - a Beatles reference. The paper has been cited over 173,000 times, becoming the 7th most cited paper of the 21st century.
The irony is profound. Google used transformers for BERT and MUM models that improved search. As the hosts emphasize, “It is a false narrative that Google did nothing with the transformer.” But critically, they didn’t treat it as a wholesale platform change - they saw incremental improvement while others saw revolution.
Every other AI lab recognized the potential. OpenAI built GPT on it. Anthropic used it for Claude. Meta for Llama. Every modern LLM is built on Google’s invention.
The ultimate irony came in the talent exodus. All eight authors eventually left Google.
Google invented the technology revolutionizing AI, published it openly, then watched its researchers leave to build competing companies. The hosts call this “one of the greatest talent and IP losses in corporate history” - Google created the foundation for the modern AI revolution, gave it away, and initially failed to capitalize while others built the future on their invention.
Google: The Garden of Eden for AI Talent
As David emphasizes in the episode, ten years ago “basically every single person of note in AI worked at Google.” The hosts compare it to if IBM had hired every person who knew how to code at the dawn of computing. This unprecedented concentration of talent made Google the birthplace of modern AI, though most would eventually leave to spread the technology across the industry.
The Godfathers and Pioneers:
Geoff Hinton (2013-2023): The “Godfather of AI,” co-creator of AlexNet, technically hired as an intern in his mid-60s. Left Google in 2023 over AI safety concerns; remains at the University of Toronto.
Demis Hassabis (2014-present): DeepMind founder, chess prodigy, creator of AlphaGo. Still at Google as CEO of Google DeepMind.
Shane Legg (2014-present): DeepMind co-founder, popularized the term “AGI.” Still at Google DeepMind.
Mustafa Suleyman (2014-2019): DeepMind co-founder. Now head of AI at Microsoft after Inflection AI.
The Transformer Authors (All Left):
Noam Shazeer (2000-2021, 2024-present): Rewrote transformer code, created Meena chatbot. Founded Character.AI, returned for $2.7 billion.
Jakob Uszkoreit (at Google until ~2020): Conceived attention mechanism. Founded Inceptive.
Ashish Vaswani & Niki Parmar: Founded Adept AI (later Essential AI).
Llion Jones: Founded Sakana AI.
Aidan Gomez: Co-founded Cohere, major LLM competitor.
Lukasz Kaiser: Joined OpenAI.
Illia Polosukhin: Co-founded Near Protocol.
The OpenAI Exodus:
Ilya Sutskever (2013-2015): AlexNet co-creator, DNNresearch. Left to co-found OpenAI despite Jeff Dean doubling his offer.
Dario Amodei (2014-2016): Google Brain researcher. Founded Anthropic, created Claude.
Chris Olah (at Google Brain): Neural network interpretability pioneer. Now at Anthropic.
The Leaders Who Stayed:
Jeff Dean (1999-present): Legend who built AdSense in a week, created MapReduce, leads Gemini. Still Google’s Chief Scientist.
Sanjay Ghemawat (1999-present): Jeff’s coding partner, distributed systems genius. Still at Google.
Sundar Pichai (2004-present): Issued “Code Red,” merged DeepMind/Brain. CEO of Google/Alphabet.
The Builders and Innovators:
Sebastian Thrun (2007-2011): DARPA Grand Challenge winner, founded Google X and the self-driving project that became Waymo. Later founded Udacity and led Kitty Hawk.
Andrew Ng (2011-2012): Co-founded Google Brain. Founded Coursera, led Baidu AI, now at Landing AI.
Ian Goodfellow (2013-2016): Inventor of GANs, worked at Google Brain. Went to OpenAI, then Apple, now back at DeepMind.
Andrej Karpathy (brief stint): Became Tesla’s AI director, then back to OpenAI.
François Chollet (2015-present): Created Keras deep learning library. Still at Google.
The Infrastructure Architects:
Franz Och (2004-2014): Built Google Translate’s statistical models. Left to join Human Longevity Inc.
George Harik (1999-2006): Created PHIL language model, employee #10. Now venture capitalist.
As the hosts note, this concentration was both Google’s greatest strength and ultimately a weakness - they had everyone but couldn’t hold them once the world realized what AI could become. The diaspora from Google seeded every major AI lab: OpenAI, Anthropic, Inflection, Cohere, and countless startups, spreading the knowledge that would eventually come back to challenge Google itself.
Powers
Scale Economies: Ben and David identify this as Google’s strongest power in AI, operating on multiple levels. The company amortizes training costs across quadrillions of inference tokens - from 10 trillion monthly in April 2024 to 980 trillion by June 2025. Their infrastructure scale makes them the low-cost producer: they pay ~50% gross margins on TPUs through Broadcom versus the ~80% competitors pay Nvidia (a 2x versus 5x markup; see the arithmetic at the end of this section). With chips representing over 50% of datacenter costs, this difference is massive. Google uniquely has “self-sustaining funding” for AI development while competitors depend on external capital.
Cornered Resource: Google Search as the “front door to the internet” provides unparalleled distribution despite ChatGPT becoming “the Kleenex of the era.” Their dark fiber network, purchased for “pennies on the dollar” after the dot-com crash, creates unreplicable private data center connections. YouTube provides exclusive training data for video models. And they can pay billions to recapture talent like Noam Shazeer ($2.7 billion).
Branding: While cutting both ways as the incumbent, Google’s brand remains net positive. For most people, “they trust Google” while “they probably don’t trust these who-knows AI companies.” However, Ben notes the challenge: “they have the added challenge now of being the incumbent...people and the ecosystem isn’t necessarily rooting for them.”
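A quick sanity check on the 2x-versus-5x markup claim above: a supplier’s gross margin m implies a price-to-cost multiple of 1/(1 − m), so the episode’s margin figures convert directly:

```latex
% Markup implied by a supplier's gross margin m: price = cost / (1 - m).
\text{markup} = \frac{\text{price}}{\text{cost}} = \frac{1}{1 - m},
\qquad \frac{1}{1 - 0.5} = 2\times \;\text{(Broadcom TPUs)},
\qquad \frac{1}{1 - 0.8} = 5\times \;\text{(Nvidia GPUs)}
```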
Bull Case and Bear Case
Bull Case
Distribution to all humans: Google has distribution to basically all humans as the front door to the internet through search, and they can funnel that however they want. Despite ChatGPT becoming a household name, Google still processes vastly more queries daily and controls the primary text box for internet intent. This allows instant scaling of AI features to billions of users.
All four AI capabilities: Google has all the capabilities needed to win in AI - foundational model, chips, application, and cloud. As Ben emphasizes repeatedly, no other company has all four. They are a hyperscaler with self-sustaining funding from their search business, not reliant on VC cash like OpenAI or Anthropic. They are the only self-funded player in the frontier model race.
Infrastructure advantages: Google owns fat pipes connecting data centers from buying dark fiber after the dot-com crash. This private backhaul network that no competitor can match is essential for both YouTube and AI workloads. They have the world’s most sophisticated distributed computing infrastructure built over decades.
YouTube plus AI possibilities: The combination of YouTube’s massive video corpus with AI opens crazy possibilities for training, content generation, and monetization. As Ben Thompson outlined, Google could instantly make every product in every video shoppable using AI labeling, creating massive new revenue streams.
Talent density: Despite departures, Google retains incredible AI talent and has shown willingness to spend billions to get key people back (like the $2.7 billion for Noam Shazeer). They have Jeff Dean, Demis Hassabis, and Sergey Brin actively working on Gemini.
TPU economics: Unit economics on TPUs could lead to Google being the low-cost producer of tokens. With 2-3 million TPUs paying 50% margins to Broadcom versus competitors paying 80% margins to Nvidia (2x versus 5x markup), Google has a massive structural cost advantage in the most expensive component of AI infrastructure.
Personal data treasure trove: All of Google’s other products (Gmail, Calendar, Docs, Photos, Maps, Chrome, Android) give them a trove of personal data that they can use to create personalized AI products no competitor can match. This data moat compounds as users interact more with Google AI.
Better ads potential: AI may conceivably end up being a better ads business than regular search because users tend to type many more words (20+ versus 2-3 in search), which provides much better intent signals. The precision of understanding exactly what users want could enable dramatically higher ad rates.
Waymo opportunity: Waymo could be a Google-sized business on its own. With 91% fewer serious crashes than human drivers and potential to save $420 billion annually in accident costs, it represents hundreds of billions in value that’s completely separate from the search/AI business.
Bear Case
AI hasn’t lent itself to ads: Thus far, the AI product shape has not lent itself well to ads. Despite billions of interactions, there’s no clear model for monetizing chat conversations with advertising without destroying the user experience. Google makes $400 per US user annually from search ads - who will pay that for AI access?
Not clearly the best product: Unlike when Google Search launched and was immediately obviously superior, Gemini is not clearly the best product on the market. It’s arguably on par with several competitors (ChatGPT, Claude, etc.) but doesn’t have a compelling differentiation for most users.
Small market share in AI: Google is not a dominant player in AI like they are in search. They maybe have 25% of the AI market versus 90% in search. Even if they monetize AI users as well as search users, there are far fewer users to monetize. The market will likely remain fragmented with multiple strong players.
AI cannibalizes high-value search: AI might take away the majority of use cases from search, and even if it doesn’t, it will likely take the highest-value use cases. Trip planning, health queries, shopping research - these high-intent, high-value searches that attract premium ad rates are the first to move to AI.
Lost hearts and minds: People are not rooting for Google anymore - they’re rooting for the startups. Unlike the mobile transition, when Google was still seen as innovative, they’re now viewed as the slow incumbent. This affects talent recruitment, user enthusiasm, and media coverage.
Quintessence
Ben’s Quintessence:
This is “the most fascinating example of the Innovator’s Dilemma ever”
Google invented the transformer that powers the entire AI revolution, published it openly, then failed to capitalize
The fundamental tension: choosing between mission (organizing world’s information) and margins ($370 billion revenue)
Larry and Sergey say they’d “rather go bankrupt than lose at AI” - but will they really?
When AI provides direct answers instead of ten blue links with ads, which wins: mission or profits?
David’s Quintessence:
Google is “probably doing the best job of trying to thread the needle with AI” among big tech companies
Executing with “rapid but not rash” decision-making
Moving at “Nvidia pace” with Gemini releases while protecting the core franchise
The DeepMind/Brain merger and Gemini standardization show “incredibly commendable” leadership
Successfully navigating “the most capital-intensive race in business history” while doing stock buybacks
Carveouts
Ben’s Picks:
F1 Movie: Recommends seeing it in theaters for the “beautiful cinema” and surround sound experience
TravelPro Suitcase: His “budget pick gone right” - the $416 international checked-bag version that’s “robust” with smooth-gliding wheels, calling it perfect despite being “the most budget suitcase you could buy”
David’s Picks:
The Glue Guys Podcast: Features Sequoia partner Ravi Gupta with Shane Battier and Alex Smith; particularly recommends the Wright Thompson episode despite it having only 5,000 listens
Steam Deck Gaming Update: Shares how his daughter learned to play video games on his Steam Deck, describing watching her learn to use a joystick as “one of the most incredible experiences I’ve had as a parent”
Joint Announcement: Acquired will host the NFL Innovation Summit on the Friday before Super Bowl LX (February 2026) in San Francisco
Additional Notes
Episode Metadata:
Duration: 4:06:37
Release Date: October 6, 2025
Season: Fall 2025