Before Hollywood learned to animate pixels, Silicon Valley learned to animate light. The first dreamers weren’t directors — they were designers and engineers who turned math into motion, built the machines behind Jurassic Park and Toy Story, and taught computers to imagine. Now, those same roots are fueling a new frontier — AI video and generative storytelling.

By Skeeter Wesinger

October 8, 2025

Silicon Valley is best known for chips, code, and capital. Yet long before the first social network or smartphone, it was quietly building a very different kind of future: one made not of transistors and spreadsheets, but of light, motion, and dreams. Out of a few square miles of industrial parks and lab benches came the hardware and software that would transform Hollywood and the entire art of animation. What began as an engineering problem—how to make a computer draw—became one of the most profound creative revolutions of the modern age.

In the 1970s, the Valley was an ecosystem of chipmakers and electrical engineers. Intel and AMD were designing ever smaller, faster processors, competing to make silicon think. Fairchild, National Semiconductor, and Motorola advanced fabrication and logic design, while Stanford’s computer science labs experimented with computer graphics, attempting to render three-dimensional images on oscilloscopes and CRTs. There was no talk yet of Pixar or visual effects. The language was physics, not film. But the engineers were laying the groundwork for a world in which pictures could be computed rather than photographed.

The company that fused those worlds was Silicon Graphics Inc., founded in 1982 by Jim Clark in Mountain View. SGI built high-performance workstations optimized for three-dimensional graphics, using its own MIPS processors and hardware pipelines that could move millions of polygons per second—unheard of at the time. Its engineers created OpenGL, the standard that still underlies most 3D visualization and gaming. In a sense, SGI gave the world its first visual supercomputers. And almost overnight, filmmakers discovered that these machines could conjure scenes that could never be shot with a camera.

Industrial Light & Magic, George Lucas’s special-effects division, was among the first. Using SGI systems, ILM rendered the shimmering pseudopod of The Abyss in 1989, the liquid-metal T-1000 in Terminator 2 two years later, and the dinosaurs of Jurassic Park in 1993. Each of those breakthroughs marked a moment when audiences realized that digital images could be not just convincing but alive. Down the road, the small research group that would become Pixar was using SGI machines to render Luxo Jr. and eventually Toy Story, the first fully computer-animated feature film. In Redwood City, Pacific Data Images created the iconic HBO “space logo,” a gleaming emblem that introduced millions of viewers to the look of digital cinema. All of it—the logos, the morphing faces, the prehistoric beasts—was running on SGI’s hardware.

The partnership between Silicon Valley and Hollywood wasn’t simply commercial; it was cultural. SGI engineers treated graphics as a scientific frontier, not a special effect. Artists, in turn, learned to think like programmers. Out of that hybrid came a new creative species: the technical director, equal parts physicist and painter, writing code to simulate smoke or hair or sunlight. The language of animation became mathematical, and mathematics became expressive. The Valley had turned rendering into an art form.

When SGI faltered in the late 1990s, its people and its vision scattered outward. Jensen Huang, Curtis Priem, and Chris Malachowsky, graphics engineers who had cut their teeth at LSI Logic and Sun Microsystems, had already founded Nvidia in 1993 to shrink the power of those million-dollar workstations onto a single affordable board. Their invention of the graphics processing unit, or GPU, democratized what SGI had pioneered. Gary Tarolli left SGI to co-found 3dfx, whose Voodoo chips brought 3D rendering to the mass market. Jim Clark, SGI’s founder, went on to co-create Netscape, igniting the web era. SGI alumni went on to form Keyhole, whose Earth-rendering engine became Google Earth. Alias|Wavefront, once owned by SGI, evolved into Autodesk Maya, still the industry standard for 3D animation. What began as a handful of graphics labs had by the millennium become a global ecosystem spanning entertainment, design, and data visualization.

Meanwhile, Nvidia’s GPUs kept growing more powerful, and something extraordinary happened: the math that drew polygons turned out to be the same math that drives artificial intelligence. The parallel architecture built for rendering light and shadow was ideally suited to training neural networks. What once simulated dinosaurs now trains large language models. The evolution from SGI’s RealityEngine to Nvidia’s Tensor Core is part of the same lineage—only the subject has shifted from geometry to cognition.
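
To see how literal that kinship is, here is a minimal sketch (my illustration, not SGI’s or Nvidia’s code) showing that transforming vertices for rendering and evaluating a neural-network layer reduce to the same dense matrix arithmetic; the sizes and matrices are arbitrary stand-ins.

```python
# Minimal sketch: the same matrix multiply serves graphics and AI.
import numpy as np

# Graphics: transform a batch of 3D vertices by a 4x4 model-view matrix.
vertices = np.random.rand(10_000, 4)       # homogeneous coordinates (x, y, z, 1)
model_view = np.eye(4)                     # stand-in transform matrix
transformed = vertices @ model_view.T      # one large matrix multiply

# AI: a fully connected neural-network layer is the same operation.
activations = np.random.rand(10_000, 4)    # batch of input activations
weights = np.random.rand(4, 4)             # learned weight matrix
outputs = np.maximum(activations @ weights.T, 0)   # matmul plus ReLU

print(transformed.shape, outputs.shape)    # both (10000, 4)
```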

Adobe and Autodesk played parallel roles, transforming these once-elite tools into instruments for everyday creators. Photoshop and After Effects made compositing and motion graphics accessible to independent artists. Maya brought professional 3D modeling to personal computers. The revolution that began in a few Valley clean rooms became a global vocabulary. The look of modern media—from film and television to advertising and gaming—emerged from that convergence of software and silicon.

Today, the next revolution is already underway, and again it’s powered by Silicon Valley hardware. Platforms like Runway, Pika Labs, Luma AI, and Kaiber are building text-to-video systems that generate entire animated sequences from written prompts. Their models run on Nvidia GPUs, descendants of SGI’s original vision of parallel graphics computing. Diffusion networks and generative adversarial systems use statistical inference instead of keyframes, but conceptually they’re doing the same thing: constructing light and form from numbers. The pipeline that once connected a storyboard to a render farm now loops through a neural net.
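
As a rough illustration of that shared logic, the toy loop below (my sketch; the predict_noise stub stands in for a trained, prompt-conditioned network) starts from pure noise and repeatedly subtracts a predicted noise estimate, which is the basic move of a diffusion sampler.

```python
# Toy sketch of reverse diffusion: sculpt an image out of noise, no keyframes.
import numpy as np

def predict_noise(frame, step):
    """Stand-in for a trained network's noise estimate; a real model would
    condition on the text prompt and the step number."""
    return frame * 0.1

frame = np.random.randn(64, 64, 3)    # start from pure noise, not a blank canvas
for step in reversed(range(50)):      # walk the noise schedule backwards
    frame = frame - predict_noise(frame, step)

# Values contract step by step; a real model would steer them toward an image.
print(frame.std())
```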

This new era blurs the line between animator and algorithm. A single creator can describe a scene and watch it materialize in seconds. The tools that once required teams of engineers are being distilled into conversational interfaces. Just as the SGI workstation liberated filmmakers from physical sets, AI generation is liberating them from even the constraints of modeling and rigging. The medium of animation—once defined by patience and precision—is becoming instantaneous, fluid, and infinitely adaptive.

Silicon Valley didn’t just make Hollywood more efficient; it rewrote its language. It taught cinema to think computationally, to treat imagery as data. From the first frame buffers to today’s diffusion models, the through-line is clear: each leap in hardware has unlocked a new kind of artistic expression. The transistor enabled the pixel. The pixel enabled the frame. The GPU enabled intelligence. And now intelligence itself is becoming the new camera.

What began as a handful of chip engineers trying to visualize equations ended up transforming the world’s most powerful storytelling medium. The Valley’s real export wasn’t microchips or startups—it was imagination, made executable. The glow of every rendered frame, from Toy Story to the latest AI-generated short film, is a reflection of that heritage. In the end, Silicon Valley didn’t just build the machines of computation. It taught them how to dream.

Inside the ShinyHunters Breach: How a Cybercrime Collective Outsmarted Google

By Skeeter Wesinger

August 26, 2025

In June 2025, a phone call was all it took to crack open one of the world’s most secure companies. Google, the trillion-dollar titan that built Chrome, Gmail, and Android, didn’t fall to an exotic zero-day exploit or state-sponsored cyberweapon. Instead, it stumbled over a voice on the line.

The culprits were ShinyHunters, a name that has haunted cybersecurity teams for nearly half a decade. Their infiltration of Google’s Salesforce system—achieved by tricking an employee into installing a poisoned version of a trusted utility—didn’t yield passwords or credit card numbers. But it did uncover millions of names, emails, and phone numbers, and that was enough to unleash a global phishing storm and prove once again that the human element remains the weakest link in digital defense.

ShinyHunters first burst onto the scene in 2020, when massive troves of stolen data began appearing on underground forums. Early hits included databases from Tokopedia, Wattpad, and Microsoft’s private GitHub repositories. Over time, the group built a reputation as one of the most prolific sellers of stolen data, often releasing sample leaks for free to advertise their “work” before auctioning the rest to the highest bidder. Unlike some cybercrime groups that focus on a single specialty—ransomware, banking trojans, or nation-state espionage—ShinyHunters thrive on versatility. They have carried out brute-force intrusions, exploited cloud misconfigurations, and, as Google’s case shows, mastered social engineering. What ties their operations together is a single goal: monetization through chaos.

Their name itself comes from the Pokémon community, where “shiny hunters” are players obsessively searching for rare, alternate-colored Pokémon. It’s a fitting metaphor—ShinyHunters sift through digital landscapes looking for rare weaknesses, exploiting them, and then flaunting their finds in dark corners of the internet.

The attack on Google was as elegant as it was devastating. ShinyHunters launched what cybersecurity experts call a vishing campaign—voice phishing. An employee received a convincing phone call from someone posing as IT support. The hacker guided the target into downloading what appeared to be Salesforce’s Data Loader, a legitimate tool used by administrators. Unbeknownst to the victim, the tool had been tampered with. Once installed, it silently granted ShinyHunters remote access to Google’s Salesforce instance. Within hours, they had siphoned off contact data for countless small and medium-sized business clients. The breach didn’t expose Gmail passwords or financial records, but in today’s digital ecosystem, raw contact data can be just as dangerous. The stolen information became ammunition for phishing campaigns that soon followed—calls, texts, and emails impersonating Google staff, many of them spoofed to look as though they came from Silicon Valley’s “650” area code.

This wasn’t ShinyHunters’ first high-profile strike. They’ve stolen databases from major corporations including AT&T, Mashable, and Bonobos. They’ve been linked to leaks affecting over 70 companies worldwide, racking up billions of compromised records. What sets them apart is not sheer volume but adaptability. In the early days, ShinyHunters focused on exploiting unsecured servers and developer platforms. As defenses improved, they pivoted to supply-chain vulnerabilities and cloud applications. Now, they’ve sharpened their social engineering skills to the point where a single phone call can topple a security program worth millions. Cybersecurity researchers note that ShinyHunters thrive in the gray zone between nuisance and catastrophe. They rarely pursue the destructive paths of ransomware groups, preferring instead to quietly drain data and monetize it on dark web markets. But their growing sophistication makes them a constant wildcard in the cybercrime underworld.

Google wasn’t the only target. The same campaign has been tied to breaches at other major corporations, including luxury brands, airlines, and financial institutions. The common thread is Salesforce, the ubiquitous customer relationship management platform that underpins business operations worldwide. By compromising a Salesforce instance, attackers gain not only a list of customers but also context—relationships, communication histories, even sales leads. That’s gold for scammers who thrive on credibility. A phishing email that mentions a real company, a real client, or a recent deal is far harder to dismiss as spam. Google’s prominence simply made it the most visible victim. If a company with Google’s security apparatus can be tricked, what chance does a regional retailer or midsize manufacturer have?

At its core, the ShinyHunters breach of Google demonstrates a troubling shift in cybercrime. For years, the focus was on software vulnerabilities—buffer overflows, unpatched servers, zero-days. Today, the battlefield is human psychology. ShinyHunters didn’t exploit an obscure flaw in Salesforce. They exploited belief. An employee believed the voice on the phone was legitimate. They believed the download link was safe. They believed the Data Loader tool was what it claimed to be. And belief, it turns out, is harder to patch than software.

Google has confirmed that the incident did not expose Gmail passwords, and it has urged users to adopt stronger protections such as two-factor authentication and passkeys. But the broader lesson goes beyond patches or new login methods. ShinyHunters’ success highlights the fragility of digital trust in an era when AI can generate flawless fake voices, craft convincing emails, and automate scams at scale. Tomorrow’s vishing call may sound exactly like your boss, your colleague, or your bank representative. The line between legitimate communication and malicious deception is blurring fast. For ShinyHunters, that blurring is the business model. And for the rest of us, it’s a reminder that the next major breach may not come from a flaw in the code, but from a flaw in ourselves. If there is a consolation, it is that ShinyHunters leave traces of their own: the fake Gmail accounts they rely on form a trail that investigators can follow, and it may yet get them caught.

Beyond Zapier: What Happens When Workflow Automation Becomes Obsolete?

By Skeeter Wesinger

August 3, 2025

For years, tools like Zapier, LangChain, and Make (formerly Integromat) have served as the backbone of modern automation. They gave us a way to stitch together the sprawling ecosystem of SaaS tools, APIs, and data triggers that power everything from startups to enterprise platforms. They democratized automation, enabled lean teams to punch above their weight, and brought programmable logic to non-programmers.

But here’s the uncomfortable truth: their days are numbered.

These platforms weren’t designed to think—they were designed to follow instructions. They excel at task execution, but they fall short when the situation requires adaptation, judgment, or real-time negotiation between competing priorities. The problem isn’t what they do; it’s what they can’t do.

The Next Frontier: Intent-Driven Autonomy

The future doesn’t belong to systems that wait to be told what to do. It belongs to systems that understand goals, assess context, and coordinate actions without micromanagement. We’re entering the age of intent-driven autonomy, where AI agents don’t just execute; they plan, adapt, and negotiate across domains.

Imagine a world where your AI agent doesn’t wait for a Zap to send an email—it anticipates the follow-up based on urgency, sentiment, and your calendar. Where you don’t need to build a LangChain flow to summarize documents—your agent reads, tags, stores, and cross-references relevant data on its own. Where infrastructure no longer needs triggers because it has embedded agency—software that adjusts itself to real-world feedback without human intervention.

This is more than automation. This is cognition at the edge of software.

Why This Isn’t Hype

We’re already seeing signs. From autonomous GPT-based agents like AutoGPT and CrewAI to self-updating internal tools powered by vector databases and real-time embeddings, the scaffolding of tomorrow is under construction today. These agents won’t need workflows—they’ll need guardrails. They’ll speak natural language, interact across APIs, observe results, and self-correct. And instead of chaining actions together, they’ll pursue objectives.
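
What might such an objective-driven agent look like structurally? The sketch below is a framework-free guess, with guardrails expressed as veto predicates over proposed actions; every name in it (the Agent class, the send_email tool, the example rule) is invented for illustration, not drawn from AutoGPT or CrewAI.

```python
# Minimal sketch of an objective-driven agent loop with guardrails.
from dataclasses import dataclass, field

@dataclass
class Agent:
    objective: str
    guardrails: list = field(default_factory=list)  # predicates that can veto actions
    history: list = field(default_factory=list)

    def plan(self):
        # A real agent would ask a language model for the next action here.
        return {"tool": "send_email", "args": {"to": "client@example.com"}}

    def allowed(self, action):
        return all(rule(action) for rule in self.guardrails)

    def step(self):
        action = self.plan()
        if not self.allowed(action):
            self.history.append(("vetoed", action))   # guardrail fired; re-plan next pass
            return
        result = f"executed {action['tool']}"         # stand-in for a real tool call
        self.history.append(("observed", result))     # observe, then self-correct

# Guardrail: never email outside the company domain.
no_external_mail = lambda a: a["args"].get("to", "").endswith("@example.com")

agent = Agent(objective="follow up on overdue invoices",
              guardrails=[no_external_mail])
agent.step()
print(agent.history)   # [('observed', 'executed send_email')]
```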

Don’t Panic. But Do Prepare.

This doesn’t mean Zapier or LangChain failed. On the contrary, they paved the way. They taught us how to think modularly, how to connect tools, and how to make systems work for us. But as we move forward, we need to unlearn some habits and embrace the shift from rigid logic to adaptive intelligence.

The question for builders, founders, and technologists isn’t “What should I automate next?” It’s “What kind of agency am I ready to give my systems?”

Because the future isn’t about building better workflows. It’s about building systems that don’t need them.

Banking Without Prompts: Autonomous AI Agents and the Future of Finance

By Skeeter Wesinger

August 1, 2025

As artificial intelligence evolves beyond chatbots and scripted assistants, a new kind of intelligence is emerging—one that doesn’t wait to be asked, but rather understands what needs to happen next. In the world of finance, this evolution marks a profound shift. Autonomous AI agents are poised to redefine how we interact with our money, our banks, and even decentralized systems like Bitcoin. They will not simply respond to prompts. They will act on our behalf, coordinating, securing, optimizing, and executing financial operations with a level of contextual intelligence that eliminates friction and anticipates needs.

In traditional banking, autonomous agents will operate across the entire customer lifecycle. Instead of relying on users to initiate every action, these systems will recognize patterns, detect anomalies, and carry out tasks without requiring a single command. They will notice unusual account activity and intervene before fraud occurs. They will detect opportunities for savings, debt optimization, or loan restructuring and act accordingly, surfacing choices only when human approval is required. Agents will onboard new customers by retrieving identity credentials, verifying documents through secure biometric scans, and completing compliance steps in seconds—all in the background. On the back end, these agents will navigate regulatory checkpoints, reconcile ledgers, update Know Your Customer (KYC) files, and monitor compliance thresholds in real time. They will not replace bankers—they will become the invisible machinery that supports them.
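
To make “recognize patterns, detect anomalies” concrete, here is the simplest possible version, a hedged sketch of my own rather than any bank’s method: a z-score test that holds a transaction far outside a customer’s baseline for review. The numbers are invented, and production systems would use far richer models.

```python
# Minimal sketch: flag transactions that sit far outside a customer's baseline.
import statistics

def is_anomalous(baseline, amount, threshold=3.0):
    """Return True when `amount` is more than `threshold` standard deviations
    from the mean of the customer's recent history."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    return stdev > 0 and abs(amount - mean) / stdev > threshold

recent = [42.00, 38.50, 51.20, 47.00, 40.80, 39.90, 45.30]  # routine spending
print(is_anomalous(recent, 2150.00))   # True: hold and verify before it clears
print(is_anomalous(recent, 44.10))     # False: routine spend, no interruption
```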

In the realm of Bitcoin and digital assets, the impact will be just as profound. Managing wallets, executing transactions, and securing assets in a decentralized environment is complex and often inaccessible to non-experts. Autonomous agents will quietly manage these processes. They will optimize transaction fees based on current network conditions, initiate trades under preset thresholds, rotate keys to enhance security, and notify users only when intervention is required. In decentralized finance, agents will monitor liquidity positions, collateral ratios, and yield performance. When conditions change, the system will react without being told—reallocating, unwinding, or hedging positions across decentralized platforms. In multi-signature environments, agents will coordinate signing sequences among stakeholders, manage the quorum, and execute proposals based on a shared set of rules, all without a central authority.
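
As one concrete (and entirely hypothetical) reading of “optimize transaction fees based on current network conditions,” the sketch below maps urgency to a confirmation target and picks a fee rate from a table of estimates; a real agent would pull those estimates from a node or a fee-estimation service rather than a hard-coded dict.

```python
# Hypothetical sketch: choose a Bitcoin fee rate from mempool-based estimates.
def choose_fee_rate(urgency, estimates):
    """`estimates` maps a confirmation target (blocks) to a fee rate (sat/vB)."""
    target = {"low": 144, "normal": 6, "high": 1}[urgency]
    # If the exact target is missing, fall back to the highest (safest) rate.
    return estimates.get(target, max(estimates.values()))

mempool_estimates = {1: 42.0, 6: 18.5, 144: 3.1}      # illustrative numbers only
print(choose_fee_rate("normal", mempool_estimates))   # -> 18.5 sat/vB
print(choose_fee_rate("high", mempool_estimates))     # -> 42.0 sat/vB
```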

Crucially, these agents will act without compromising privacy. They will utilize zero-knowledge proofs to perform audits, verify compliance, or authenticate identity without disclosing personal data. They will operate at the edge when necessary, avoiding unnecessary cloud dependency, while still syncing securely across systems and jurisdictions. Whether in traditional banking, Bitcoin custody, or the emerging DeFi landscape, these agents will not just streamline finance—they will secure it, fortify it, and make it more resilient.

We are moving toward a world where finance no longer requires constant attention. The prompt—once essential—becomes redundant. You won’t need to ask for a balance, check your rates, or move funds manually. Your presence, your intent, and your context will be enough. The system will already know. It will already be working.

Contact: Skeeter Wesinger

Senior Research Fellow

Autonomous Systems Technology and Research

skeeter@skeeter.com

For inquiries, research partnerships, or technology licensing.

Scattered Spider: Impersonation and Cybersecurity in the Age of Cloud Computing

By Skeeter Wesinger
June 29, 2025

In an era where companies have moved their infrastructure to the cloud and outsourced much of their IT, one old-fashioned tactic still defeats the most modern defenses: impersonation.
At the center of this threat is Scattered Spider, a cybercriminal collective that doesn’t exploit code—they exploit people. Their operations are quiet, persuasive, and dangerously effective. Instead of smashing through firewalls, they impersonate trusted employees—often convincingly enough to fool help desks, bypass multi-factor authentication, and gain access to critical systems without ever tripping an alarm.
This is the cybersecurity challenge of our time. Not ransomware. Not zero-days. But trust itself.
Who Is Scattered Spider?
Known to threat intelligence teams as UNC3944, Muddled Libra, or 0ktapus, Scattered Spider is an English-speaking group that has compromised some of the most security-aware companies in North America. Their breaches at MGM Resorts and Caesars Entertainment made headlines—not because they used sophisticated malware, but because they didn’t have to.
Their weapon of choice is the phone call. A help desk technician receives a request from someone claiming to be a senior executive who lost their device. The impersonator is articulate, knowledgeable, and urgent. They know internal jargon. They cite real names. Sometimes, they even use AI-generated voices.
And too often, it works. The attacker gets a password reset, reroutes MFA codes, and slips in undetected.
The Illusion of Familiarity
What makes these attackers so dangerous is their ability to sound familiar. They don’t just say the right things—they say them the right way. They mirror internal language. They speak with confidence. They understand hierarchy. They’re skilled impersonators, and they prey on a simple reflex: the desire to help.
In the past, we might have trusted our ears. “It sounded like them,” someone might say.
But in the age of AI, “sounding like them” is no longer proof of identity. It’s a liability.
When Cloud Isn’t the Cure
Many organizations have moved to cloud-based environments under the assumption that centralization and managed services will reduce their exposure. In some ways, they’re right: the cloud simplifies infrastructure and offloads security operations. But here’s the truth: you can’t outsource responsibility. The human layer remains—and that’s precisely where Scattered Spider operates.
They don’t need to breach Azure or AWS. They just need to impersonate someone with access to it.
It’s time we stop treating “trust but verify” as a cliché and start treating it as operational policy. Better yet: trust—but always verify. Every request. Every reset. Every exception.
Verification today means more than checking a box. It requires multi-channel authentication. It means never resetting MFA or passwords based solely on a phone call, no matter how credible the caller seems. It means locking down help desk protocols so impersonation doesn’t slip through the cracks.
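
Written down as policy, that might look like the sketch below: a minimal illustration of my own (the channel names are invented) in which no single channel, least of all an inbound phone call, can authorize an MFA reset by itself.

```python
# Minimal sketch: an MFA reset requires independent checks on separate channels.
REQUIRED_CHECKS = {
    "callback_to_number_on_file",   # we call them, not the other way around
    "manager_approval",             # a second human signs off
    "id_verification",              # document or biometric check
}

def may_reset_mfa(completed_checks):
    # An inbound phone call, however convincing, satisfies none of these alone.
    return REQUIRED_CHECKS.issubset(completed_checks)

print(may_reset_mfa({"inbound_phone_call"}))                    # False: deny
print(may_reset_mfa(REQUIRED_CHECKS | {"inbound_phone_call"}))  # True: proceed
```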
Security teams must also monitor legitimate tools—like AnyDesk, TeamViewer, and ScreenConnect—that attackers often use once inside. These aren’t inherently malicious, but in the wrong hands, they’re devastating.
And above all, organizations must train their frontline personnel—especially support staff—to treat every identity request with healthy skepticism. If your instinct says something feels off, pause and verify through secure channels. Escalate. Slow down. Ask the questions attackers hope you won’t.
Scattered Spider doesn’t hack your servers. They hack your systems of trust. They bypass encryption by impersonating authority. And they exploit the one vulnerability no software can patch: assumption.
As we continue shifting toward remote work, outsourced IT, and cloud-based everything, the real threat isn’t technical—it’s personal. It’s the voice on the line. The urgent request. The person who “sounds right.”
In this world, cybersecurity isn’t just about what you build. It’s about what you believe—and what you’re willing to question.
So train your teams. Harden your protocols. And remember: in the age of the cloud, the most important firewall is still human.
Trust—but always verify!

The New Cold War Is No Longer a Theory—It’s Airborne

By Skeeter Wesinger
June 16, 2025

“The great conflicts of history do not always announce themselves with declarations of war. Sometimes they slip quietly onto a runway in the dead of night, transponders off.”

In an era of satellites, signal intelligence, and open-source surveillance, it’s rare for a global superpower to move undetected. So, when a Chinese cargo aircraft slipped silently into Iranian airspace, its transponder disabled and its mission classified, it wasn’t just a mystery—it was a message. A coded communiqué to Washington, to Tel Aviv, and to anyone else watching closely: The New Cold War is real, and the lines are being drawn.

No Longer Just Iran and Israel

The conflict that began as yet another volatile flashpoint between Iran and Israel is rapidly mutating. The sudden, unverified—but deeply credible—report of a Chinese aircraft secretly delivering “strategic cargo” to Tehran has thrown fuel on the already smoldering fire. The fact that the flight’s transponder was off is not just a technical note—it’s an act of deliberate concealment, a violation of international air protocol usually reserved for acts of war, espionage, or arms delivery.

In the old Cold War, the world was divided along a single axis: Washington versus Moscow. Today’s alignment is more fluid, but just as dangerous. It is no longer a two-player chessboard. It’s a three-dimensional battlefield of cyber proxies, energy corridors, and ideological spheres. And in that contest, China just stepped out of the shadows.

Why Would China Choose Now?

Timing is never accidental in geopolitics. This move comes just as U.S. and Israeli forces are executing airstrikes on Iranian infrastructure—strikes that have reportedly killed senior nuclear scientists and disabled key facilities in Natanz and Isfahan. By choosing this moment to intervene, however quietly, Beijing is not just signaling support for Iran. It is challenging the very architecture of Western deterrence.
And it is not unprecedented. For years, China has expanded its strategic partnerships in the Middle East through infrastructure projects, energy deals, and joint military exercises with both Iran and Saudi Arabia. But this is different. This is not diplomacy. This is movement of materiel under the cover of silence.

Who’s Taking Sides?

Like the proxy wars of the 20th century, the sides are forming—some loudly, others with calibrated ambiguity:

China is backing Iran quietly but unmistakably—through oil purchases, drone technology, cyber cooperation, and possibly now, arms delivery.
Russia, already aligned with Iran in Syria and hardened by its own war in Ukraine, is likely complicit or at least informed.
The United States, long Israel’s security patron, is being forced into a reactive posture—issuing vague warnings, watching red lines blur.
Israel, ever aggressive and cornered, has no margin for error. Its F-35 strikes and retaliatory doctrine may now risk wider war.

And then there are the others. The Gulf states, wary of Iran but weary of chaos. Turkey, straddling NATO ties and Eastern ambitions. The EU, whispering peace but unwilling to pay its price. Each is being pulled toward a pole of influence—either by oil, ideology, or the allure of protection.

What’s Being Delivered? And What’s at Stake?

We may never know exactly what that Chinese cargo plane carried. Was it missile components? Electronic warfare gear? A quantum-encrypted communications hub? Or perhaps something more symbolic—proof that the East is now willing to enter the Western sphere of influence not with trade, but with leverage.
And that is what this new Cold War is truly about: not territory, but control of the narrative, the infrastructure, and the future of power itself.
What’s emerging isn’t a singular confrontation, but a latticework of quiet escalations. A missile strike here. A silent aircraft there. An AI blackout in a foreign grid. The battlefield is now global—and often invisible.

Conclusion: A Shadow Conflict in Plain Sight

The old Cold War ended not with victory parades but with archives released years later. The new one may never declare itself openly. But it doesn’t need to.
When cargo planes fly dark into Tehran, when nuclear scientists are killed by hypersonic drones, and when world leaders speak of “territorial integrity” while flying weapons into contested zones, we are not watching peace unravel. We are watching a new order take shape—one where surveillance is constant, trust is rare, and the next flashpoint could arrive with a ping, not a bang.

As in the 1930s, the alliances are still forming, the weapons still being positioned. But history reminds us that by the time the first shot is noticed, the war has already begun.

Burning the Future: Why Waymo Robotaxis Are Being Targeted in Los Angeles

By Skeeter Wesinger
June 11, 2025

The future is burning in Los Angeles—and it’s driving itself into the flames.
In recent weeks, autonomous vehicles operated by Waymo, Alphabet’s self-driving subsidiary, have become a flashpoint in the city’s ongoing social unrest. What began as scattered protests against housing inequality and police overreach has turned sharply against the most visible emblem of Silicon Valley’s quiet conquest of urban life: the driverless car.
Waymo’s robotaxis—sleek, sensor-laden electric vehicles that glide through city streets with no one at the wheel—have been set on fire, spray-painted, disabled, and blocked. In some cases, protesters jumped on their hoods. In one instance, the vehicle’s lithium-ion battery ignited, blanketing an intersection in black smoke and toxic fumes. Five cars were torched in a single night near the Beverly Center. Waymo has since suspended service in key areas.
Why Waymo? Why now?

A Rolling Surveillance State
Part of the answer lies in optics. A Waymo car looks like what it is: a surveillance platform in motion. Packed with LiDAR, radar, and 360-degree cameras, each vehicle is effectively a roving sensor array collecting vast troves of visual and environmental data. Protesters increasingly believe that Waymo footage is being shared—or could be shared—with law enforcement. That makes the robotaxi a surveillance threat, especially in communities already skeptical of over-policing and state monitoring.
In an age when public space is contested ground, a driverless car is not just an anomaly—it’s a trespasser.

Automation as Class War
But the backlash isn’t only about privacy. For many in Los Angeles, Waymo represents something even more existential: job loss at the altar of automation.
The city’s economy still depends on tens of thousands of human drivers—Uber, Lyft, taxis, delivery vans, and commercial transport. Waymo’s expansion signals a not-so-distant future in which those workers are rendered obsolete. That future is arriving without public input, without protections, and with little concern for who gets left behind. The Teamsters and the LA County Federation of Labor have protested Waymo’s rollout since 2023. Their warnings are now finding a wider audience, and a louder voice.
If you’re looking for a symbol of job displacement and unaccountable tech governance, you won’t find a better target than a car that drives itself and costs drivers their livelihoods.

Tech as the Face of Gentrification
There’s also the unavoidable truth that Waymo vehicles are highly visible in neighborhoods already under pressure from gentrification. The sleek, whirring robotaxis feel alien, indifferent—like emissaries of a world that values efficiency over community, and sensors over people. For longtime residents, they are reminders of a city being hollowed out, algorithm by algorithm, until only the surface remains.
In this context, setting a Waymo car on fire is not just an act of destruction. It is a political statement.

Spectacle and Strategy
And then there’s the media effect. A burning Waymo is headline gold. It’s instantly legible as a rejection of Big Tech, of automation, of surveillance, of the inequality that comes when luxury innovation is layered on top of public neglect. Images of charred autonomous vehicles make the evening news, circulate on social media, and galvanize protestors elsewhere.
It’s not unlike what the Luddites did in the 19th century—targeting the machines that symbolized their displacement. Only now the machine drives itself and livestreams the revolution.

A Dangerous Road Ahead
Waymo’s executives are right to be concerned. What’s being targeted isn’t just a brand—it’s a future that many people were never asked to vote on. One where machines replace people, where public spaces are privately surveilled, and where “innovation” often means exclusion.
The destruction of these vehicles may be unlawful, but the message is clear: you can’t automate your way out of accountability.
Until the tech industry confronts this unrest not with PR statements but with real dialogue, real reform, and a real respect for the communities it drives through, the streets will remain dangerous, not just for Waymos but for any vision of the future that forgets the people in the present.

How AI is quietly taking over the consulting industry—from slide decks to strategy sessions.

By Skeeter Wesinger
June 10, 2025

Let’s say you’re the CEO of a Fortune 500 company. You’ve just paid McKinsey a few million dollars to help streamline your supply chain or finesse your M&A pitch. What you may not know is that some of that brainpower now comes from a machine.

McKinsey, Bain, and Boston Consulting Group—the Big Three of strategy consulting—have embraced artificial intelligence not just as a service they sell, but as a co-worker. At McKinsey, a proprietary AI platform now drafts proposals, generates PowerPoint decks, and even outlines market entry strategies. That used to be a junior analyst’s job. Now it’s done in seconds by software.

The firm insists this is progress, not replacement. “Our people will be doing the things that are more valuable to our clients,” a McKinsey spokesperson told the Financial Times.¹ It’s the kind of line that sounds better in a press release than in a staff meeting.

Meanwhile, Bain & Company has rolled out a custom chat interface powered by OpenAI.² It’s more than just a chatbot—it’s a digital consigliere that surfaces insights, runs simulations, and drafts client memos with GPT-powered fluency. Over at Boston Consulting Group, AI-driven engagements already make up 20 percent of the firm’s total revenue.³ That’s not a rounding error—it’s a shift in the business model.

This Isn’t Just Efficiency. It’s a Redefinition.

AI doesn’t sleep, bill overtime, or ask for a promotion. It digests case studies, slurps up real-time market data, and spins out “insights” at breakneck speed. A proposal that once took two weeks now gets turned around in two hours. A slide deck that required a team of Ivy Leaguers is built by algorithms trained on millions of prior decks.

That’s the efficiency part. But the real story is what happens next.

Strategy consulting has always sold scarcity—the idea that elite firms offered unique, human insight. But what happens when AI systems trained on decades of reports can replicate that thinking, and maybe even improve on it?

“Empathy,” the firms say. “Judgment.” “Relationship building.” Those are the buzzwords that now define human value in consulting. If the machine can do the math, the humans must do the trust. It’s a plausible pivot—until clients bring their own AI to the table.

The Consultants Are Pivoting—Fast

McKinsey and its rivals aren’t fighting the change—they’re monetizing it. They’re building internal tools while also selling AI implementation strategies to clients. In effect, they’re profiting twice: first by automating their own work, then by teaching others how to do the same.

This is the classic consulting playbook—turn a threat into a line item.

But beneath the slideware optimism is an existential question. If your AI builds the deck, drafts the strategy, and even suggests the pricing model, what exactly are you buying from a consultant?

Maybe it’s still the name on the invoice. Maybe it’s the assurance that someone—some human—stands behind the recommendation. Or maybe, just maybe, it’s the beginning of a new normal: where the smartest person in the room isn’t a person at all.

Citations

  1. Mark Marcellis, “McKinsey’s AI Revolution Has Begun,” Financial Times, May 29, 2025. https://www.ft.com/content/mckinsey-ai-presentation-tools
  2. Derek Thompson, “How Bain Is Using OpenAI to Redefine Consulting,” The Atlantic, March 12, 2025. https://www.theatlantic.com/technology/bain-openai-strategy
  3. David Gelles, “At BCG, AI Consulting Now Drives 20% of Revenue,” The New York Times, April 10, 2025. https://www.nytimes.com/business/bcg-ai-revenue-growth