How a $400,000 lobster theft exposed the hidden security gaps in modern logistics

 

By Skeeter Wesinger

January 5, 2026

 

Earlier this month, thieves made off with roughly $400,000 worth of lobster from a Massachusetts facility. The seafood was en route to Costco locations in the Midwest. Instead, it became the prize in a carefully staged deception that blended cyber impersonation, procedural blind spots, and physical-world confidence tricks.

This was not a smash-and-grab. It was a systems failure.

The operation began quietly, with an altered email domain that closely resembled that of a legitimate trucking company. To most humans—and most workflows—that was enough. The email looked right, sounded right, and fit neatly into an existing logistics conversation. No servers were hacked. No passwords were cracked. The attackers didn’t break in; they were let in.

Modern organizations often believe that email authentication technologies protect them from impersonation. They do not. Tools like SPF, DKIM, and DMARC can verify that a message truly came from a domain, but they cannot tell you whether it came from the right one. The gap between technical validation and human trust remains wide, and that gap was the attackers’ point of entry.
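
To make that gap concrete, here is a minimal sketch of the check the authentication stack does not perform for you: comparing a sender's domain against the carriers you actually do business with, and flagging near-misses. The domains, vendor list, and similarity threshold below are all hypothetical.

```python
# Minimal sketch: SPF/DKIM/DMARC can confirm a message really came from
# the domain it claims; they cannot tell you whether that domain is the
# carrier you hired. That check is yours. All names are hypothetical.

from difflib import SequenceMatcher

APPROVED_CARRIER_DOMAINS = {"acmefreight.com", "northstartrucking.com"}

def sender_domain(address: str) -> str:
    """Extract the domain portion of an email address."""
    return address.rsplit("@", 1)[-1].lower()

def check_sender(address: str, threshold: float = 0.75) -> str:
    domain = sender_domain(address)
    if domain in APPROVED_CARRIER_DOMAINS:
        return "ok: approved carrier domain"
    # A near-match to an approved domain is more suspicious than a
    # stranger: it suggests deliberate impersonation.
    for approved in APPROVED_CARRIER_DOMAINS:
        if SequenceMatcher(None, domain, approved).ratio() >= threshold:
            return f"HOLD: lookalike of {approved}, verify out of band"
    return "HOLD: unknown domain, treat as unverified"

print(check_sender("dispatch@acmefreight-dispatch.com"))  # flagged as lookalike
print(check_sender("dispatch@acmefreight.com"))           # ok
```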

Once inside the conversation, the criminals did what sophisticated attackers always do: they followed the process. They presented themselves as the selected carrier, responded on time, and matched expectations. Crucially, no one stopped to verify the change using a trusted, out-of-band channel—no phone call to a number already on file, no portal confirmation, no secondary check. The digital impersonation slid smoothly into operational reality.
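
That verification is cheap to make mechanical. Here is a minimal sketch, with hypothetical names and numbers, of the rule a scheduling workflow could enforce: a carrier change is never confirmed through contact details that arrived in the same email thread that requested it.

```python
# Minimal sketch of out-of-band confirmation for a carrier change.
# Rule: confirm changes only through contact details captured at vendor
# onboarding, never through details supplied by the requesting email.
# All names and numbers here are hypothetical.

ON_FILE_CONTACTS = {
    "acmefreight.com": "+1-555-0100",  # recorded at onboarding, not from email
}

def handle_carrier_change(order_id: str, carrier_domain: str,
                          confirmed_out_of_band: bool) -> str:
    if carrier_domain not in ON_FILE_CONTACTS:
        return f"{order_id}: HOLD, carrier not in the vendor directory"
    if not confirmed_out_of_band:
        phone = ON_FILE_CONTACTS[carrier_domain]
        return f"{order_id}: HOLD, confirm by calling {phone} (number on file)"
    return f"{order_id}: carrier change approved"

print(handle_carrier_change("PO-88231", "acmefreight.com",
                            confirmed_out_of_band=False))
```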

The real turning point came at the loading dock. A tractor-trailer arrived bearing the branding of the legitimate company. The drivers carried paperwork and commercial licenses convincing enough to pass a quick inspection. Faced with routine procedures and time pressure, facility staff released the shipment. In that moment, digital deception became physical authorization.

This is where the incident stops being about phishing and starts being about trust. Visual cues—logos, uniforms, familiar names—still function as de facto security controls in high-value logistics. They are also trivial to counterfeit. Without a strong shared secret, such as a one-time pickup code or independently issued authorization token, the chain of custody rests on appearances.
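
A shared secret does not require heavy infrastructure. Here is a minimal sketch of a per-shipment pickup code derived with an HMAC and checked at the dock; the secret and shipment IDs are hypothetical, and a real deployment would add expiry and rotation.

```python
# Minimal sketch of a one-time pickup code: the scheduler derives a short
# code per shipment and sends it to the verified carrier out of band;
# the dock recomputes it and compares. Secret and IDs are hypothetical.

import hashlib
import hmac

SITE_SECRET = b"rotate-me-and-keep-me-off-email"  # shared scheduler <-> dock

def pickup_code(shipment_id: str) -> str:
    digest = hmac.new(SITE_SECRET, shipment_id.encode(), hashlib.sha256)
    return digest.hexdigest()[:8].upper()  # short enough to read over a phone

def verify_at_dock(shipment_id: str, presented_code: str) -> bool:
    return hmac.compare_digest(pickup_code(shipment_id), presented_code.upper())

code = pickup_code("SHIP-2025-0147")
print(code)                                          # an 8-character code
print(verify_at_dock("SHIP-2025-0147", code))        # True
print(verify_at_dock("SHIP-2025-0147", "DEADBEEF"))  # False
```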

After the truck departed, the final safeguards failed just as quietly. GPS trackers were disabled, and their sudden silence did not trigger an immediate, decisive response. In security terms, there was no deadman switch. When telemetry went dark, escalation was not automatic. By the time uncertainty turned into alarm, the window for recovery had likely closed.
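
A deadman switch inverts the default: silence itself is the alarm. A minimal sketch of what that could look like for shipment telemetry, with hypothetical thresholds and a print statement standing in for a real escalation path:

```python
# Minimal sketch of a deadman switch for shipment telemetry: the default
# action on silence is escalation, not patience. Thresholds are hypothetical.

import time

PING_INTERVAL_S = 60  # expected GPS report cadence
GRACE_MISSES = 3      # tolerate brief dropouts (tunnels, dead zones)

class DeadmanSwitch:
    def __init__(self, shipment_id: str):
        self.shipment_id = shipment_id
        self.last_ping = time.monotonic()

    def ping(self) -> None:
        """Called whenever a GPS report arrives."""
        self.last_ping = time.monotonic()

    def check(self) -> None:
        """Called periodically by a scheduler; escalates on silence."""
        silent_for = time.monotonic() - self.last_ping
        if silent_for > PING_INTERVAL_S * GRACE_MISSES:
            self.escalate(silent_for)

    def escalate(self, silent_for: float) -> None:
        # In production this would page dispatch and notify recovery
        # contacts immediately; the first hour is the recovery window.
        print(f"ALERT {self.shipment_id}: telemetry dark for {silent_for:.0f}s")

dms = DeadmanSwitch("SHIP-2025-0147")
dms.check()  # quiet for now; a scheduler would call this every minute
```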

Logistics theft experts know this pattern well. The first hour after a diversion is decisive. Organized theft rings plan around confusion, delayed verification, and fragmented responsibility. Their confidence suggests experience, not luck.

The CEO of Rexing Cos., the logistics firm coordinating the shipment, later described the crime as “very sophisticated” and attributed it to a large criminal organization. That assessment aligns with the evidence. This was not a crime of opportunity. It was a repeatable playbook executed by people who understood how modern supply chains actually operate—not how they are diagrammed.

The most unsettling lesson of the lobster theft is that no single system failed catastrophically. Email worked. Scheduling worked. Dock operations worked. Tracking existed. Each layer functioned more or less as designed. The failure emerged in the seams between them.

Security professionals often say that attackers don’t exploit systems; they exploit assumptions. This incident is a case study in that truth. Every handoff assumed the previous step had already done the hard work of verification. Each trust decision compounded the last until six figures’ worth of cargo rolled away under false pretenses. As President Reagan put it, borrowing the Russian proverb “Doveryay, no proveryay”: trust, but verify.

As supply chains become more digitized and more automated, it is tempting to treat logistics as paperwork and coordination rather than as critical identity infrastructure. This theft demonstrates the cost of that assumption. High-value goods move through a chain of identities—domains, vendors, drivers, vehicles—and each identity must be independently verified, not inferred.

The lobster didn’t disappear because the system was weak. It disappeared because the system was polite.

AWS and Startups: The Difference Between Support and Reality
By Skeeter Wesinger
December 27, 2025

Amazon Web Services presents itself as one of the great enablers of modern entrepreneurship. Its startup messaging promises speed, affordability, mentorship, and a clear path from idea to scale. Millions of companies, we’re told, build on AWS because it helps them innovate faster, keep costs low, and prove what is possible.

All of that is true—just not in the way most founders assume.

AWS is not a startup partner. It is a world-class infrastructure utility with a startup-friendly on-ramp. Confusing those two things is where disappointment begins.

The startup ecosystem AWS advertises does exist. There are former founders, CTOs, venture capitalists, and mentors inside the organization. But access to that expertise is neither automatic nor evenly distributed. For most early-stage founders, the lived AWS experience consists of documentation, support tickets, and account managers whose incentives are aligned with usage growth, not startup survival. The ecosystem is real, but it is gated—and those gates usually open only after external validation has already occurred.

Generative AI follows the same pattern. AWS encourages startups to innovate quickly using managed services and foundation models, reducing the operational burden of building AI systems from scratch. This is genuinely useful. It is also strategically convenient. AWS benefits when startups adopt its abstractions early, because abstraction is how lock-in begins. Pricing complexity, usage opacity, and scaling surprises tend to reveal themselves only after a product starts working—precisely when switching costs are highest.

Programs like AWS Activate are often cited as evidence of AWS’s commitment to founders. Credits, technical support, and mentorship can meaningfully accelerate early experimentation. But credits do not change fundamentals. They delay cost reality; they do not remove it. For infrastructure-heavy startups—particularly those using GPUs, data pipelines, or real-time systems—credits can evaporate in weeks. When they do, the company is left facing enterprise-grade pricing without enterprise-grade revenue.
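
The runway math is worth running explicitly. A back-of-the-envelope sketch follows; every number in it is a hypothetical placeholder, not a quoted AWS price.

```python
# Back-of-the-envelope credit runway for a GPU-heavy startup.
# All rates are hypothetical placeholders; substitute your own quotes.

credits_usd = 100_000     # an assumed Activate-style credit package
gpu_hourly_usd = 30.0     # assumed cost of one multi-GPU training node
nodes = 2                 # assumed steady training/inference footprint
hours_per_month = 730

monthly_burn = gpu_hourly_usd * nodes * hours_per_month
runway_months = credits_usd / monthly_burn
print(f"burn ~ ${monthly_burn:,.0f}/month, runway ~ {runway_months:.1f} months")
# burn ~ $43,800/month, runway ~ 2.3 months
```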

Go-to-market support is perhaps the most misunderstood promise of all. Co-selling with AWS is possible, but it is not designed for early startups. It favors companies with reference customers, repeatable sales motions, and offerings that align cleanly with existing account strategies. In practice, partners are incentivized to sell their own services, not to shepherd unproven products into the market. Distribution exists—but usually only after distribution is no longer the primary problem.

None of this makes AWS deceptive. It makes AWS exactly what it is: a highly efficient, globally scaled infrastructure provider. AWS does not exist to reduce founder risk. It exists to provide reliable, metered access to computing resources—and it does that exceptionally well.

The danger lies in mistaking capability for commitment.

AWS will help you build faster. It will not help you decide what to build. It will help you scale globally. It will not help you survive the transition from prototype to revenue. It will let you fail quickly and at scale—but the bill will still arrive.

For experienced builders, this distinction matters. Startups that treat AWS like electricity—necessary, powerful, and expensive if misused—tend to make better decisions than those who treat it like a mentor or partner. Infrastructure accelerates outcomes; it does not improve judgment.

AWS’s startup narrative is written for investors, accelerators, and press releases. The reality is written in CloudWatch logs, cost-explorer dashboards, and late-night architecture decisions. Founders would be better served by understanding that difference early. AWS is handing out popsicles, not meals—knowing most will melt long before they ever create a mess. Only those that survive the heat earn a seat at the table.

What Is Inference in Artificial Intelligence?

By Skeeter Wesinger

December 25, 2025

When people talk about artificial intelligence, they often focus on training—the phase where a model learns from large amounts of data. But training is only preparation. The real work of AI happens later, during a phase called inference.

Inference is what occurs after an AI model has already been trained. At this point, the model is no longer learning. Instead, it is using what it has learned to make decisions, predictions, or generate outputs based on new information.

Think of training as education and inference as employment. A student may spend years learning mathematics, but once they become an engineer, they are no longer studying calculus textbooks. They are applying what they already know to solve real problems. Inference is that moment of application.

During inference, the internal structure of the model—its learned parameters—remains fixed. New data is fed in, the model processes it, and an output is produced. That output might be a medical diagnosis, a fraud risk score, a translated sentence, or a 3D reconstruction. The model does not adjust itself in the process. It simply executes.
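
The contrast shows up plainly in code. Here is a minimal sketch of inference with a toy two-class model; the weights are arbitrary stand-ins for whatever training would have produced.

```python
# Minimal sketch of inference: parameters are fixed, new input flows
# through, an output comes out. No gradients, no updates.
# The weights below are arbitrary stand-ins for a trained model's values.

import numpy as np

# "Trained" parameters, frozen at deployment time.
W = np.array([[0.8, -0.3],
              [0.1,  0.5]])
b = np.array([0.05, -0.02])

def infer(x: np.ndarray) -> np.ndarray:
    """One forward pass: the model executes, it does not learn."""
    logits = W @ x + b
    return np.exp(logits) / np.exp(logits).sum()  # softmax over two classes

print(infer(np.array([1.0, 2.0])))  # roughly [0.304 0.696]: an answer, not an update
```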

This distinction matters because inference is where artificial intelligence actually touches the real world. It is where speed, reliability, and cost become critical. A system that took weeks to train may be expected to produce answers in milliseconds during inference. In many applications, especially in healthcare, finance, manufacturing, and defense, inference is the product.

Most modern AI systems spend the vast majority of their operational life performing inference. Training may happen once or occasionally, but inference runs continuously. Every search result, recommendation, image recognition, or automated decision depends on inference happening correctly and efficiently.

As AI systems scale, the challenge is no longer just how to train better models, but how to deploy them so inference is fast, affordable, and dependable. This is why inference infrastructure—hardware, software, and architecture—has become as important as the models themselves.

In short, training teaches an AI model how to think. Inference is how that thinking is put to work.

The $9.2 Million Warning: Why 2025 Will Punish Companies That Ignore AI Governance

By R. Skeeter Wesinger
(Inventor & Systems Architect | 33 U.S. Patents | MA)

November 3, 2025

When artificial intelligence began sweeping through boardrooms in the early 2020s, it was sold as the ultimate accelerator. Every company wanted in. Chatbots turned into assistants, copilots wrote code, and predictive models started making calls that once required senior analysts. The pace was breathtaking. The oversight, however, was not.

Now, in 2025, the consequences of that imbalance are becoming painfully clear. Across the Fortune 1000, AI-related compliance and security failures are costing an average of $9.2 million per incident—money spent on fines, investigations, recovery, and rebuilding trust. It’s a staggering number that reveals an uncomfortable truth: the age of ungoverned AI is ending, and the regulators have arrived.

For years, companies treated AI governance as a future concern, a conversation for ethics committees and think tanks. But the future showed up early. The European Union’s AI Act has set the global tone, requiring documentation, transparency, and human oversight for high-risk systems. In the United States, the Federal Trade Commission, the Securities and Exchange Commission, and several state legislatures are following suit, with fines that can reach a million dollars per violation.

The problem is not simply regulation—it’s the absence of internal discipline. IBM’s 2025 Cost of a Data Breach Report found that 13 percent of organizations had already experienced a breach involving AI systems. Of those, 97 percent lacked proper access controls. That means almost every AI-related breach could have been prevented with basic governance.

The most common culprit is what security professionals call “shadow AI”: unapproved, unsupervised models or tools running inside companies without formal review. An analyst feeding customer data into an online chatbot, a developer fine-tuning an open-source model on sensitive code, a marketing team using third-party APIs to segment clients—each one introduces unseen risk. When something goes wrong, the result isn’t just a data spill but a governance black hole. Nobody knows what model was used, what data it touched, or who had access.

IBM’s data shows that organizations hit by shadow-AI incidents paid roughly $670,000 more per breach than those with well-managed systems. The real cost, though, is the time lost to confusion: recreating logs, explaining decisions, and attempting to reconstruct the chain of events. By the time the lawyers and auditors are done, an eight-figure price tag no longer looks far-fetched.

The rise in financial exposure has forced executives to rethink the purpose of governance itself. It’s not red tape; it’s architecture. A strong AI governance framework lays out clear policies for data use, accountability, and human oversight. It inventories every model in production, documents who owns it, and tracks how it learns. It defines testing, access, and audit trails, so that when the inevitable questions come—Why did the model do this? Who approved it?—the answers already exist.
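
In practice, that inventory can start as one record per model. A minimal sketch, with illustrative field names rather than any formal standard:

```python
# Minimal sketch of a model inventory entry: the artifact that lets you
# answer "what model, what data, who approved it" after an incident.
# Field names are illustrative, not drawn from any particular standard.

from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ModelRecord:
    model_id: str
    owner: str                  # a named human, not a team alias
    training_data_sources: list # provenance of everything it learned from
    approved_by: str
    risk_tier: str              # e.g. "high" triggers human oversight
    audit_log: list = field(default_factory=list)

    def log(self, event: str) -> None:
        stamp = datetime.now(timezone.utc).isoformat()
        self.audit_log.append(f"{stamp} {event}")

record = ModelRecord("credit-scoring-v3", owner="j.doe",
                     training_data_sources=["loans_2019_2024"],
                     approved_by="model-risk-committee", risk_tier="high")
record.log("deployed to production")
```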

This kind of structure doesn’t slow innovation; it enables it. In finance, healthcare, and defense—the sectors most familiar to me—AI governance is quickly becoming a competitive advantage. Banks that can demonstrate model transparency get regulatory clearance faster. Hospitals that audit their algorithms for bias build stronger patient trust. Defense contractors who can trace training data back to source win contracts others can’t even bid for. Governance, in other words, isn’t the opposite of agility; it’s how agility survives scale.

History offers a pattern. Every transformative technology—railroads, electricity, the internet—has moved through the same cycle: unrestrained expansion followed by an era of rules and standards. The organizations that thrive through that correction are always the ones that built internal discipline before it was enforced from outside. AI is no different. What we’re witnessing now is the transition from freedom to accountability, and the market will reward those who adapt early.

The $9.2 million statistic is less a headline than a warning. It tells us that AI is no longer a side project or a pilot experiment—it’s a liability vector, one that demands the same rigor as financial reporting or cybersecurity. The companies that understand this will govern their algorithms as seriously as they govern their balance sheets. The ones that don’t will find governance arriving in the form of subpoenas and settlements.

The lesson is as old as engineering itself: systems fail not from lack of power, but from lack of control. AI governance is that control. It’s the difference between a tool that scales and a crisis that compounds. In 2025, the smartest move any enterprise can make is to bring its intelligence systems under the same discipline that made its business succeed in the first place. Govern your AI—before it governs you.

Mars
By Skeeter Wesinger
October 19, 2025

In the photograph, a reddish cliff face rises out of a wind-scoured plain, the strata climbing diagonally across the frame like the pages of a tilted book. At its center, a dark rectangular recess interrupts the rhythm—a void whose walls drop almost perfectly vertical, so clean they seem carved. The geometry jars against the slanting erosion that surrounds it. It looks like a doorway, but it is something subtler: the record of two ancient forces meeting at right angles.

The angled beds tell the story of sediment settling in a different age, each layer a quiet deposit of dust or silt hardened by time. Long after those layers were set, internal stresses fractured the rock, opening vertical joints through the mass. When the wind came, armed with its endless cargo of sand, it abraded the soft laminae along the bedding planes while also widening the fractures. Eventually, at one of those intersections, a single block detached—a neat, rectangular absence born of simple physics.

Professor Ron Lyons used to pause at outcrops like this and trace his hand along the stone. “Erosion isn’t chaos,” he’d say. “It’s the handwriting of structure.” He taught that every slope and shadow has a grammar: bedding dictating rhythm, joints providing syntax, time composing the sentence. What appears deliberate is often just the rock yielding to those rules.

Seen in that light, the “doorway” in this image is not a mystery but a manuscript. The slanted lines and vertical void tell the same story written in two dialects of stone—deposition and fracture, order and stress. The illusion of architecture dissolves, leaving behind something grander: evidence of the planet’s quiet labor, and the enduring truth of Lyons’s lesson—that when form and force intersect, even nature writes in straight lines.

The Martian photograph captures that paradox perfectly. The angled erosion tells one history; the vertical recess, another. Together they create a scene that feels deliberate, almost architectural, even though it is entirely natural. It is a reminder that geology often mimics geometry—and that a careful eye, honed by years in the field, can see both the form and the forces that shaped it.

The Eye That Sees

Photo taken by the Mars Global Surveyor

I lingered on that image longer than I should have, tracing each line and shadow as if they might finally confess intent. The opening stood there in quiet defiance, its vertical symmetry cutting through tilted beds like a thought interrupting memory. Below it stretched a pale bench of stone, smoothed by time until it looked almost paved—a terrace where the wind might pause to rest. And to the right, half-buried in shadow, the likeness of a turtle’s jaw cupped a single round geode, the size of a bowling ball. The sphere fit so perfectly in the hollow that it seemed placed there, the mineral heart cradled in a stony mouth.

Coincidence, of course. Yet it was the kind of coincidence that stops the rational mind cold. Professor Lyons used to warn against dismissing such moments too quickly. “Nature,” he’d say, “has a way of rehearsing intention.” His point wasn’t mystical—it was observational. Given enough time, a pattern emerges from chance, and chance carves a pattern in return. The geode had grown in darkness, its concentric layers forming in a quiet chemical patience; the cavity that held it had been shaped by wind, fracture, and the long attrition of dust. Two independent stories that happened, by grace or gravity, to end in the same frame.

Standing before a scene like that—even in a photograph from another world—you begin to understand why geologists so often speak like poets. The land writes in structure and syntax, and every so often it composes a line that feels intentional. The “doorway,” the “walkway,” the “turtle jaw”—these are our names for those brief alignments between the mind’s geometry and the planet’s indifference. They’re not evidence of design, only echoes of recognition.

Lyons used to trace his hand across such shapes and say, “Don’t worship the form. Read the process.” I think of that often now. The Mars photograph, for all its haunting symmetry, is not an artifact but a sentence: a statement written in bedding and fracture, in stress and repose. The rectangular recess and the curved geode are both parts of the same grammar—the meeting of right angles and circles, force and resistance, accident and alignment.

And that may be the real story here. We study rocks to understand time, but sometimes, they look back at us in perfect composition, as if time itself were trying to understand us.

Before Hollywood learned to animate pixels, Silicon Valley learned to animate light. The first dreamers weren’t directors — they were designers and engineers who turned math into motion, built the machines behind Jurassic Park and Toy Story, and taught computers to imagine. Now, those same roots are fueling a new frontier — AI video and generative storytelling.

By Skeeter Wesinger

October 8, 2025

Silicon Valley is best known for chips, code, and capital. Yet long before the first social network or smartphone, it was quietly building a very different kind of future: one made not of transistors and spreadsheets, but of light, motion, and dreams. Out of a few square miles of industrial parks and lab benches came the hardware and software that would transform Hollywood and the entire art of animation. What began as an engineering problem—how to make a computer draw—became one of the most profound creative revolutions of the modern age.

In the 1970s, the Valley was an ecosystem of chipmakers and electrical engineers. Intel and AMD were designing ever smaller, faster processors, competing to make silicon think. Fairchild, National Semiconductor, and Motorola advanced fabrication and logic design, while Stanford’s computer science labs experimented with computer graphics, attempting to render three-dimensional images on oscilloscopes and CRTs. There was no talk yet of Pixar or visual effects. The language was physics, not film. But the engineers were laying the groundwork for a world in which pictures could be computed rather than photographed.

The company that fused those worlds was Silicon Graphics Inc., founded in 1982 by Jim Clark in Mountain View. SGI built high-performance workstations optimized for three-dimensional graphics, using its own MIPS processors and hardware pipelines that could move millions of polygons per second—unheard of at the time. Its engineers created OpenGL, the standard that still underlies most 3D visualization and gaming. In a sense, SGI gave the world its first visual supercomputers. And almost overnight, filmmakers discovered that these machines could conjure scenes that could never be shot with a camera.

Industrial Light & Magic, George Lucas’s special-effects division, was among the first. Using SGI systems, ILM rendered the shimmering pseudopod of The Abyss in 1989, the liquid-metal T-1000 in Terminator 2 two years later, and the dinosaurs of Jurassic Park in 1993. Each of those breakthroughs marked a moment when audiences realized that digital images could be not just convincing but alive. Farther north, the small research group that would become Pixar was using SGI machines to render Luxo Jr. and eventually Toy Story, the first fully computer-animated feature film. In Redwood City, Pacific Data Images created the iconic HBO “space logo,” a gleaming emblem that introduced millions of viewers to the look of digital cinema. All of it—the logos, the morphing faces, the prehistoric beasts—was running on SGI’s hardware.

The partnership between Silicon Valley and Hollywood wasn’t simply commercial; it was cultural. SGI engineers treated graphics as a scientific frontier, not a special effect. Artists, in turn, learned to think like programmers. Out of that hybrid came a new creative species: the technical director, equal parts physicist and painter, writing code to simulate smoke or hair or sunlight. The language of animation became mathematical, and mathematics became expressive. The Valley had turned rendering into an art form.

When SGI faltered in the late 1990s, its vision carried outward. Jensen Huang, Curtis Priem, and Chris Malachowsky had already founded Nvidia in 1993 to shrink the power of those million-dollar workstations onto a single affordable board. Their invention of the graphics processing unit, or GPU, democratized what SGI had pioneered. Gary Tarolli left SGI to co-found 3dfx, whose Voodoo chips brought 3D rendering to the mass market. Jim Clark, SGI’s founder, went on to co-found Netscape, igniting the web era. Others formed Keyhole, whose Earth-rendering engine became Google Earth. Alias|Wavefront, once owned by SGI, developed Maya, now an Autodesk product and still the industry standard for 3D animation. What began as a handful of graphics labs had by the millennium become a global ecosystem spanning entertainment, design, and data visualization.

Meanwhile, Nvidia’s GPUs kept growing more powerful, and something extraordinary happened: the math that drew polygons turned out to be the same math that drives artificial intelligence. The parallel architecture built for rendering light and shadow was ideally suited to training neural networks. What once simulated dinosaurs now trains large language models. The evolution from SGI’s RealityEngine to Nvidia’s Tensor Core is part of the same lineage—only the subject has shifted from geometry to cognition.

Adobe and Autodesk played parallel roles, transforming these once-elite tools into instruments for everyday creators. Photoshop and After Effects made compositing and motion graphics accessible to independent artists. Maya brought professional 3D modeling to personal computers. The revolution that began in a few Valley clean rooms became a global vocabulary. The look of modern media—from film and television to advertising and gaming—emerged from that convergence of software and silicon.

Today, the next revolution is already underway, and again it’s powered by Silicon Valley hardware. Platforms like Runway, Pika Labs, Luma AI, and Kaiber are building text-to-video systems that generate entire animated sequences from written prompts. Their models run on Nvidia GPUs, descendants of SGI’s original vision of parallel graphics computing. Diffusion networks and generative adversarial systems use statistical inference instead of keyframes, but conceptually they’re doing the same thing: constructing light and form from numbers. The pipeline that once connected a storyboard to a render farm now loops through a neural net.

This new era blurs the line between animator and algorithm. A single creator can describe a scene and watch it materialize in seconds. The tools that once required teams of engineers are being distilled into conversational interfaces. Just as the SGI workstation liberated filmmakers from physical sets, AI generation is liberating them from even the constraints of modeling and rigging. The medium of animation—once defined by patience and precision—is becoming instantaneous, fluid, and infinitely adaptive.

Silicon Valley didn’t just make Hollywood more efficient; it rewrote its language. It taught cinema to think computationally, to treat imagery as data. From the first frame buffers to today’s diffusion models, the through-line is clear: each leap in hardware has unlocked a new kind of artistic expression. The transistor enabled the pixel. The pixel enabled the frame. The GPU enabled intelligence. And now intelligence itself is becoming the new camera.

What began as a handful of chip engineers trying to visualize equations ended up transforming the world’s most powerful storytelling medium. The Valley’s real export wasn’t microchips or startups—it was imagination, made executable. The glow of every rendered frame, from Toy Story to the latest AI-generated short film, is a reflection of that heritage. In the end, Silicon Valley didn’t just build the machines of computation. It taught them how to dream.

The Power Law of Mediocrity: Confessions from the Belly of the VC Beast

By Skeeter Wesinger

October 6, 2025

We all read the headlines. They hit our inboxes every week: some fresh-faced kid drops out of Stanford, starts a company in his apartment, lands millions from a “top-tier” VC, and—poof—it’s a billion-dollar exit three years later. We’re force-fed the kombucha, SXSW platitudes, and “Disruptor of the Year” awards.

The public narrative of venture capital is that of a heroic journey: visionary geniuses striking gold, a thrilling testament to the idea that with enough grit, hustle, and a conveniently privileged network, anyone can build a unicorn. It’s the Disney version of capitalism—“anyone can be a chef,” as in Ratatouille—except this kitchen serves valuations, not ratatouille.

And it’s all a delightful, meticulously crafted fabrication by PR mavens, institutional LPs, and valuation alchemists who discovered long ago that perception is liquidity.

The truth is far less cinematic. Venture capital isn’t a visionary’s playground—it’s a casino, and the house always wins. Lawyers, bankers, and VCs take their rake whether the founders strike it rich or flame out in a spectacular implosion. The real magic isn’t in finding winners; it’s in convincing everyone, especially limited partners and the next crop of naive founders, that every single bet is a winner in the making. And in the current AI gold rush, this narrative isn’t just intoxicating—it’s practically an MDMA-induced hallucination set to a soundtrack of buzzwords and TED-ready hyperbole.

Full disclosure: I’ve been on both sides of that table—VC and angel investor, and founder. So consider this less a critique and more a confession, or perhaps karmic cleansing, from someone who has seen the sausage made and lived to regret the recipe.

The Power Law of Mediocrity

The first and most inconvenient truth? Venture capital isn’t about hitting singles and doubles—it’s about swinging for the fences while knowing, with absolute certainty, that you’ll strike out 90 percent of the time.

Academic data puts it plainly: roughly 75 percent of venture-backed startups never return significant cash to their investors. A typical fund might back ten companies—four will fail outright, four will limp to mediocrity, and one or two might generate a real return. Of those, maybe one breaks double-digit multiples.

And yet, the myth persists. Why? Because returns follow a power law, not a bell curve. A single breakout win papers over nine corpses. The median VC fund barely outperforms the S&P 500, but the top decile—those with one or two unicorns—create the illusion of genius. In truth, it’s statistical noise dressed up as foresight.

The Devil in the Cap Table

Not all angels have halos. Some of them carry pitchforks.

I call them “Devil Investors.” They arrive smiling, armed with mentorship talk and a check just large enough to seem life-changing. Then, once the ink dries, they sit you down and explain “how the real world works.” That’s when the charm evaporates. Clauses appear like tripwires—liquidation preferences, ratchets, veto rights. What looked like partnership becomes ownership.

These are the quiet tragedies of the startup world: founders who lose not only their companies but their sense of agency, their belief that vision could trump capital. Venture capital thrives on asymmetry—of information, of power, of options.

So no, I don’t feel bad when VCs get hoodwinked. They’ve built an empire on the backs of the optimistic, the overworked, and the under-represented. When a fund loses money because it failed to do due diligence, that’s not misfortune—that’s karma.

For every VC who shrugs off a loss as “portfolio churn,” there’s a founder who’s lost years, health, and ownership of the very thing they built. The VC walks away with a management fee and another fund to raise. The founder walks away with debt and burnout.

The Great AI Hallucination

If the 2010s were about social apps and scooters, the 2020s are about AI euphoria. Every week, another “AI-powered” startup raises $50 million for a product that doesn’t exist, can’t scale, and often relies entirely on someone else’s model.

It’s déjà vu for anyone who remembers the dot-com bubble—companies worth billions on paper, zero on the balance sheet. But in this era, the illusion has new fuel: the hype multiplier of media and the self-referential feedback loops of venture circles. Valuation becomes validation. Paper gains become gospel.

In private, partners admit the math doesn’t add up. In public, they double down on buzzwords: foundational models, RAG pipelines, synthetic data moats. They don’t have to be right—they just have to be first, loud, and liquid enough to raise Fund IV before Fund III collapses.

The House Always Wins

The cruel beauty of venture capital is that even when the bets go bad, the system pays its insiders. Management fees—usually 2 percent of committed capital—keep the lights on. Carried interest, when a unicorn hits, covers a decade of misses. It’s a model designed to appear risky while transferring the risk onto everyone else.

Founders risk their sanity, employees their weekends, and LPs their patience. The VC? He risks his reputation—which, in this industry, can always be rebranded.

A Confession, Not a Complaint

I say all this not as an outsider looking in but as someone who once believed the myth—that innovation needed gatekeepers, that disruption was noble, that capital was somehow creative. I’ve seen brilliant ideas die not for lack of ingenuity but for lack of political capital in a partner meeting.

Venture capital has produced miracles—no question. But for every transformative success, there are hundreds of broken dreams swept quietly into the footnotes of fund reports.

Pulling Back the Curtain

The next time you read about a wunderkind founder and their dazzling valuation, remember: you’re seeing the show, not the spreadsheet. Behind the curtain lies an industry that’s part casino, part cult, and wholly addicted to the illusion of inevitability.

Because in venture capital, the product isn’t innovation.
It’s a belief—and belief, conveniently, can be marked up every quarter.

By Skeeter Wesinger

September 18, 2025

Are you in technology and job hunting? HR screens resumes like they’re ordering a pizza: “CISSP? Check. Kubernetes? Check. PCI 4.0? Check.”

The problem is, they can’t tell the difference between someone who follows procedures, someone who designs systems, or the person who literally built the technology itself. You could have authored patents in firewalls and encryption — and still get passed over because “AWS” wasn’t on line one of your résumé. That’s not just a miss; it’s malpractice.

Job descriptions make it worse. They mash together operational tasks (patching, SIEM tuning, user tickets) with executive-level responsibilities (board reporting, enterprise risk, regulatory alignment). That’s how you end up with an “Information Security Officer” posting that reads like three jobs rolled into one — and satisfies none of them.

Leaders who have built companies, led exits, and advised boards across industries bring something far deeper than any checklist: the ability to navigate regulators, manage enterprise risk, and scale technology in high-stakes environments. Yet HR looks for “five years in a credit union” and misses the fact that these leaders have already solved far more complex problems under tighter scrutiny. That’s the disconnect.

The better path is direct. Boards and executives don’t care whether Kubernetes shows up in column three of your résumé. They care about outcomes: resilience, risk reduction, and transformation. The best hires don’t come from keyword scans in an ATS — they come from trust. A referral, a network, or a CEO saying, “This leader already solved the problem you’re facing.”

More and more, the trusted advisor or fractional executive route bypasses HR altogether. You’re brought in to advise, you prove value, and often that role evolves into something permanent.

 

Titanium’s Secret War: Could Vale Be Eyeing Labrador’s Radar Project?
Story By Skeeter Wesinger
September 16, 2025

In the far reaches of Labrador, where winter stretches nine months and the land is as harsh as it is resource-rich, a junior exploration company says it may have stumbled onto one of North America’s most significant new sources of titanium. Saga Metals’ Radar Project has been promoted as road-accessible, near a port, an airstrip, and hydro power. But critics argue that in reality, it’s hell and gone from anywhere.

And yet, despite the challenges, whispers are circulating: could mining giant Vale already be circling?

Titanium is no longer just for aerospace engineers and medical implants. It’s the quiet backbone of 21st-century warfare: drones, hypersonic missiles, stealth fighters. The U.S. imports over 90% of its titanium feedstock, largely from Russia, China, and Kazakhstan. That dependency has become a glaring weakness at a time when defense spending is surging past $1 trillion. For Washington policymakers, securing a domestic or friendly-jurisdiction supply of titanium isn’t just an economic issue. It’s a national security imperative.

From communications satellites to aircraft carriers, titanium’s unmatched strength, lightness, and heat resistance make it indispensable — even the F-35 relies on it to secure America’s military advantage.

Vale already has a commanding presence in Newfoundland and Labrador through its Voisey’s Bay nickel-copper-cobalt mine and Long Harbour hydromet plant. Those assets anchor Vale to the province, with billions already invested and deep relationships built with government and Indigenous stakeholders. So if Labrador is being positioned as a titanium-vanadium corridor — with Saga’s Radar Project next to Rio Tinto’s long-running Lac Tio mine — wouldn’t Vale at least be curious?

Officially, Vale has said nothing. But that silence may say less about disinterest and more about timing. Mining majors rarely move at the exploration stage. They let juniors burn cash and prove up a resource. Only once grades, tonnage, and metallurgy are de-risked do they swoop in with capital and scale. The Radar site is remote, snowbound most of the year, and would require major road, port, and power upgrades to reach production. Vale is focused on nickel and copper, metals tied to electrification and EVs, but vanadium — with its growing role in grid-scale batteries — could give them a reason to pay attention.

What if the U.S. or Canada starts subsidizing titanium development the way they did rare earths or semiconductors? That would change the math overnight. Vale, with its capital, processing expertise, and political weight, could then step in as a consolidator. It wouldn’t be the first time a major stayed quiet until the subsidies hit.

Saga’s drill results have been splashy — magnetometer readings that “maxed out the machine,” multi-metal mineralization, comparisons to China’s massive Panzhihua deposit. For now, it’s still a speculative story. But the gravity of titanium demand is real. And if Labrador is destined to become a titanium hub, Vale is already in the neighborhood.

It’s easy to dismiss Saga’s Radar Project as another hyped junior play, complete with glossy investor decks and paid promotions. But it’s also easy to forget that the world’s mining giants often wait in the wings, letting the market underestimate projects until the timing is right. In a world where titanium has become the metal behind drones, jets, and modern defense, ignoring Labrador’s potential may not be an option forever.

The Second Cold War now moves to the Caribbean

By Skeeter Wesinger

September 10, 2025

The Caribbean has once again become a stage for the rivalry of great powers. In Cuba, Chinese technicians and engineers have been working around the clock to expand a network of intelligence-gathering sites. Satellite photographs and on-the-ground accounts confirm the presence of large radar dishes and a new antenna array near Santiago de Cuba, along with several facilities west of Havana. These installations appear designed to intercept communications and track movements across the southeastern United States. Their placement recalls the old Soviet listening post at Lourdes, which for years operated as Moscow’s ear on Washington.

What makes the present moment different is that China has chosen to follow its land-based presence with a naval one. Reports now indicate that a Chinese aircraft carrier, accompanied by support vessels, is moving into Caribbean waters. The decision to send such a formation across the Pacific and into the approaches of the Americas is a first. The United States Navy remains stronger in every respect, but the symbolism is clear. A foreign fleet, commanded from Beijing, is operating in what for two centuries Americans have regarded as their own sphere.

The tensions with Venezuela lend further weight to this development. Caracas, under sanction and isolation from Washington, has cultivated close ties with both China and Russia. A Chinese carrier group near Venezuelan ports would strengthen the government there and complicate American policy. It would also demonstrate that the Monroe Doctrine, which has served as the guiding principle of U.S. policy in the hemisphere since 1823, is under direct test.

Technologically, the new Cuban installations may not represent the most advanced form of signals intelligence. Analysts note that a significant amount can be intercepted today through satellite and cyber networks. Yet, the presence of these bases, together with a Chinese fleet, alters the strategic picture. They indicate that Beijing seeks not only to contest American influence in Asia but also to place pressure on the United States close to home.

This pattern, of probing and counter-probing, of establishing footholds near the other’s shores, is one that recalls earlier periods of rivalry. The first Cold War played out along these lines, and it is in that sense that many observers now speak of a second. The Caribbean, once the flashpoint of the Cuban Missile Crisis, is again the scene of great-power maneuvering. For now, the balance of power remains unchanged. But the geography of the contest has shifted. America finds that its own neighborhood is no longer beyond the reach of its chief rival, and that the struggle of the new century may be fought not only in distant waters, but in the seas and islands that lie just off its southern coast. The words of Ronald Reagan resonate now more than ever: ‘Trust, but verify.’