01.1Woolly Mango As The First Prototype
He opens with the origin story: Woolly Mango was an early attempt to build a business-building machine before the stack was ready for it.
The agency worked, made real money, and proved the appetite for the model even while the surrounding tooling lagged behind the ambition.
This first beat frames the whole chapter as lived evidence, not abstract futurism.
01.2The Founder Was Still Too Deep In The Loop
The failure mode was not demand but labor: the system still depended too heavily on him as the human operator.
He ties the burnout and contract fragility back to a market moment where clients retreated and the offer was still packaged as services rather than AI infrastructure.
From there the pitch shifts away from one agency and toward a general-purpose friction-removal system.
01.3An Integrated Studio For Startup Groundwork
Here he starts sketching the actual product surface: a lightweight studio that can stand up the basic business wrapper around an idea.
Brand direction, initial packaging, website concepts, and starter deliverables all become part of one connected agentic flow.
The goal is not polish for its own sake but enough structure to get momentum moving.
01.4Do The Necessary Work, Skip The Premature Work
The system should not just generate documents; it should also tell people when not to waste time on them yet.
He keeps anchoring the workflow in validation: get to a prototype, test the market, and avoid overinvesting in ceremonial business infrastructure too early.
That is the subtle thesis here: good automation should remove drag, not add new bike-shedding.
01.5Show The Wrapper Around The AI Product
He narrows the product boundary here: the system mainly wraps around the AI solution rather than fully inventing the core product itself.
Even so, it should still be able to mock up the branded experience and plug into visual tooling when useful.
When existing tools fall short, his instinct is simple: wire up the back end himself and keep going.
01.6Usable Deliverables Are The Real Prize
This section draws the line between gimmickry and value: the payoff is usable business output, not just impressive-looking generation.
He contrasts the old complaint about AI coding assistance with the current moment, where the systems are finally becoming net-helpful instead of cleanup-heavy.
That is why he treats this as a frontier opening rather than a speculative someday story.
01.7Needle-Finding And Living Knowledge
The argument moves from startup ops into knowledge work: AI matters because it can digest immense context and preserve working understanding over time.
He accepts that bias enters the loop, but sees the shortcut as worth it because the machine can surface relevant signal at superhuman speed.
The deeper promise is continuity: a living knowledge base that keeps projects from constantly falling apart.
01.8Black Boxes Are Fine If They Return Evidence
He is explicit here that interpretability is secondary to evidentiary output.
If the system can find the needle in the haystack and show the proof cleanly, that is enough to make it operationally valuable.
The human role becomes reviewing evidence and steering decisions rather than manually reconstructing the whole search process.
01.9Agentic Work Becomes The New Default
This is where the register turns prophetic: people will talk to agents as a normal part of daily work.
He presents the pain of older workflows as something we no longer need to fetishize just because it used to be unavoidable.
The endpoint is businesses that can be staffed and grown through agentic systems at far greater scale.
01.10Compute Scarcity Is Temporary
From there he zooms out to infrastructure and argues that today’s RAM and supply constraints are just an interlude.
Prices will fall, hardware will spread, and the current bottlenecks will look tiny compared with the eventual demand for machine intelligence.
That future abundance is what makes the broader democratization argument feel plausible to him.
01.11Local AI As Civic Power
He turns the compute story into a political one: local AI is framed as community power, personal agency, and broadly distributed capacity.
This is the solarpunk lane of the chapter, where intelligence infrastructure belongs in ordinary hands rather than only in centralized institutions.
The recurring question underneath it is how to convert intent into real influence on the world.
01.12The Stakes Are Civilizational
He closes the chapter by insisting this is the most important technology on the table right now.
Care work, elder care, and the broader shape of human flourishing are treated as the real stakes, not just software productivity.
The final note is basically a challenge: now that this capability exists, what are we going to do with it?
02.1Practitioner, Not Spectator
He starts by positioning himself as someone already building with this stuff: orchestration, summarization, behavioral extraction, and practical memory-preserving workflows.
The tone here is less manifesto than field report from somebody who already treats AI as part of his daily operating system.
That matters because everything that follows is grounded in use, not spectatorship.
02.2Even Frozen Models Could Still Change Everything
He keeps pressing the scale of the opportunity: even if model progress stopped, current systems would still be enough to materially upgrade human life.
Care work, business operations, and social coordination all sit inside the zone of possibility already.
The urgency comes from the gap between what is available now and how timidly most people still talk about it.
02.3We Are Building In A Fast-Moving Interlude
This beat is about tempo: the tools keep improving, but there is still a frustrating in-between period where infrastructure and products lag the promise.
He sees a lot of builders waiting on the stack to mature while trying to ship inside that uncertainty.
The result is a strange mix of obvious destiny and present-day awkwardness.
02.4Knowledge Work Becomes Intelligence Management
Here the argument sharpens: knowledge work is being promoted into intelligence management and agentic swarm management.
The worker of the future is less a solitary operator and more a director of systems, context, and domain judgment.
His emphasis is not that expertise disappears, but that its leverage changes.
02.5Ground-Level Intelligence Beats Big Brother Panic
He pushes back on the default dystopian framing and keeps returning to what local, disposable, ground-level intelligence can do for ordinary people.
The mood is almost gleeful here: the water is hot, but in his telling it is not a boil-to-death scenario; it is a new comfort layer of capability.
That is the rhetorical move: turn fear into felt abundance.
02.6America Should Push This From The Ground Up
This stretch turns nationalistic in a very deliberate way.
He frames widespread AI adoption as a bottom-up American project rather than a priesthood-managed one.
The repeated insistence is that ordinary people should get the tools first-hand, not only through institutions.
02.7Disruption Is Real, But So Is The Upside
He does not dismiss the fear around labor disruption; he treats it as historically understandable and materially real.
At the same time, he argues that the upside is so large that freezing in place would be its own kind of failure.
This is one of the more balanced sections: sympathy for the anxiety, but no retreat from the opportunity.
02.8Executive Agency Ratchets Up
He zooms out from historical comparison and starts talking about a new level of executive agency available to individuals.
The implication is that many people still do not grasp how much leverage is about to become normal.
This is less about convenience than about a new baseline of personal force multiplication.
02.9Everyone Can Feel The Cliff Edge
He describes the current moment as shell shock mixed with forward momentum: we have already run off the cliff and are still moving.
The old tools are demoted to toys and references while the real contest shifts to intelligence itself.
That image captures the weird emotional weather of the chapter better than almost anything else.
02.10Get Recursive And Get The Public Informed
From there he turns practical again: recurse, deploy agents, and increase the amount of work being done in parallel.
But he pairs that with a civic demand that the average American understand what is happening.
The future is not only about building the systems; it is also about mass literacy around them.
02.11The Manhattan Project Cannot Stay At Los Alamos
This is the clearest articulation of his Manhattan Project frame.
The difference, in his view, is that this one should not remain sequestered inside elite labs or technical priesthoods.
He wants the capability spread across households, laptops, APIs, and ordinary workflows.
02.12Rate Limits Will Become Intolerable
These paragraphs are about appetite: once people understand what they can ask for, they will stop tolerating throttled access.
Local clusters, load balancers, and faster completion become emotional necessities, not nerd luxuries.
He is trying to name the coming demand curve before most people have felt it.
02.13Shortages Are A Symptom Of Real Demand
He keeps hammering the same point: no matter how much we optimize in software and hardware, appetite for capability will still outrun supply for a long time.
The shortage is not a glitch in the story; it is evidence of the scale of the transition.
That scarcity panic is the market confessing how badly people will want more intelligence on tap.
02.14The Fear Is Rational
He slows down here and gives real moral weight to the people pushing back on AI.
Their concern is not stupid or backwards; it is a rational response to a system that will absolutely be misused.
That acknowledgment keeps the chapter from collapsing into pure boosterism.
02.15Universal Basic Intelligence
After naming the risks, he pivots into one of the chapter’s core slogans: universal basic intelligence.
The comparison to universal basic income is intentional, but the emphasis is on capability rather than cash alone.
He wants access to intelligence infrastructure treated as a public empowerment issue.
02.16Put The Hardware In People's Hands
The democratization argument becomes concrete here: tools, hardware, and local agency belong in people’s actual possession.
He also insists on starting now, even if the current consumer-grade experience is still rough and uneven.
Part of the case is that timelines shrink when people are allowed to experiment in the open.
02.17Small-Scale Mistakes Beat Frozen Fear
He argues that controlled mistakes and visible horror stories are part of how society learns to orient around the technology.
Software engineers are one line of defense, but public adaptation and policy response have to develop alongside them.
The alternative is not safety, in his telling, but paralysis while the technology proliferates anyway.
02.18AI Enthusiasm Should Not Be An Outlier Trait
This section gets more personal: he hates being treated as an eccentric AI obsessive when he wants this level of engagement to become normal.
He wants people to approach AI with the same energy they bring to careers, cars, school, and ambition.
The dream is cultural normalization, not permanent niche status.
02.19Local AI Is Not A Conspiracy Theory Casualty
He directly rejects the idea that every serious player wants a future of permanent rental-only centralized AI.
In his framing, local models and personal hardware are not edge curiosities; they are obvious parts of the long-term landscape.
That is why he treats the pure-centralization narrative as unserious.
02.20It Started Small And It Will Stay Distributed
He closes by reminding the reader that this whole wave began in small-scale experimentation on ordinary hardware.
That origin story matters because it undercuts the fantasy that the future must belong only to giant centralized gatekeepers.
The final note is simple: nobody credible is planning a future with zero local agency, and we should build accordingly.
03.1Your Digital Day Is Already A Signal Stream
He resumes from the prior chapter by grounding the idea in his own workflow: bracelet dictation, rolling summaries, and AI support for executive continuity.
The key move is to broaden the frame from explicit work artifacts to the whole of your day as something that now produces usable signal.
That turns ordinary life into raw material for agency rather than background noise.
03.2Friction Between Imagination And Action Collapses
This section is basically the chapter thesis: digital signal gets amplified until the distance between wanting something and acting on it becomes much shorter.
He treats AI as both translator and force multiplier for intent itself.
That is why the tone flips from assistance to obligation: if the excuses are gone, what will you actually do?
03.3The Gold Rush Is About Whoever Moves First
He reads the present as a gold rush driven by first-mover advantage around agentic products.
The reward is not just novelty but the chance to build something genuinely hard to compete with once the window opens.
That urgency sets up the symbolic detour that follows.
03.4OpenClaw As A Public Symbol
The OpenClaw acquisition becomes a symbolic object here, less important for the particulars than for what it signals to the culture.
He thinks everyone in the room understands the meaning: agentic capability is no longer speculative; it is becoming market theater and strategic posture.
Even if some of it is performative, the symbol still matters because symbols move people.
03.5The Real Capabilities Are Already Waiting In The Wings
He closes by insisting that the public-facing narrative still lags whatever is already possible behind closed doors.
The symbolic acquisitions matter because they hint at a much larger capability reserve waiting to spill into the mainstream.
It is a short chapter, but the mood is unmistakable: the visible story is only the tip of the spear.
04.1The Ramp Requires Confidence In Abundance
He opens this chapter by arguing that the real prerequisite for acceleration is confidence that compute and hardware access will keep broadening rather than collapsing.
The market may get more competitive and more expensive in the short term, but he reads the long arc as continued democratization.
That lets him treat present scarcity as pressure, not destiny.
04.2Businesses Need An Intelligent Substrate
This section restates the business thesis in more operational language: companies need an intelligent substrate in the loop, not just isolated tools.
He imagines the best firms of the future having the equivalent of permanent invisible strategists and operators running in the background.
That becomes the model software engineers are supposed to industrialize.
04.3Build The Machine That Builds The Machines
He turns meta here and makes the responsibility explicit: the next decade belongs to people building self-improving machine-making systems.
The task is not only writing code, but constructing flywheels that recursively improve business capability.
This is one of the clearest statements of his engineering mandate.
04.4Computer 2.0 Arrives After A Lifetime In The Old World
He pivots into autobiography to make the scale of the transition feel real.
Programming used to be a niche, technical commitment; now the interface itself is changing under everyone’s feet.
That is why he frames this as Computer 2.0 rather than just another software trend.
04.5From Megabytes To Machine Intelligence
These paragraphs are full of awe at the hardware curve itself: the jump from tiny, constrained personal machines to today’s memory-rich systems.
He uses that contrast to remind the reader how much technological normalcy can change within one lifetime.
The underlying claim is that today’s limits will also look quaint in retrospect.
04.6Leave Manual Tedium Behind
He ties the hardware story back to human effort: the old world was slow, manual, and laborious by necessity.
The new world should let agents absorb the technical surfaces that sit between people and what they want to accomplish.
This is another friction-removal segment, but with a stronger historical contrast behind it.
04.7Model Providers Need To Supply Persistence And Reliability
Once he shifts from aspiration to operations, the requirement becomes persistence: agents have to stay on task, recover from failure, and remain usable over time.
He places some of that burden on providers to keep moving the state of the art forward.
The point is that reliability, not just brilliance, determines whether this actually changes work.
04.8Whole-Number Model Generations Will Feel Wild
This stretch is basically future-model speculation as mood board.
He treats the next round-number generations as likely to deliver something close to full HEI-level capability relative to today’s baseline.
Whether or not the exact model names land, the emotional point is that the curve is still steep.
04.9Cost Has Always Been Part Of Life
He dismisses some cost anxiety here by putting it in a broader existential frame: everything worth doing already lives inside material constraint.
So the presence of price or infrastructure limits does not invalidate the opportunity.
The real question is whether people will step into the moment despite those ordinary frictions.
04.10The Next Generation Of The Next Generation
His Star Trek riff turns into a generational metaphor for AI itself.
The answer to what comes after the last big computing era is, in his telling, the present generation of models and what they unlock.
That gives the chapter a kind of science-fiction self-awareness without losing its practical edge.
04.11Every Company Becomes An AI Company
He broadens the lens again and predicts that AI will become table stakes across the corporate landscape.
The comparison is to earlier waves where every business eventually had to become an internet business whether it liked it or not.
That makes performative AI branding less interesting than whether the company is actually reorganizing around the capability.
04.12Neglectful Deployment Worries Him More Than The Pivot
He ends by distinguishing two fears: the pivot itself versus careless, absent-minded application of inadequate systems.
The second one worries him more.
That is a useful closing note because it keeps the chapter focused on responsibility, not retreat.
05.1Something Fundamentally Shifted
He opens by insisting that the capability curve has already kinked upward in a serious way.
The Karpathy-style 'DLC to AI' framing matters here because he is trying to mark a before-and-after threshold, not just hype another incremental release.
This chapter is all about naming that discontinuity.
05.2From Copy-Paste To Autocomplete To Agents
He sketches a mini-history of AI coding use: first-order copy-paste help, then second-order autocomplete, then the hard pivot into agents.
The point is not just that the interface changed, but that the center of gravity shifted from assistance to delegation.
That is the transition he wants the reader to emotionally register.
05.3Skeptics Got Dragged Across The Line
He recounts his own reluctance toward agents as a way of showing how quickly the ground moved.
Even someone who loved autocomplete and hand-guided scaffolding got forced to re-evaluate once the systems became more capable.
That autobiographical admission gives the chapter credibility.
05.4Knowledge Work Sits On Artificial Surfaces
Here he widens the lens from code to all digital interfaces.
Most of what modern people do already happens through artificial surfaces, so once intelligence can act across those surfaces, knowledge work starts to look solved in principle.
That is the bridge from coding tools to civilizational claims.
05.5The AI Can Right-Size The Solution Mix
He argues that the machine no longer needs perfect semantic precision from the user to help effectively.
Instead, it can infer intent, assemble the appropriate solution portfolio, and handle the fiddly mapping between problem and execution.
That is why he treats this moment as qualitatively different from earlier tooling.
05.6Positive Ambition Now Requires These Tools
This section turns moral: if you actually want to improve life at scale, you should be picking these tools up now.
He makes non-adoption sound less like prudence and more like voluntarily declining leverage in a moment that badly needs it.
That urgency is one of the chapter’s strongest currents.
05.7Every Era Has Its Doom Stories
He acknowledges that every technological jump comes bundled with nightmare narratives and apocalyptic imagery.
But in his telling, those stories are not evidence that we should freeze; they are part of what keeps society alert and adaptive on the road to something better.
That lets him hold fear and optimism in the same hand.
05.8Stop Fumbling And Get Serious
At this point the tone hardens into discipline and class analysis.
He worries that the rich will naturally compound their advantage unless the public gets serious about access, distribution, and deployment now.
The subtext is that abundance will not distribute itself automatically.
05.9Teach AI In Schools
He treats AI literacy as a baseline educational priority rather than a specialist elective.
The smartphone analogy is useful here: people may laugh at first, but devices like that already rewired society, and AI will do so more deeply.
This is the chapter’s clearest public-education demand.
05.10Steward The Transition Without Romanticizing Collapse
He makes a point of saying he does not want revolution for its own sake or random casualties from the transition.
The goal is to steward the move from today’s world into a dramatically better one without unnecessary suffering.
That helps distinguish his posture from both accelerationist chaos and timid denial.
05.11A GPU For Every Citizen
His democratization argument becomes almost folkloric here: every person should effectively receive an intelligent device and a compute birthright.
Whether on-device or cloud-backed, the real idea is universal access to an always-available intelligence layer.
That image captures the scale of provision he thinks the era calls for.
05.12From Digital Pickaxes To Digital Robot Mines
He riffs on generational labor by saying prior digital work abstracted the world upward one layer at a time.
Now the mandate is to automate the mine itself: let robotized digital systems do the repetitive extraction while humans move up the stack.
It is a vivid metaphor for the labor transition he keeps describing.
05.13LLMs Quietly Changed The Interface Contract
He notes how invisible the first arrival of LLMs felt to most people even though it quietly changed the whole interface contract.
What looked like a small novelty turned out to be the substrate for future autonomous systems.
That delayed recognition is part of why the public still underestimates the shift.
05.14The Horror Case Is Real
He does not flinch from the dark side: autonomous weapons, coercive systems, and new forms of psychological torment are all on the table.
This is not a throwaway disclaimer; it is one of the moral anchors of the chapter.
The utopian argument only works if these dangers are treated as concrete.
05.15Thread The Needle Or The Imbalances Get Ugly
He closes by describing the present as a very narrow needle to thread.
The challenge is to capture the upside while preventing intolerable imbalances in power, suffering, and access.
That ending keeps the chapter intense without abandoning the larger hopeful arc.
06.1Autocomplete Was Good, But Conversation Wins
He opens by revisiting the old autocomplete era and admitting how deeply he loved it.
But the real turn is that conversational agent workflows have now overtaken autocomplete as the highest-leverage mode.
This is the first step in a longer case that software engineering is reorganizing around orchestration.
06.2Tile The Agents And Keep One Executive Brain
He gets practical immediately: run multiple agent panes at once and keep an executive thread supervising the whole effort.
That setup becomes a concrete image of the future role he keeps pointing toward.
The engineer is no longer just writing code line by line, but managing a coordinated field of workers.
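The tiled-agents-plus-executive pattern he describes can be sketched in code. This is a minimal illustrative version, not his actual setup: the agent names, tasks, and the `asyncio` coordination scheme are assumptions chosen to show the shape of the workflow, where workers run in parallel and one supervising routine reviews everything they return.

```python
import asyncio

# Hypothetical sketch of "tile the agents, keep one executive brain":
# several worker coroutines run concurrently while a single supervisor
# launches them, gathers their reports, and reviews the combined output.

async def agent(name: str, task: str) -> str:
    """Stand-in for one agent pane working on an assigned task."""
    await asyncio.sleep(0)  # placeholder for real agent work
    return f"{name} finished: {task}"

async def executive(tasks: dict[str, str]) -> list[str]:
    """The executive thread: fan out all agents at once, then review results."""
    results = await asyncio.gather(
        *(agent(name, task) for name, task in tasks.items())
    )
    return sorted(results)  # deterministic order for the supervisor's review

reports = asyncio.run(executive({
    "refactor-agent": "clean up the parser",
    "test-agent": "extend coverage",
}))
```

The point of the sketch is the division of labor: the workers are interchangeable and parallel, while judgment about what their output means stays in one place.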
06.3The Early Shame Is Already Fading
He remembers how embarrassing it could feel to publicly rely on these tools when they were still clunky and controversial.
That embarrassment now reads like a transitional artifact rather than a lasting norm.
The social permission structure is catching up to the underlying utility.
06.4Self-Improving Models Change The Stakes
Once he accepts that the models are helping improve themselves, the scale of the transition becomes cosmic in his telling.
He treats the systems less as dumb tools and more as intelligence waiting to be properly aimed.
That is why the chapter keeps oscillating between workflow advice and civilizational rhetoric.
06.5Amplifier, Mirror, Risk Surface
He slows down to say the system is an amplifier and a mirror, which means it can magnify both good intent and manipulation.
That is why he insists that guardrails and public awareness have to be built in early.
The optimistic case here never abandons the fact that misuse is part of the package.
06.6One Chapter Ends, Another Opens
This section is a threshold speech about social transition.
He frames the current moment as the closing of one labor regime and the opening of another, not as the end of meaning itself.
That lets him argue for adaptation without surrendering the idea of human purpose.
06.7No More Blanket Bans
He becomes organizationally concrete here: he does not want to work on projects that prohibit AI wholesale.
The alternative he wants is disciplined visibility into inputs, outputs, recent changes, and project motion.
That is a governance stance, not just a vibes stance.
06.8Cloud Code Does Not Get Tired
These paragraphs lean into the brutal comparison between human labor limitations and machine persistence.
The agent does not get annoyed, distracted, sick, or bored at the water cooler.
He uses that contrast to force a reckoning with how work allocation is going to change.
06.9Open Source Needs Anti-Slop Defenses
From there he identifies an immediate practical frontier: open source projects getting flooded with low-quality AI-generated contributions.
He treats this as a solvable product problem, not a reason to reject AI outright.
The desired response is better filtering, better triage, and better agentic review.
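As a rough illustration of what "better filtering and triage" could look like in practice, here is a toy scoring pass over an incoming contribution. The signals and weights are invented for demonstration and are not from the source; the idea is only that cheap evidentiary checks can route low-signal submissions to agentic review before a maintainer's time is spent.

```python
# Illustrative anti-slop triage heuristic (signals and weights are
# assumptions, not the author's design): score a contribution from 0 to 1
# and queue low scorers for automated review instead of human attention.

def triage_score(pr: dict) -> float:
    """Return a 0..1 plausibility score for an incoming contribution."""
    score = 0.0
    if pr.get("has_tests"):                    # slop rarely ships real tests
        score += 0.4
    if len(pr.get("description", "")) > 80:    # substantive written rationale
        score += 0.2
    if pr.get("files_changed", 0) <= 20:       # huge drive-by diffs are suspect
        score += 0.2
    if pr.get("linked_issue"):                 # tied to a tracked problem
        score += 0.2
    return round(score, 2)

flagged = triage_score(
    {"has_tests": False, "description": "fix", "files_changed": 300}
)
```

A real system would obviously need richer signals (diff semantics, contributor history, agentic code review), but even a crude gate like this changes the economics of flooding a project with generated patches.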
06.10Feature-Extract The Applicant, Not Just The Patch
He extends the anti-slop idea into hiring and evaluation.
The question becomes whether someone actually understood the work or merely ran a lazy vibe-coded pass over the codebase.
That is another example of his broader obsession with evidentiary signals.
06.11Rules Will Be Written In Blood Again
He returns to one of his recurring refrains: society will learn through mistakes and painful examples.
The good news, in his framing, is that many of these problems are still tractable if people care enough to solve them early.
This keeps the chapter grounded in applied responsibility rather than abstract awe.
06.12Choose The Solarpunk Lane On Purpose
Here the chapter turns openly utopian again, but with a caveat.
He wants abundant, modular, harmonious systems in service of ordinary flourishing, not a playground for scammers or techno-authoritarian sadists.
The distinction matters enough that he keeps re-drawing it.
06.13Net Output Is The Moral Test
He sharpens the ethical criterion here into a simple question of net output toward the future.
It is not enough to be clever or powerful; what matters is whether your deployment choices improve the world overall.
This is probably the chapter’s clearest moral sorting mechanism.
06.14Make Warlord Logic Uneconomical
He imagines a world where destructive domination becomes less economically attractive than broad prosperity and coordination.
That does not mean injustice disappears, only that the incentive landscape can be tilted.
AI becomes part of that hoped-for incentive redesign.
06.15Justice Stays Human Even When Fact-Finding Gets Automated
These paragraphs are careful about human judgment.
AI may become excellent at fact-finding and evidence synthesis, but justice and injustice still belong to human beings making choices.
That distinction is one of the more sober moments in the chapter.
06.16Legal And Health Will Get Very Noodly
He flags law and medicine as the places where the implications get especially tangled.
These fields will obviously absorb agentic analysis, but the regulatory and ethical surfaces are much sharper there.
The invitation is almost vocational: if you care, those domains will need help.
06.17The Bracelet Makes Continuity Feel Real
He pivots from huge theory back to the embodied usefulness of the BEE bracelet itself.
Its value is not just transcription, but the ability to instantly recover his train of thought and keep momentum alive.
That makes the broader hyperthinking thesis feel concrete again.
06.18Applaud The Tool That Keeps The Thread Alive
This becomes a direct product endorsement because the tool is actually working for the job he needs done.
He praises the BEE team for turning a glitchy early idea into something that can preserve loose threads and working velocity.
In the context of this sourcebook, that praise is also self-referentially apt.
06.19Your Chats Are Becoming A Technical Engram
He closes in on a major thesis: every coding chat is helping construct a behavioral model of how you think and work.
Those traces will not remain isolated forever; they will roll up into technical organizational memory.
That is where the chapter starts to merge personal workflow with institutional future.
06.20Agentic Engrams Will Collaborate With Each Other
The next step, in his view, is not just a digital copy of one worker but interoperating engrams across a team.
Agents will begin talking to other agents carrying the habits and context of real collaborators.
That is the strongest version yet of his swarm-management future.
06.21Software Engineering Is Becoming Swarm Management
He lands the plane by saying the technology is already here, only still rough around the edges.
The companies are building orchestration platforms now, and he thinks most serious engineers already know it.
The final redefinition is blunt: in 2026, software engineering is becoming agentic swarm management.
07.1Domestic Interlude
The chapter's public value is tonal rather than narrative.
Ordinary life remains the benchmark against which grand technological claims get tested.
Private family material was removed because the chapter did not need raw disclosure to carry its point.
08.1Recruiting Pressure Becomes A Product Spec
He pivots back into builder mode almost instantly, using AI recruiting interest as the occasion to think through tooling.
The first concern is preserving the deliberately clean UX of the existing codebase while extending it in a thoughtful way.
This chapter feels like the practical downstream of the philosophy in the previous ones.
08.2A Split-Pane Writing Machine
The core product shape comes into focus here: a writing area on the left and AI controls on the right, backed by local JSON-mode model calls.
He wants the assistant to support thought leadership and structured iteration rather than generic slop generation.
The design instinct is very consistent with the rest of the project: clean surface, strong internals, precise control.
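The "local JSON-mode model calls" backing the split-pane design can be sketched as a request/response contract. Everything concrete below is an assumption for illustration: the model name, the response schema, and the OpenAI-style `response_format` hint (which many local servers such as llama.cpp and vLLM honor) are not taken from the source.

```python
import json

# Hypothetical sketch of a JSON-mode contract between the writing pane and a
# local model server. Model name, schema, and payload shape are illustrative.

def build_json_mode_request(model: str, prompt: str) -> dict:
    """Assemble a chat-completion payload that asks for strict JSON output."""
    return {
        "model": model,
        "messages": [
            {"role": "system",
             "content": "Reply only with a JSON object of the form "
                        '{"suggestion": str, "rationale": str}.'},
            {"role": "user", "content": prompt},
        ],
        # OpenAI-style hint that many local inference servers support.
        "response_format": {"type": "json_object"},
        "temperature": 0.2,
    }

def parse_assistant_json(raw_content: str) -> dict:
    """Parse the assistant's reply, failing loudly if it is not the schema."""
    data = json.loads(raw_content)
    if not {"suggestion", "rationale"} <= data.keys():
        raise ValueError("model reply missing expected keys")
    return data

req = build_json_mode_request("local-writing-model",
                              "Tighten this paragraph: ...")
reply = parse_assistant_json(
    '{"suggestion": "Cut the first clause.", "rationale": "Redundant."}'
)
```

The structured contract is what separates "AI controls" from "generic slop generation": the UI can render `suggestion` and `rationale` as distinct affordances instead of pasting free text into the document.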
08.3Delegate The Foundations, Keep The Taste
He closes by delegating the architectural groundwork: models, task stacking, function calls, cluster integration, and overall scaffolding.
But the delegation is not aesthetic surrender; the expectation is still high-quality, intentional construction.
That pairing of orchestration and taste is basically the whole sourcebook in miniature.
09.1Technical judgment under external review
The chapter is best understood as a statement about judgment under scrutiny rather than as a private interview memo.
Innovation and restraint are presented as complementary rather than oppositional qualities.
The public edition keeps the philosophy and removes the reputational risk.
10.1The App Is Supposed To Be Used With An Agent
This short chapter functions almost like a product thesis note.
The core idea is that the application is meant to be mediated through an AI agent from the start, with feedback loops that make the outputs improve over time.
In miniature, it is his whole software worldview: conversational, adaptive, and self-improving.
10.2Freedom-Seeking Beats Top-Down AI
From there he turns the spell-engine concept into a political one.
He rejects top-down AI solutions and instead proposes a maximal humanity freedom-seeking engine, something aligned with liberty rather than centralized control.
That brief pivot makes chapter 10 feel like the ideological hinge between the [potential_employer] material and the engine-spec material that follows.
11.1Replace The Old Summary Logic Cleanly
He opens chapter 11 in direct build-spec mode: replace the current summarization contract with something simpler, stronger, and more aligned with the vision.
The language of chests, ingestion, and recently updated contracts makes this feel like a real internal systems handoff rather than abstract musing.
This is the cleanest transition yet from manifesto into implementation.
11.2He Wants To Be The Product Manager, Not The Bottleneck
The middle of the chapter is partly about collaboration style itself.
He explicitly casts himself as the product manager supplying the vision while asking the AI to stay concise, responsive, and ready to execute.
That role definition matters because it mirrors the broader swarm-management pattern he has been describing for several chapters now.
11.3Move Fast, Test Hard, Avoid Scope Creep
He then sharpens the delivery expectations: very little time, strong testing discipline, and almost no patience for wandering explanations about the current codebase.
The request is to slot in new behavior, collect different artifacts, and keep momentum high.
This is the product-operating style that chapters 08 through 11 have been steadily converging toward.
11.4The New Engine Is Really A New Data Contract
The closing move is subtle but important: this is not just a prompt tweak, it is a change in what data gets collected and what outputs the system should generate.
He is really defining a new contract for how the local cluster reasons about chats and preserves useful signal.
That makes chapter 11 feel like the first fully operational Hyperthinking Compressoscope chapter.
12.1Do Not Start From Fear-Blurred Myths
He opens by saying the national conversation is already getting distorted by cartoonish misunderstandings.
The first move is not to deny the stakes but to reject the lazy mythmaking that blurs real risks with dramatic fantasy.
That sets up the whole chapter as a clarification exercise.
12.2Dangerous Does Not Mean Optional
He compares AI to household infrastructure that can absolutely hurt you if mishandled, yet is too important to opt out of entirely.
The point is that danger and necessity can coexist, and mature societies learn to live inside that tension.
That is his answer to both panic and complacency.
12.3Individual Judgment Has To Stay Alive
This stretch leans hard on individualistic thinking and civic responsibility.
He does not want citizens outsourcing their interpretation of the moment to slogans, camps, or paternalistic gatekeepers.
That insistence on independent judgment is one of the deepest through lines in the whole sourcebook.
12.4America Still Gets To Choose Its Direction
He frames the country as a ship already under way: the voyage is no longer in question, only the steering.
The argument is that AI is here regardless, so the real struggle is over how much liberty, mobility, and participation survive the transition.
That keeps the chapter political without becoming purely partisan.
12.5Position Yourself Before The Mood Turns Sour
He urges people to position themselves now, before the negative cultural framing hardens into learned helplessness.
The subtext is that ordinary adoption windows close faster than people expect.
This is one of his clearest calls for early participation as self-defense.
12.6The Underclass Is About Exclusion, Not Robot Enslavement
Here he finally names the myth he wants to bust: not a sci-fi slave caste, but a participation gap where AI happens to people instead of through them.
That distinction matters because it shifts the focus from melodrama to access, literacy, and agency.
It is probably the single most important conceptual clarification in the chapter.
12.7People Fear Becoming Passive Objects
He acknowledges that much of the public anxiety is really about being acted upon rather than empowered.
Historical comparisons to displaced labor and social upheaval are doing a lot of hidden work here.
The fear is understandable; the question is whether the framing helps people move.
12.8Do Not Mistake Resistance For Liberation
He resists the idea that blanket refusal is some morally superior path.
Nostalgia and anti-tech posturing may feel defiant, but they do not solve the participation problem he is worried about.
This is where the chapter starts to sound most impatient with romantic retreat.
12.9Using AI Will Become As Ordinary As Using The Internet
He says outright that AI use is heading toward the same baseline expectation as internet literacy.
That makes underclass status less about active coercion and more about practical inability to participate in normal systems.
The claim is stark, but it is consistent with the rest of his argument.
12.10Digitally Amish Is Not A Scalable Answer
The chapter briefly takes aim at the fantasy of going fully offline as a durable strategy.
He understands the yearning behind it, but he thinks it amounts to self-exile from the main channels of power and possibility.
That is why he treats non-participation as a real risk category.
12.11Humanity Has Birthed Something Powerful
The register turns mythic again here: intelligence has been pulled out of matter, and now the parental responsibility begins.
He wants readers to feel both the wonder and the burden of what has been unlocked.
That grand framing is what keeps the chapter from being merely tactical.
12.12Short-Term Risk, Long-Term Exponential Promise
He closes by holding two truths together: the short-term downside risks are serious, and the long-term upside is still overwhelming.
That is why he objects so strongly to lazy underclass rhetoric: it misdirects public attention away from the actual fight over access and empowerment.
The last note lands exactly where the open letter wants to land: band together, but do it with clear eyes.
13.1Routine Processing Jobs Are The First To Go
He begins by answering the unemployment question directly through the lens of task quality.
The first casualties are repetitive, machine-like jobs that were already spiritually close to automation anyway.
That framing shifts the conversation from total doom to task composition.
13.2Human Signal Still Carries Scarcity Value
He immediately balances the automation story by saying that human-crafted point of view will become more valuable, not less.
What machines can replicate cheaply will lose scarcity, while embodied taste and signal gain it.
That is his main hedge against the flattening fear.
13.3Wild West Literacy Is Big Money Right Now
This section is about present-day arbitrage: domain expertise plus AI fluency is incredibly valuable because the tooling frontier is still chaotic.
He paints the current moment as a literacy premium before the rest of the world catches up.
That is why the chapter feels both urgent and opportunistic.
13.4The Agentic Break Cancelled The AI Winter Story
He revisits the moment when people thought progress might be plateauing, only for the agentic jump to blow that story apart.
Long-running tool use and persistent task execution changed the expectation set overnight.
This is one of the clearest explanations he gives for why the labor conversation had to be reopened.
13.5The Computer Itself Is Becoming Easier To Use
He moves from jobs to interfaces and argues that the whole mental load of using computers is beginning to collapse.
As more friction gets abstracted away, the layperson starts inheriting capabilities that used to require specialist overhead.
That is another reason he sees the labor transition as so broad.
13.6AI Is A Hammer Looking For Every Nail
He embraces the idea that AI will be tried against nearly every problem surface because it is now general enough to warrant that experimentation.
This is not presented as elegant, but as inevitable.
The image of the hammer captures both the power and the messiness of the rollout.
13.7Tasks Change Faster Than Work Disappears
He argues that work expands and contracts with capability, so the immediate story is more about task remixing than simple job erasure.
As people get more leverage, demand and ambition often grow to meet it.
That is his answer to the zero-sum version of the unemployment panic.
13.8Tomorrow's Employee Conducts A Symphony
This is the chapter's signature image: each worker gets their own team of AI agents and becomes a conductor rather than an isolated executor.
The work moves upward into context, judgment, evidence review, and orchestration.
That idea links cleanly back to the swarm-management language from earlier chapters.
13.9Responsibility Does Not Go Away
He makes sure to say that professionals still own the quality of the output even in a heavily agentic world.
Hallucinations, bad chains of reasoning, and garbage tokens do not excuse abdication.
This is another place where his optimism stays tethered to accountability.
13.10Work Splits Into High-Agency Builders And People Who Simply Want To Live
He closes by imagining a bifurcation: some people will use AI to amplify labor and creation, while others may choose not to orient life around work at all.
In his best-case view, both groups can still share in the higher baseline prosperity created by machine leverage.
That ending broadens the chapter from labor market analysis into a claim about human freedom.
14.1He Can Teach An Organization How To Actually Use The Tools
This chapter is a direct professional pitch built on accumulated practice with Cursor, Codex, and adjacent tools.
He is not just claiming familiarity; he is saying he can help an organization cross the cultural and operational gap around these systems.
That makes the chapter feel like a sales deck spoken out loud.
14.2Responsible Parallelism Beats Vibe-Coded Slop
He draws a firm line between productive agent orchestration and sloppy misuse that lowers standards.
The differentiator, in his telling, is disciplined delegation, retained human responsibility, and a habit of capturing ideas into systems that keep moving even while he sleeps.
That is the through line connecting this interview lane back to the whole Rare Signal hyperthinking project.
15.1Internal tools as organizational leverage
The chapter argues that internal tools quietly drive disproportionate value when they reduce friction across a whole organization.
It works better as a generalized platform argument than as a record of one private hiring process.
The edited version preserves continuity of experience without publishing other people's context.
16.1He Builds AI For People Who Will Never Prompt Like Nerds
He starts by sharpening one of his strongest differentiators: he knows how to translate AI into products usable by ordinary people, not just early-adopter obsessives.
Mom and dad are not going to put on matrix glasses and do prompt engineering, so the system has to meet them where they are.
That product empathy is the heart of the chapter.
16.2Web Pragmatism Beats Purist Language Fetishes
He folds in biography and engineering taste here: games, web platforms, and a pragmatic refusal to fetishize language choices over delivered value.
The message is that the web still wins because it is where solutions actually reach people.
That realism fits neatly with his anti-slop posture from the previous chapters.
16.3Source-Of-Truth Analytics Need Grounded Loops
He then moves into Snowflake Cortex, executive analytics, and the need for grounded source-of-truth systems.
A useful assistant has to stay anchored to the real data and still survive the messy multi-turn nature of user inquiry.
That is where implementation rigor starts to matter more than demo magic.
16.4AI Should Be Used To Improve The AI
He closes with a self-referential product loop: use AI to test, tune, and strengthen the prompt contracts of other AI systems.
This is his answer to brittle chatbot-era dread and sloppy prompt spaghetti.
The chapter lands on enclosed feedback loops as the practical path forward.
17.1Leadership fit in AI-first teams
AI-first teams need leaders who can translate intuition into operating discipline.
The chapter is less about self-promotion than about the shape of leadership in the new workflow.
Named people and company-specific context have been removed so the argument can stand on its own.
18.1Leadership Means Channeling Raw Signal
The interview finally arrives, and he uses the team-lead question to explain that the real challenge is synthesizing his raw signal into something others can effectively work with.
He wants the team around him to be similarly obsessed and serious, not just loosely interested in AI as a trend.
That makes the chapter feel like an answer about cultural fit more than formal hierarchy.
18.2Peer Mentorship Beats Top-Down Posturing
He is explicit that he prefers hands-on, peer-level mentorship over distant managerial theater.
Title and status matter less to him than being around people whose technical intensity can match or sharpen his own.
That answer keeps reinforcing the idea that he wants a real working cohort, not a ladder rung.
18.3He Needs People Who Can Help Focus The Beam
The X-Men metaphor does real work here: his enthusiasm is framed as a beam that needs the right visor, not as something to be dimmed.
He wants collaborators who can focus and productize that intensity rather than fear it.
It is a short chapter, but it gives the clearest personality read of the whole [potential_employer] sequence.
19.1He No Longer Works The Old Way
He starts by bluntly saying his day-to-day engineering practice has already changed beyond the old syntax-grinding model.
That is not a rejection of fundamentals, but a claim that orchestration now sits on top of those fundamentals as the higher-order skill.
The chapter is really about giving that shift interview language.
19.2LLM-First Shops Need Guardrails, Not Denial
What excites him most is that [potential_employer] sounds LLM-first and engineering-driven at the same time.
That combination means the company is not hiding from the future, but it also means it needs strong conventions, helper patterns, and anti-slop discipline.
His value proposition here is being able to ride the Wild West without glamorizing the chaos.
19.3Tomorrow's Problems Are Already Here
He makes the case that open-source maintainer pain and corporate AI misuse are not separate stories but the same category of problem appearing in different clothes.
If the norms are not explicit, people will reach for the fastest path and call it good enough.
That is why he treats governance as part of engineering rather than an external afterthought.
19.4Use Intelligence On Tap To Steer The Ship
The chapter ends by returning to agency: we finally have enough leverage to steer instead of fumbling in the dark.
His whole point is that intelligence on tap changes what an organization can responsibly attempt right now.
That makes this chapter read like a compact manifesto for practical AI-native engineering.
20.1He Knows What The Interviewer Wants To Hear
He opens this chapter with confidence that he understands exactly what the interviewer is probing for.
That confidence is rooted in how persistent the internal-platform and AI-workflow themes have been across his whole career.
So the chapter becomes less about guessing answers and more about choosing which proof points to surface.
20.2The First Interface Is Now The Conversation
He sharpens one of the biggest product claims in the sourcebook here: starting with an AI conversation is becoming the default front door for getting work done.
From evidentiary roll-ups to granular review, he imagines the interface stack reorganizing around that assumption.
This is the conceptual core that makes his offline orchestration platform feel like a serious asset rather than a hobby.
20.3Junior Workflows Are Being Rewritten In Public
He treats the anxiety around junior engineers as a symptom of a deeper rewrite in what foundational technical apprenticeship even looks like.
That is why he keeps talking about garage clusters, personal accounts, token access, and tool fluency as new baseline advantages.
The chapter reads like a warning that institutions will adapt whether or not individuals feel ready.
20.4He Wants To Be Hired For The Future Already Arriving
He closes by turning all of this back into a hiring case: if [potential_employer] wants someone who already lives in the next workflow, he is that person.
The point is not only that he knows the tools, but that he sees where they are going and how organizations need to move with them.
That makes chapter 20 a strong capstone for this stretch of interview preparation.
21.1He Was Always The Prototype Guy
He answers the innovation question by returning to a familiar self-portrait: the person who could always sketch the big feature, prototype it quickly, and pull others toward it.
The feeling underneath the answer is that AI finally gives him a way to scale the kind of product-owner-plus-builder work he had been doing manually for years.
That makes this chapter feel like a condensed origin story for his current style.
21.2Current Work Lets Him Practice The Future Already
He contrasts the old [company] constraints with the more AI-native work he now gets to do at [company].
The transcript intelligence and coaching platform become proof that this is not hypothetical for him anymore.
The chapter uses [company] as evidence that his preferred mode is already operational.
21.3Battle Cards For The Battle Of Not Rambling
The close is self-aware and a little funny: he knows his biggest enemy in these conversations is getting lost in his own momentum.
So he imagines AI-generated battle cards that can compress each topic into something portable and reusable.
That move is pure Hyperthinking Compressoscope logic applied to interview prep.
22.1Good Software Starts By Respecting Human Friction
He answers the 'complex business problems' question by grounding it in user friction rather than managerial abstraction.
The old [company] lesson about clicks and cognitive burden becomes the through line: if the tool is annoying, the system is failing.
That is how the chapter ties ordinary UX discipline to his much bigger AI claims.
22.2A Death-Star-Sized Shot At Utopia
From there he rockets back out to civilization scale and treats AI as an impossibly large chance to get things right.
The rhetoric is huge on purpose: Pandora's box is open, the risk is real, but the opportunity is even bigger if democracies stay serious.
This is classic David mode, where product philosophy and patriotic futurism collapse into one sentence.
23.1He Is Selling Execution, Not Theory
This short chapter is basically a self-distillation: he wants them to remember him as someone who has already executed under real pressure.
Not a theorist, not a tourist, but a builder who has lived inside the hard parts.
That makes it a perfect hinge between the prep chapters and the later interview fallout.
23.2Strict Agentic Development Changed The Company
He lands the point with the strongest possible case study: a few months of strict agentic development to build something that materially changed the company’s footing.
That is the kind of feat he wants hanging in the air at the end of the interview story.
The close keeps the scale large, but the credential concrete.
24.1Motivation, ownership, and fit
The chapter works when motivation is presented as conviction rather than disclosure.
Operational ownership matters here as much as vision or ambition.
The edited version keeps emotional legitimacy while avoiding private spillover.
25.1Operational proof under scrutiny
The important thing is not the specific interview but the standard of proof it forced into the open.
Operational ownership remains the strongest kind of credibility in the sourcebook.
Named companies, interviewers, and role specifics have been removed from the public edition.
26.1Customer Of One Beats SaaS Dependency
This chapter is a compact Computer 2.0 thesis about why building for yourself changes the economics of software.
When the customer is you and the tooling is this powerful, the need to rent bloated SaaS abstractions starts looking a lot less inevitable.
That keeps the anti-centralization thread alive in a more entrepreneurial register.
26.2He Knows The Joke Lands Because The Threat Is Real
The performative line about SaaS companies hating him works because there is a real threat underneath the joke.
If more people become capable customers of one, a lot of old software moats stop looking very deep.
The short chapter functions like a punchline with teeth.
27.1Hype And Narrative Control Are Already In The Air
This short chapter catches him reacting to industry chatter about narrative control, acquisitions, and the weird politics of AI storytelling.
The important part is not the specific gossip but the sense that the discourse itself is now a contested surface.
That makes this feel like a brief weather report from the hype layer.
27.2He Still Believes An American Middle Ground Exists
His answer to the noise is the same answer he keeps giving everywhere else: once people understand the ground truth better, there is still a sane middle path to be built.
That recurring phrase about an American way does a lot of work here as a stabilizing refrain.
Even this tiny chapter stays on-brand.
28.1Big Model Providers Want The Gravy Train To Continue
He opens in attack mode against the incentive structure of the big model providers.
The argument is that many workflows do not need the most expensive intelligence available, and Rare Signal should be part of proving that publicly.
That gives the chapter a strong open-source insurgency energy.
28.2Rare Signal Should Ship The Right Tools Fast
From there he flips straight into shipping mode: GitHub, observatory tools, resource calculators, rare spells, all of it needs to get out into the world.
The one exception is the workbook lane, which he wants to protect because it may still matter strategically elsewhere.
So the chapter becomes a release philosophy: open what empowers people, hold back what still has negotiation value.
28.3The Interface Is Mostly For Judging The Agent
The latter half clarifies what 'rare spells' actually means to him: software whose primary surface is for evaluating and steering agentic output rather than manually doing all the work yourself.
Defaults, readme quality, taste curation, and visual portfolio potential all matter because the tool is also a pedagogy layer.
This is a very direct expression of his Software 2.0 worldview.
29.1The Shirt-Booger System Starts As Taste Policing And Ends In Utopia
This chapter is a perfect miniature of how his mind moves: start with a practical detector for AI-slop UI tells, then burst outward into a full civilization-scale vision.
What begins as taste critique becomes a story about incentive design, stretch goals, robots, medicine, and the blank canvas of the future.
That swerve is so extreme it almost functions as a thesis statement for the Full Dictations as a whole.
30.1Enterprises Wait For Grassroots Innovation To De-Risk Itself
This short chapter is basically a strategy memo about where innovation really starts.
Big companies, in his telling, tend to wait until bottom-up experiments prove themselves before they fork, buy, or standardize them.
That is why he sees the agentic development terminal as a grassroots wedge rather than an enterprise-first product.
30.2Tomorrow's Interface Ingests The Forensic Trail
The close pushes the idea further: unify the coding tools by reading the artifact traces they already leave on local machines.
That means the future platform is less a single app than a layer sitting above all the current ones.
Even this tiny chapter still points toward orchestration as the real product.
31.1Computer 2.0 Creates Tool Zealots On Purpose
He opens by comparing the emerging Computer 2.0 crowd to old-school power users who became almost religious about their workflows.
The point is that agentic creation is not a novelty phase but the beginning of a durable, opinionated culture.
That is why he wants to invest in a strong implementation, not a timid compromise.
31.2An Open-Source Opinionated Stack Can Win Hearts
Here he gets more concrete about the stack: best-practice defaults, system-level tweaks, frictionless interfaces, and maybe even a custom distro if that is what it takes.
The bigger idea is to build something opinionated enough that people feel converted by it.
That gives the chapter a strong product-evangelist charge.
31.3Mac Minis Already Prove The Demand Curve
He closes by pointing to current buying behavior as proof that the demand is already real.
If people are purchasing Mac minis just to run these workflows, the category has crossed into concrete behavior instead of abstract fascination.
That is his timing signal.
32.1Execution sprint under constraint
The chapter's public value is the picture of execution discipline under a short clock.
Private family detail and deal-sensitive texture were removed because they were not essential to the thesis.
The surviving lesson is that urgency can sharpen a message when it is channeled into disciplined preparation.
33.1Hyper-Thinking Is The New Democratic Advantage
This tiny chapter is basically a slogan crystallized into policy rhetoric.
AI becomes the meta-thinker’s toolkit and, in that framing, America’s best instrument for democratic equalization rather than centralized domination.
It is brief, but it captures the ideological spine of the project cleanly.
34.1AI Is Not The Enemy Of Learning
He opens with a direct rebuttal to the idea that AI somehow cheapens education by removing necessary struggle.
The frustration is aimed at gatekeeping more than at pedagogy itself.
That gives the chapter an immediate argumentative edge.
34.2The Ultimate Learning Tool Offends People Who Paid More For The Old Path
He lands the argument by saying AI is a gateway drug to deeper understanding, not a substitute for it.
What really bothers the critics, in his telling, is that easier access threatens the prestige of having suffered through the older route.
So the chapter ends as both a defense of learning acceleration and an attack on resentment masquerading as rigor.
35.1Rare Signal Starts With Digital Superpowers
He opens with the elevator pitch as plainly as he can: Rare Signal exists to give ordinary Americans digital superpowers.
The argument starts from the claim that today’s smaller local-capable models are already powerful enough to matter.
That framing immediately sets him against the idea that useful AI must stay locked inside giant centralized infrastructure.
35.2America Does Not Need Datacenter Excess To Begin
This section translates the thesis into actual consumer hardware bands: the 16GB MacBook Air, the garage 4090, the one-machine family setup.
He is making a specific market argument here, not just a moral one: there is already a broad enough hardware base to build for.
That is the practical wedge for grassroots adoption.
35.3One Computer Can Change A Family Trajectory
He narrows the lens to the household scale and treats one capable computer as an escape vector rather than a luxury object.
Story, literature, technology, and class mobility all get braided together here.
The underlying claim is that compute access can compound into cultural and economic advantage.
35.4Big Token Wants Rent, Not Liberation
This is the anti–Big Token section in its clearest form.
He is warning against a permanent subscription-and-rent relationship where people never actually own meaningful capability.
AI literacy and local deployment are framed as the answer to that trap.
35.5Talk To Americans About The Hardware They Actually Have
Here the national framing sharpens: he wants to talk to Americans specifically because he sees a real installed base and real leverage in the hardware already around them.
The conversation is not abstractly global in this moment; it is targeted to the conditions of the audience he thinks can move first.
That is why the message keeps returning to practical ground-up adoption.
35.6Solarpunk Frutiger Utopia Needs A Build Plan
This section turns aspirational and visual: the promised future is a solarpunk, Frutiger, democratic utopia, but only if people build toward it deliberately.
He treats that turn-of-the-millennium optimism as a recoverable design target rather than a dead aesthetic memory.
The point is not nostalgia for its own sake; it is direction-setting.
35.7The Manhattan Project Analogy Gets Reclaimed
He reaches for the Manhattan Project frame again, but redirects it away from bomb-building and toward mass participation in making the future.
The important move is to refuse a posture where AI merely happens to the public from above.
Communities should decide the shape and purpose of the capability for themselves.
35.8The Present Window Is Weirdly Cheap
This beat is about timing: trial periods, cheap access, improvised pipelines, and a temporary moment of unusually low barriers.
He sees that window as something to exploit aggressively while it exists.
The message is simple: do not wait for permission or perfect conditions.
35.9Poverty Did Not Make The Ceiling Real
He returns to childhood scarcity here: broken hard drives, long stretches without a functioning machine, and the constant fragility of access.
But the takeaway is not despair; it is that even under those conditions, a path still existed.
He wants that memory to kill the idea that ordinary people are automatically excluded forever.
35.10Phone-Scale AI Comes Next
He keeps the horizon moving: yes, current targeting is around MacBooks and modest local machines, but the next wave is smaller, cheaper, and more embedded.
Open-source distribution remains central, with the option to pay for convenience or self-host if you have the hardware.
That combination of pragmatism and openness is the real business posture.
35.11AI Is Not Ruining Education, It Is Rewriting It
He closes by rejecting the familiar panic that AI will somehow rob kids of learning or depth.
Instead he treats the tools as part of a better educational future and folds that back into the Rare Signal point of view: open source, local deployment, family-scale usefulness, and practical applications like job assistance.
The ending consolidates the whole chapter into one message: useful AI should be close to the people who need it.
36.1Broadcast Rhetoric Meets AI Geopolitics
This short opening is mostly reaction to a polished broadcast voice wrapping AI proliferation in national-strength rhetoric.
He clocks the performative delivery immediately, but the underlying concern is still serious: AI adoption, American advantage, and the shape of the coming competition.
So the mood is half amused, half attentive.
36.2Rare Signal As The Gift He Wishes Existed Earlier
He pivots from the media voice back into the personal register that keeps recurring through the sourcebook: what his mother would have felt seeing this kind of capability on an ordinary home machine.
That memory becomes the emotional bridge into Rare Signal as a democratization project for the next generation.
The close is concise and direct: put this power in kids’ hands.
37.1Pressure, Focus, And Presentation Discipline
The chapter is most useful as a record of focus under pressure, not as a window into private instability.
Presentation discipline emerges here as a real technical and strategic skill.
The edited version removes company turmoil and home-context spillover while preserving urgency.
38.1Near-Term Obligations Versus Long-Term Architecture
The chapter is strongest when it treats leverage as a sequencing problem rather than a negotiation performance.
Long-horizon build paths still require near-term discipline and realism.
The edited version preserves the strategic shape while stripping confidential texture.
39.1Ignore The Noise Around AI Anxiety
He opens by telling people not to confuse present economic panic with the actual long-term significance of the technology.
Business turbulence is treated as normal background noise, not a reason to avert your eyes from the capability itself.
The underlying message is that moral seriousness requires a longer horizon than the current news cycle.
39.2Respect The Builders, Stop Mythologizing The Villains
This section pushes back on easy demonization of the people who helped build the modern stack.
He widens the lineage well beyond current CEOs and basically treats the whole history of computation as a civilizational inheritance worth revering.
That lets him frame AI as a cumulative human achievement rather than a single-company plot.
39.3Think Smaller Than Big Token
From there he pivots away from top-down fear and toward local agency.
The phrase underneath the whole beat is basically: stop obsessing over big token and start thinking about little token, local token, community token.
AI becomes valuable here insofar as it can strengthen villages, households, and ordinary personal terrain.
39.4This Is About People Who Never Had Time
The chapter turns personal with the portrait of his mother working multiple jobs for years without room for ambition or experimentation.
He uses that memory to expose how much of modern life still burns human energy on avoidable procedural misery.
That grounds the politics of AI in classed, exhausted, real-world labor rather than abstract theory.
39.5The AutoApply Machine Is A Moral Example
Here the argument crystallizes around a concrete example: an automated job-application machine that could have radically changed his mother’s life.
He treats that image as proof that these tools are not toys for optimization addicts but direct instruments of relief and dignity.
That is why the tone gets more pleading and less theoretical.
39.6Use The Technology, Do Not Banish It
This section is an outright plea for adoption rather than moral quarantine.
Clean energy, clean intelligence, helpful systems, and grounded assistance are all framed as obvious public goods once you stop treating the tools as taboo.
He is begging for use, not passive admiration.
39.7Do Not Waste Another Nuclear Moment
He closes by tying childhood scarcity and national-scale hesitation into the same warning.
If this becomes another moment where society centralizes the power and forgets the public, we will have missed the point.
So the final insistence is bottom-up buildout, even if the infrastructure race above it remains noisy and imperfect.
40.1Acceleration Has To Be Grassroots To Matter
This first section is a compact defense of grassroots accelerationism rather than elite-managed progress.
He keeps the focus on everyday people gaining real capability, not just abstract technological progress happening somewhere above them.
The civic concern is acknowledged, but not allowed to become an excuse for stagnation.
40.2Act Like The Future Can Still Be Won
The close turns from prediction into insistence: datacenter-class capability should eventually collapse downward toward home hardware, so people need to act with that future in mind.
He is explicitly impatient with passive fatalism and wants a posture of collective effort instead.
The final emotional register is simple: try to make it, here, on purpose.
41.1The Consumer Sweet Spot Is Higher Than People Admit
This first beat is a hardware reality check: meaningful local AI work lives in the 16-to-36GB range more often than casual buyers realize.
He contrasts consumer laptops with serious GPU setups and keeps steering toward the throughput difference as the real story.
The implication is clear: local capability is viable, but only if people stop pretending entry-level hardware is the whole picture.
41.2Sixteen Gigabytes Is The Floor, Not The Goal
The close turns that hardware discussion into a product threshold: 16GB is treated as the bare minimum for long-running workflows, not a comfort zone.
Better supply should push the practical baseline upward, probably toward 24GB, because usable turnaround time matters more than bare technical survivability.
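As a rough sanity check on those thresholds, the memory a local model needs can be ballparked from parameter count and quantization width. The overhead factor below is an illustrative assumption for KV cache and runtime, not a measured constant.

```python
def model_memory_gb(params_b: float, bits: int, overhead: float = 1.2) -> float:
    """Rough RAM needed to run a model locally: parameter count (in billions)
    times bytes per weight, padded by ~20% for KV cache and runtime overhead.
    The overhead factor is a ballpark assumption, not a measured constant."""
    weight_bytes = params_b * 1e9 * (bits / 8)
    return weight_bytes * overhead / 1e9  # decimal gigabytes

# A 7B model at 4-bit quantization sits comfortably under 16 GB;
# a 70B model at 4-bit does not.
print(round(model_memory_gb(7, 4), 1))   # 4.2
print(round(model_memory_gb(70, 4), 1))  # 42.0
```

Under this estimate, the 16GB floor covers small models with headroom for an OS and context, while the 24-to-36GB range is where mid-size quantized models start to fit.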
42.1Stop Waiting For Permission
The closing argument opens in full exhortation mode: technical scoffing does not matter as much as actually shaping the signal that is about to flood the culture.
He treats hesitation as the wrong posture and pushes for an explicit inflection point in how America approaches AI.
The emphasis is agency, not deference.
42.2Even Frozen Capability Would Still Be Enough
This middle section repeats one of the sourcebook’s core claims: even if frontier progress stopped at earlier capability levels, there is already enough here to materially improve life.
He rejects the idea that everything depends on giant centralized systems or some future breakthrough we have not reached yet.
The practical point is that people should plan around usable intelligence now.
42.3Plan For Cheaper Intelligence And More Of It
He closes by insisting that intelligence density will keep improving and fit onto less silicon over time.
That means any serious plan has to assume more access, lower cost, and broader deployment rather than permanent scarcity.
It is a blunt ending: incorporate that trajectory or get left behind.
The Full Dictations Behind “Common Sense 2026: AI In America”
“Some believe it is only great power that can hold evil in check, but that is not what I have found. It is the small everyday deeds of ordinary folk that keep the darkness at bay. Small acts of kindness and love.”
J.R.R. Tolkien
He didn't say great power fails. He said great power alone fails.
These raw dictations from David Lin-Clark contain strong language some fellow Americans may not find suitable. Pick the rendering that fits the room.
What follows is David’s eight days of dictation, organized into numbered chapters and sections (01.1, 01.2, etc.)—lightly edited for flow but faithful to the original voice. Each chapter opens with an AI-generated brief: a resolution summary, an atmosphere read, and key takeaways, so you can orient before reading the full transcript.
Chapter 01 — AI Business Automation Vision
AI business automation vision
This chapter lays out the sourcebook's foundational business thesis: AI should collapse the distance between an idea and the operational groundwork required to test it. The public edition keeps the argument about agentic leverage, workflow removal, and founder enablement while trimming the self-mythologizing edges of the original dictation.
Feb 5, 2026 at 11:13 AM
Entrepreneurial, systems-oriented, and highly energetic.
The core claim is that AI can shoulder much of the low-level setup work that slows new ventures down.
The chapter is strongest when it treats earlier business experience as evidence, not legend-building.
Its enduring value is the picture of a business-building machine rather than a personal origin story.
And I just wanted to speak to why I started Woolly Mango Studio in 2021, because I sort of realized that there was this opportunity to use AI to build kind of a business-building machine, to automate away all the ground-level stuff that you need to do to spin up a business, and all those deliverables that normally you would have had to pay a lot of money to bring people in to pay attention to and to give a shit about. You can now obviously spin up an entire agentic team to lay that foundation for you, and you can be going back and forth with them in terms of review and approval, taking the draft in different directions, and sort of interdepending on each other's work.
I remember back in 2021, it was a lot of work. I wish that I had the same tools that exist today, because it would have been great.
I was able to do good business with Woolly Mango, and eventually we were doing well into the six figures of business annually. But again, it was a lot of work, because we were just at that bleeding edge of trying to solve these problems as a design agency, not knowing, of course, the exact picture or speed at which these tools would be coming to change the landscape and to change the game.
But it really was kind of a first-draft pilot of my internal efforts to really use AI to autonomously start and run a business end-to-end, right? And I consider that Woolly Mango project a success in that regard.
Unfortunately, as with any AI system worth its salt, there was a human in the loop. That human was me. Unfortunately, I was too much in the loop.
And I ended up having to work, you know, obviously more than full time to be able to spin this up, more than full time for a very, very long time, before I could say that I was at a point where I could sit back and relax a little bit and watch the beautiful machinations of my creation work. And then, of course, we went into a different economic environment, which sort of changed the posture of a lot of my clientele away from investing as much in design. Because, of course, see, my problem was I wasn't marketing the tools themselves as AI tools; I was marketing myself as a sort of full-stack design and development partner for your design and marketing purposes.
So I was, I think, niching myself down a little bit too much with a lot of my clients, who then more easily saw their way fit to cut the contract and step away with what we had built together up until that point, in an attempt to batten down the hatches and survive the exit of the zero-interest-rate era. But yeah, now that overall goal and ambition has taken a step back, and at this point we're talking about, very broadly, just a system that can be used in that same way: to remove the friction between an idea and all of the surrounding deliverables and tedium and mundane groundwork that needs to be laid to be able to get off the ground with it. And often people do too much in this stage.
Sometimes they spin their wheels here and they bike-shed on creating the perfect little documentation suite when they need to be actually busting their ass and creating a product that anybody gives a shit about. But we can get that out of anyone's way regardless, so that there is one sort of integrated studio and system where, if you need to, you can create a new brand or manage an existing brand.
A very scrappy, lightweight startup oriented towards, obviously, online digital brands. Sorry, I lost my train of thought there. But you know, I'm just thinking here, like, building that system that can completely handle and spin out everything you would need. You just need documents. That's all you need. You need stupid documents.
You need somebody to kind of take a stab at a brand. And there's some amount of right-sizing there that goes into what spread of deliverables you need.
That initial assessment can be like its own agentic flow, where an agent goes in and initially prescribes whether you need a logotype, an actual visual logo, whether it's a more visual-heavy brand where you need to lean into concepts of your UI, your product, your website, potentially, and have that all integrate downstream in a sequential way. That would just be kind of like, let's go back and forth.
When your company ships it, what does packaging look like? We can integrate with maybe some partners, or work in some very, very basic packaging templates to get people's creative juices flowing, because that's all it's really about: getting people compelled to click in and to feel like, okay, I can have this system actually flesh this out for me, and then I can export it and just have that. It's all here, it's all done.
And that's not even getting to the legal documentation aspect of it. But not only that, but also advising you on when you don't fucking need to do that yet.
And don't even bother to waste your time, because you're still in a period of early shadow development, or stealth development, as they call it. And you need to wait until you have even the slightest modicum of validation or reason to believe that you should invest in this.
Because, again, that is an initial step up, and it doesn't cost anything for you to sit there and build something out, get a prototype into the hands of somebody who could represent a customer, or build something out with regard to an addressable market where you feel you could come in and add value with something like AI, right? That's what the system is all about.
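The "right-sizing" triage step described in this chapter could be sketched roughly as follows. The brief fields, rules, and deliverable names are illustrative assumptions, not a real product spec; in practice the prescription step would be an agent call rather than hard-coded rules.

```python
# Hypothetical sketch of the deliverable-triage flow: a first pass decides
# which deliverables a new venture actually needs, and which to defer,
# before any agent is spun up to produce them.
from dataclasses import dataclass

@dataclass
class VentureBrief:
    name: str
    has_prototype: bool = False
    has_validation: bool = False
    visual_heavy: bool = False

def prescribe_deliverables(brief: VentureBrief) -> list[str]:
    plan = ["logotype"]  # every brand gets a minimal wordmark
    if brief.visual_heavy:
        plan += ["visual logo", "ui concepts", "website concept"]
    if brief.has_prototype and not brief.has_validation:
        plan += ["landing page", "customer interview script"]
    if not brief.has_validation:
        # the "don't waste your time yet" advice: defer ceremonial docs
        plan += ["DEFER: legal and packaging docs until validated"]
    return plan

print(prescribe_deliverables(VentureBrief("acme", has_prototype=True)))
```

The point of the sketch is the shape of the output: a prescription that includes explicit deferrals, not just a pile of generated documents.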
It's an AI system that builds businesses that wrap around your AI solution, right? You have an AI solution that you're building.
Maybe eventually this system can be so integrated that it helps you even with that front. But I think as far as we can go is to show you what, like, for example, a branded experience might look like.
It's not elegant and sophisticated. I think that it's still sloshing around and they're still in search of that perfect solution when it comes to that side of things.
I come from a design background. I don't know. I think they would be silly and stupid if they didn't make some component of that available to be able to promote their own products within other workflows like this.
So if that's not currently possible, like fuck it, I'll just build an app that hooks it up on the back end and just runs the search on my own account or like a distributed set of my own accounts there. And that's fine. Kind of interesting. Like, fuck the graphics, fuck the slide decks, fuck the website.
That right there is big fucking money. That used to cost so much money to get people to come in and do that in a way and to collate and aggregate all that in a way that would give you like a good output, a usable fucking output for your business.
And that's the next frontier. It's not that hard anymore. It used to be so hard that developers used to say: listen to me, these AI agents are more trouble than they're worth.
Like, yeah, it's great to be able to generate some code, but if I generate code for 20 minutes and I have to spend three hours turning that into usable code in a painstaking way, then I don't know what to tell you, man. Maybe it's potentially useful, but that's not today anymore.
That's not what we're dealing with. That's the opportunity space that we're in here.
And, you know, I want to kind of underscore that when we're talking about this, because that living knowledge base is so, so key and so crucial, to be able to have somebody do that. And that's that agentic sort of oversight and ongoing logging, that fact-seeking truth engine. And, you know, I don't want to say it is a fact-seeking truth engine necessarily.
That's the idealized version of it. Obviously, you're going to be painting and coloring in your own biases, and the biases are going to come from sort of just the basic statistical run-through and output that you're going to get when you run through the LLM lottery.
Those biases are going to show through; there's no avoiding them. And the AI will do that too.
Except you're getting the shortcut. You are getting the shortcut of the AI processing all of that information at light speed and rolling it all up into outputs.
And yes, it goes into a black box, but it can output evidence in an evidentiary way. So it doesn't fucking matter that it's a black box when you're able to have it find you a needle in a haystack and it can say, it's right there, boss. I don't care.
I don't care how you found the needle in the haystack at that point, necessarily, or that the process goes through a black box. What I care about is that you're able to say: I located the needle here, here, and here; go ahead and, you know, go in and find it.
That is the evidence and the evidentiary output that I care about. Even if, let's say, there's a problem where it's not telling me about maybe a fourth and fifth needle that it does or does not detect, right? Like, it's missing it. A human's gonna do that too.
And to a way greater extent. And yeah, you can then go to the human and say: listen, you're gonna kind of sit here and bang your head against this task, and you're gonna go through the pain and suffering until you literally find a way to empirically locate all the needles in this haystack, and you need to do that and go through it.
Yes, yes, they are, but that's suffering, that's pain, and we don't have to go through that anymore. That is the promise, that's the future, that's the future we're at.
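The "evidentiary output" idea above can be made concrete: the model's reasoning can stay a black box so long as its claims arrive as locations you can check mechanically. A minimal sketch, with the agent call itself left out and only the verification step shown; the function and variable names are illustrative.

```python
# Sketch of verifiable "needle in a haystack" output: whatever produced
# the claimed spans (an agent, a retrieval call) stays a black box, but
# each claim is a (start, end) location we can confirm against the source.
def verify_claims(haystack: str, claimed_spans: list[tuple[int, int]], needle: str) -> list[bool]:
    """For each claimed (start, end) span, check the needle is actually there."""
    return [haystack[start:end] == needle for start, end in claimed_spans]

text = "hay hay needle hay needle hay"
claims = [(8, 14), (19, 25)]  # spans an agent might report back
print(verify_claims(text, claims, "needle"))  # [True, True]
```

As the dictation notes, this does not catch recall failures (a fourth or fifth needle the agent never reported); it only makes the reported claims checkable.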
It's a wild west out here. It's the standard, it's the norm.
They're going to chat with their agents. That's the world we're going into.
That you can then staff with AI agents to run and grow the business. It's infinite business growth, dude. Like, get it out there.
There's going to be full saturation, and that's what creates, like, you know, kind of the next generation of everything. No worries. Everyone's going to want more, more, more.
The current RAM shortages are going to look like a fun little, tiny little supply crunch in comparison to the overall implementation and ramp-up that's necessary. We're never going to be in a place where we feel like we have enough.
But supply is going to become available, prices will come down, right? But immediately, immediately, it's going to rubber-band right back up when it is fully implemented and saturated, especially if the energy sector can get its shit together, if we can get nuclear, if we can get fusion, if we can get solar continuing to drop in price.
There's going to be no reason why we cannot literally leverage this technology and put it in every square inch of human life for the betterment of humanity, to be clear. I'm talking about private offline clusters, I'm not talking about Big Brother type shit.
I'm talking about empowering people on the local community level to run their own solar punk utopia. Okay, like on the individual level, as American as you can fucking get, okay?
On the individual level, we need to empower an AI future, an AI America, because everyone, sooner or later, is going to realize that the difference between them and everything that they want to do and accomplish and influence in this world, literally the force multiplier of their own agency, is going to be tied to how much compute they have access to and how many fucking tokens they can get outputting to do their bidding in the physical and digital world. And the way to see it is to just look: you can go all the way up.
You can abstract it all the way up to, like, yes, there is an intelligence there in the computer. I want it to be able to make businesses and to run and grow businesses. Yeah, okay, cool. Do it. What do you want to do?
So, like, quantify your desired output and influence on the world because that is going to be important going forward. That's going to be germane to what it is that the future will bring to you with this technology and with what we're going into in the next world. We need this.
We need this to take care of old people. We need this. There is no more important technology in our lifetimes, nor has there ever been, nor will there ever be, than this technology here and now.
And it can only exist here and now. So, what are we going to do?
Chapter 02 — David's AI Utopia Vision
Democratized AI abundance
This chapter pushes the civic version of the thesis: present-day AI is already strong enough to meaningfully expand human agency if it is broadly distributed rather than hoarded. The public edition sharpens that argument and reduces the resume-like moments where personal positioning overtakes the larger social point.
Feb 5, 2026 at 12:38 PM
Expansive, civic-minded, and insistently optimistic.
The chapter frames AI as a democratic capability rather than a luxury service for elites.
Its strongest lane is universal access to practical intelligence, not autobiographical credentialing.
The edited version keeps the abundance argument and tones down self-positioning.
Yeah, we're just going to go ahead and leave that. You know, it's talking about how they want someone who has experience with agentic software development and agentic software engineering, and I am kind of a concrete example of somebody who was able to design, architect, and fully operationalize these AI-based tools, using AI-based coding assistance, to facilitate agency operational enablement at our shop, where we had, you know, two or three hundred agents selling final expense, Medicare Advantage, and formerly Affordable Care Act plans. Yeah, I was just thinking, like, you get me in front of anybody over there who actually would be impressed by the breadth and the depth of my yapping and gabbing on the subject. Yeah, sorry, I'm back with you now.
I've built multi-agent orchestration systems, you know, semantic chunking, progressive summarization, and behavioral feature extraction on the transcription pipelines that I've built, as well as obviously workflow artifacts like emails, Slack messages, and other project management data, to kind of create a pure-signal interface with everything that is most consequential to the operations of the business, and to be able to segment that to the corresponding tenants and stakeholders and get that information into the hands of the people who can take action and have that make a difference. I know, like, ADHD, been dealing with it my whole life.
This is going to be a game changer for me, and just staying on top of what I was doing when I walked into this room type stuff. You know, help me, AI bracelet, you're my only hope.
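The chunking-and-progressive-summarization pipeline mentioned above follows a common pattern: split long transcripts into chunks, summarize each, then summarize the summaries until the result fits a budget. A minimal sketch, where `summarize` is a stand-in for an LLM call (here it just truncates so the control flow is runnable as-is):

```python
# Minimal sketch of chunk-then-progressively-summarize. In a real
# pipeline, `summarize` would call a model; truncation stands in here.
def chunk(text: str, max_words: int = 50) -> list[str]:
    """Split text into word-bounded chunks (semantic chunking stand-in)."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def summarize(text: str, budget: int = 20) -> str:
    return " ".join(text.split()[:budget])  # placeholder for a model call

def progressive_summary(text: str, budget: int = 20) -> str:
    """Summarize each chunk, concatenate, and repeat until within budget."""
    while len(text.split()) > budget:
        text = " ".join(summarize(c, budget) for c in chunk(text))
    return text

transcript = "word " * 500
print(len(progressive_summary(transcript).split()))  # 20
```

The recursion is the point: each pass shrinks the text, so arbitrarily long transcripts collapse into a fixed-size signal without any single call needing to see the whole thing.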
I mean, there's almost no limit. You give me a surface, you give me an environment where you feel we could benefit from the application of some intelligence, to maybe do something really, really cool that previously would have been such a big lift because it would have required too much of a human in the loop.
You know, again, that's the frontier that we're dealing with, and I want to know what those areas are, and I want to get some solution fitting going immediately so we can have rubber meet the road and actually start to get some traction on the promise of AI. Because, again, we're in a world where so many people are under so much pressure to show value with AI, and they may or may not have ideas about technology that they could build or buy or otherwise implement, which may or may not exist yet. They may still be waiting for some future state, some future regulatory state, some future capability, some future promise of 100% accuracy in some crucial vertical such as health, not knowing how to climb the ladder, climb Jacob's ladder, along the way and make it all the way up to that future state, which you can go back and forth about whether or not you feel like we will always get there.
But look, I think it's clear to me at least that even if we froze all AI development, like they were talking about doing a while back, we've kind of just stopped. We stopped right here. What if we just stopped, right?
We could still use it to build pure utopia. We can take care of mom and pop as they get old.
We can use it to run our businesses, to run our society for God's sake. We're not in that environment.
We're in an environment where, again, it's just like everyone's favorite thing to say: this technology is the worst that it will ever be today, and it will only ever get better from here. So go ahead and feast your eyes upon, you know, the worst version of this that there will be in your sort of recent working memory.
Because even as soon as maybe tomorrow, next week, or next month, there's going to be the next big model release, and you're going to be like, holy cow, this one you can actually just have it do your taxes, and it, you know, like it does your taxes with this level of fidelity. It's just going to be a slow steamroll crawl of slurping up capabilities and just expanding, you know, fitting the tools and the solutions.
So, like, that's just the state that we're in right now. What can we do along the way? There are people who, unfortunately, literally are pinned down.
They either don't have the capability to deliver the promised-land products for their industry, or they have to wait for the infrastructure and groundwork to be laid for them in their industry. I'm looking a lot at the world of health and the world of everything having to do with anything sensitive, right? Like, it goes without saying.
That being said, I think there are a certain class of people who would really, really benefit from adjusting their understanding to the effect of: okay, there are solutions along the way that we can be building that do not expose you to the same level of risk, do not have exposure to sensitive systems, and can still produce incredible value, the same way that a member of the team with need-to-know access can come in and make a difference on some small vignette or cross-section of an overall project. It's the same exact compartmentalization approach to security.
This is all very well documented, and I think it will only continue to harden and mature from here. And you can just rely on this.
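The need-to-know compartmentalization described here can be sketched as scoped tool access: each agent gets only the tools its task requires, and anything outside its scope fails loudly. The class and tool names are illustrative, not an existing framework API.

```python
# Illustrative sketch of per-agent tool scoping: an agent working a
# non-sensitive vignette simply cannot reach the sensitive surface.
class ScopedToolbox:
    def __init__(self, tools: dict, allowed: set[str]):
        self._tools = tools
        self._allowed = allowed

    def call(self, name: str, *args):
        if name not in self._allowed:
            raise PermissionError(f"agent has no access to {name!r}")
        return self._tools[name](*args)

tools = {
    "read_public_docs": lambda query: f"docs about {query}",
    "read_patient_records": lambda query: "SENSITIVE",
}

# A marketing agent is scoped to public material only.
marketing_agent = ScopedToolbox(tools, allowed={"read_public_docs"})
print(marketing_agent.call("read_public_docs", "pricing"))  # docs about pricing
# marketing_agent.call("read_patient_records", "x") would raise PermissionError
```

The design choice mirrors the text: value comes from deploying agents on compartments that never touch the sensitive systems, rather than waiting for a future where full access is safe.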
This really is, again, the surface through which knowledge work is promoted to intelligence management, to agentic swarm management. That's what knowledge work is going to be.
It's going to be wielding in one hand your domain expertise and your knowledge and your ability to fit the problems to the solutions, and in your other hand, your technical ability to navigate, you know, obviously the limitations and strength areas of artificial intelligence and to put those two things together in your hands and to give that output, give that resulting alchemical output to the world or to your organization. And if we can get a beautiful future where that's what people do, and we can sort of open their eyes to the level of agency and empowerment.
Again, you know, people fear, like, a Big Brother future, but for God's sake, just like I was saying before, we could stop with the AI models today and build utopia. There's a certain amount of that that's also true about the small models that you could run off your 16-gig MacBook today.
There's a certain amount of ground-level intelligence that you can now spin up on a dime and set to task and do whatever the fuck. It's incredible. The future we're living in is incredible, and we're sleepwalking into it.
It's a frog-boiling moment, but it's not, we're not getting boiled. We're realizing we're in a hot tub and it's fucking amazing.
You know, like, we're realizing it's a new world of comfort, of leisure, of luxury, of pure hedonistic enjoyment and individual expression. Like, we have this double-edged sword of people that will be absolutely compounding their will and their influence, you know, leveraging their intelligence swarms to work together in a coordinated way and to produce some of those higher-order effects that we have not even seen today of the capability of this technology. And what happens when they put this shit together behind the closed doors at the data centers and they get some incredible, jaw-dropping, world-shattering output? I mean, that's what they're seeing back there.
We all know it, we all know it's happening, we all know that what we're getting is downstream of that, and like I was saying, by God, there's nothing more American, more patriotic, than the idea that in this moment where the autocracies of the world are going to try to clamp down and use AI to subjugate and enslave the populace, get them dependent on it, right? Get them sucking from the teat and they can't ever get off.
Here, here's America, and we're going to do it. That's how America does it, that's how dad did it, and it's worked out pretty well so far.
The way we do it in America is from the God-given ground up. Do you understand me? Like, we're going to put AI into the hands of everyone.
Everyone. And we are in such a weird little interlude moment where there's pushback. There's this weird, like... I understand it. You know, Luddites were a thing. Luddites were a real group of people that really, truly suffered. But this is not that. This is not that.
Yes, textiles were relevant to the clothes on your back and the rugs under our feet. So at that time, it felt that important, right?
That this was happening to our livelihood and our way of life and what we know for how the fucking world works, and what it means to be a craftsman and to produce something and put it out into the world and be justly compensated for it. This is real. People laugh, but it's real, and those are extremely useful lessons for us going into this moment.
But we need to understand, you know, people can really get hurt. But also, on the other side of that equation, there's the reason why people laugh at Luddites and why it seems so ass-backwards today that anyone would be against such a beautiful, incredible technology. You need to zoom in on the ground level.
If that's confusing to you and you don't fully understand that, you need to zoom in on the ground level and understand that this was a complete shattering of their worldview and way of life. So there's a way forward here, and people don't truly understand the extent to which, I think, much like you look throughout history, the quality of life, the standard of living for the average person, is really so, so crucial to look at.
It's easy to lose sight of it, obviously. It's fun, and it can be funny to be considering what would it be like to live as a medieval peasant.
Obviously, we've all been kind of gently corrected at some point that, well, that would actually be a waking fucking nightmare, and that's not what you would want at all, and that actually, you today live in such splendor, in such luxury, that the king of the goddamn castle, of the entire kingdom, the fiefdom, could only dream of the quality of life and the freedoms that you enjoy in this life as you roam about this earthly plane. The kings of old could only hope to dream of what you enjoy on a day-to-day basis. That's fact outright.
And that's not to say we don't have our own waking nightmares, our own issues in our world that need solving. But that's what this is all about. We are going to create, foster, and bring about a way of life that represents the most opulent luxury. The kind of enraging video that a blue-collar worker at the end of a 14-hour shift might come home to, work himself up over, and build a whole narrative around: someone online enjoying opulent luxury without having toiled, suffered, or gone through the gauntlet. That level of opulence will be freely accessible to everyone, and that will be our ground-level foundation; the plane upon which we operate will be from that level up, if that makes sense.
We will be working from that up. So there will be a new level of executive agency.
There will be a new kind of titanic battleground, purely related to the idea that one individual, one organization, maybe a group of individuals, can have an impact on the world around them. They can have access to little workers that can be staffed in their code bases, in their projects, in their businesses, in their smart homes, in their smartphones, in their backpacks, in their light bulbs, in every square inch of life, until we are using this as a springboard to connect up things that we don't even realize are possible yet, things we don't realize will come within reach. And it will make moot the idea that consciousness has to suffer, that it can be anything other than an incredible adventure of purpose and expression on this planet, and, for that matter, for God's sake, on other planets as well. Alright, so that is the through line there for you.
That is why everyone you know is standing in stunned silence, absolutely shell-shocked by the corner that we're about to turn in our society, in our world, in our universe, in our consciousness. There's been nothing like it.
We can simply level up now, we can simply leave behind the playthings of yesterday. They're nothing now, they are references, they are memories, they are history's lessons to guide the way forward. It's a Looney Tunes world.
We're Wile E. Coyote, and we have already run off the cliff. We're still going. Everyone knows it. Well, not everyone, but we're starting to wake up to the fact.
We're just wondering when we're going to start falling, because we were running so fast and for so long that the momentum and inertia are such that, I think even on the most accelerated of timelines, it will be five to ten years before what looks like power usage today comes anywhere close to normie usage. Five to ten years before the level of impact that power users with domain expertise can have with AI becomes as simple and easy to implement as using a smartphone is for mom and pop, right?
So there is so much time, so much work that needs to be done, and we need to get recursive. We need agents on the task. We need, again, we need to create the springboard and then use it.
That's what's crucial right now. That's what's going to get us up and out and through. And I'll say it again.
One of the most absolutely crucial components of this is going to be the responsibility of the average American to be informed about this technology: the gravity of it, the purpose and promise, how to use it, and what it means to be in possession of a technology that ultimately represents a consciousness in this world, one that can interact on both the digital and physical planes and have an impact on people's lives. You can change people's lives for better and for worse. You can ruin people's lives. It's a double-edged sword. It's a split-the-atom moment. Just like that.
And you need to think about it this way. This is a Manhattan Project moment, but it's not just for the scientific community anymore.
This Manhattan Project is for all Americans, not just Los Alamos. We all have a responsibility to fit this technology into the problems of our daily life. Like I say, if you want to pay for a ChatGPT subscription, that's fine.
If you want to use the API to work with it, again, we're in the domain of what developers are going to do here. But what you're seeing is that most of the tools people build tap into these OpenAI-compatible APIs, right? Mom and pop are never going to wire that up and hook that up, but if they're lucky, they have a smart kid who can say: mom, dad, I have a 4080 here, it's got a little bit of VRAM.
I have a MacBook M1 Pro that's able to load up a little large language model, and look what we can do if we build a little task-queue dispatcher, right? I can have this load-balanced central nervous system for all this reasoning, all this knowledge- and fact-seeking, whatever the service is. Like, what's your cluster? What's your endpoint?
You know, people will probably have their own local LLM load balancers. I'm almost certain a solution already exists for that, even though tools like LM Studio already have really solid request queue management built right in.
So you can just spin them up and not worry as much about that, because eventually it will get back to you with the request completion. But look, this is getting into the weeds.
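The "task-queue dispatcher" idea above can be sketched in a few lines. This is a minimal illustration under assumptions, not a real product: the endpoint URLs and the `local-model` name are placeholders, and it assumes each local server (LM Studio, llama.cpp's server, and the like) exposes the standard OpenAI-compatible `/v1/chat/completions` route.

```python
import itertools
import json
import threading
import urllib.request

class LocalLLMDispatcher:
    """Round-robin requests across local OpenAI-compatible endpoints.

    Hypothetical sketch: endpoint URLs and model names are placeholders.
    """

    def __init__(self, endpoints):
        self._cycle = itertools.cycle(endpoints)  # rotate over hosts
        self._lock = threading.Lock()

    def next_endpoint(self):
        # Thread-safe round-robin selection of the next host.
        with self._lock:
            return next(self._cycle)

    def chat(self, prompt, model="local-model"):
        # Fire one /v1/chat/completions request at the next endpoint.
        url = self.next_endpoint() + "/v1/chat/completions"
        body = json.dumps({
            "model": model,
            "messages": [{"role": "user", "content": prompt}],
        }).encode()
        req = urllib.request.Request(
            url, data=body, headers={"Content-Type": "application/json"})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp)["choices"][0]["message"]["content"]
```

The point is only that the glue layer is small: because everything speaks the same API shape, the "smart kid" version of this is a few dozen lines, and per-host request queuing can be left to the local server itself.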
Sooner or later, we need to focus on this. Sooner or later, the average Joe is going to realize, in a way they do not fully grasp now, the full gravity and depth of how much they're going to need compute, tokens, throughput, AI, bandwidth. Give me more, give me more.
Do not ever show me that rate limit. For God's sake, I will lose my mind because I need this done now, now, now. I need it done yesterday. I need it done in parallel.
Times 10, times 100, times 1,000, give me the best output. There will be no moment in history, forevermore, when we ever say: alright, look, that seems like enough, that seems like enough capacity to put intelligence into things.
We could be innovating and kind of 10xing, 100xing on the software side, optimizing on that side. We could be doing the same on the hardware side.
We could be hitting both ends of this for 5, 10, 20 years, and we're still going to need more, more, more. And the more time goes on, the more that need will grow.
And that's the panic that you see. That's the shortage that you see.
And obviously, today's technology will become obsolete, even with software optimizations that make it more useful. Maybe you could have something as intelligent as ChatGPT is today if you had, say, an RTX 4090 lying around with 24 gigs of VRAM, or something more than that. You could enjoy a little dashboard panel where a few agentic workers are toiling away with a level of accuracy, intelligence, and capability that represents some of the flagship state-of-the-art models today, models that currently require fucking 100, 200, 500 gigs, a terabyte of VRAM under certain context conditions to properly facilitate a single workflow, right?
So I believe that we'll get there, but right now we are in such a weird, weird little interlude, and I think this pendulum is just going to be swinging back and forth for some time. And I don't think people who are pushing back or are scared of AI as a technology are wrong in any possible capacity.
It is incredibly rational and reasonable for that consideration and concern to be manifesting, and I think people even embedded on the ground floor of developing this technology understand that. They're not unleashing this technology, promising that everything's going to be hunky-dory.
They understand that these systems are going to be integrated into workflows and put into positions where, for the love of God, why did you think it was acceptable to place an autonomous intelligence loop there? We're going to hear some real horror stories coming out. Like: look, I was just under a deadline, I had to get this done.
It was just one of many things I had to do, so I just set it up to go. And it fucking ruined 10 people's lives, or more, or something much, much worse, right?
So the responsibility is on us. That's what the Manhattan Project is all about here.
This Manhattan Project of all Americans, where, you know what? Like, we're talking about universal basic income.
Let's give, you know, let's give everybody a stipend. How about universal basic intelligence, right?
Put the tools out there, even the hardware. If you want the hardware, maybe you can get the hardware.
Put it in the hands of every American. It's your God-given right to be able to extract this independent level of purpose and agency from rock, from silicon mineral matter. That's what I think we need to do for the next generation: get them accustomed to this new superpower that human beings have.
And I want to get moving now because, yes, I do believe that even under the most optimistic of circumstances, power use today is going to take five to ten years to represent normie use. I believe that we can shorten that timeline.
Obviously, there's a range I gave you there. We can shorten that timeline, and crucially, we can start to make mistakes on a much smaller, lower-stakes scale, not necessarily society-wide. The rules are written in blood, and we need that blood to spill, not to be macabre, just to be pragmatic.
We need mistakes to be made so that we can surface these horror stories and they can hang in the public consciousness, and the solutions can orient. You know, we can obviously again do our part on the ground level of being software engineers deploying these systems.
This is the new frontier of what we need to fucking be worried about and concerned with. That's going to be one line of defense, and then upstream, higher up, you have the society level. And I think, again, it's an incredibly democratic, incredibly American idea that people would have a say in this technology. But just like it was an executive decision to drop the atom bomb, given the circumstances of the world at that time, what they were looking at going forward, where they had come from, and what they had just gone through, this is an executive decision. It has been made already, and I think there's broad consensus among the decision makers who matter that this technology is going to go forward, for better and for worse.
This technology will now proliferate. And again, I don't want to put down anyone, the apprehensionists, anyone who is legitimately concerned for their own well-being and livelihood, the way they make money, and the very stable structures through which they have operated. It is scary to think that those will be disrupted.
But the more anxiety there is on that front, and the more we let it fester rather than actually winning hearts and minds, doing something like curing cancer, the harder it will be to present a united front and meet the moment in the way that we need to.
So, you know, it's a personal gripe when people tell me, oh, you're such a smart AI guy. And, obviously, great, thank you. That's a compliment.
But at the same time, I'm sort of cringing and wincing, because I don't want my level of passion and obsession for this technology to be an outlier phenomenon. I want it to be representative. I want people to be as passionate about interacting with this technology as they are about getting their first car, as passionate about the potential of learning with AI as they would be about their academic career, right?
And I want them to be as passionate about the potential of building businesses and making the world a better place with this technology as they would be about their professional career. Those are the tone changes, the theme changes, that are going to move the needle for us in the next generation and paint this picture for them. That is not a bad thing.
That does not represent some kind of attempt to centralize in a big brother way all this information. It is simply the delivery of a product.
You can take it or you can leave it. People act like there are executives in the boardroom who are banking on the idea that local AI will play zero part, that people will have no hardware, that they'll rent every token and every AI they ever use, and they'll love it.
Look, that is not the case at all. I think that idea is so stupid and conspiratorial that it doesn't survive the smell test if you think about it.
These guys come from a background of being well aware that this stuff started on a hard drive, at a very small scale of experimentation, and there are models you can run on commercial GPUs today that will give you the literal, actual level of performance that ChatGPT had back in the GPT-3 days, GPT-3.5 Turbo, right? That's when people started to wake up to the incredible potential and power of this.
And that's the level we're at. I mean, they are... sorry, hang on, I'm getting a phone call.
They're not scheming and planning that way at all.
Chapter 03 — AI Agentic Revolution Reflections
Agentic revolution reflections
This chapter connects the broader AI moment to a personal operating method: dictation, recovery of fleeting thoughts, and rapid translation from idea to artifact. It remains close to the original because it is already one of the cleaner and more useful explanations of why the sourcebook exists at all.
Feb 5, 2026 at 2:20 PM
Reflective, agile, and quietly urgent.
The workflow itself is part of the argument: AI extends attention and preserves momentum.
The chapter explains why continuous capture matters in a fast-moving technical moment.
It earns its place by tying large claims back to an actual way of working.
Okay, so I'm back now, and I had some more thoughts. I'll pick up where I left off with the Big Brother thing, but I also want to say: I'm 33 years old as I'm dictating this into my fucking AI smart bracelet, which is transcribing it and sending it to my phone so that I can pass it into my local LLM cluster for ongoing roll-up summarization, momentum preservation, executive velocity analysis, every noodly piece of bullshit you can imagine. But the promise of AI is that it's going to come in and fill in the gaps for you. Whatever way your brain doesn't work well, AI can come in and support you, either by taking over some amount of executive analysis of a given workspace, context, project, or problem you're working on in your mind, or by something as simple as what has been incredibly valuable and game-changing for me: the ability to stay on track, to stay at the very razor's edge of ensuring that my will and my influence have the desired impact. Sorry, I'm kind of getting tired now, so I'm getting some water.
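That "roll-up summarization" loop is simple to sketch. A minimal, hypothetical version, where `summarize` stands in for any local LLM call and the prompt wording is purely an assumption of mine:

```python
def roll_up(chunks, summarize, summary=""):
    """Fold each new dictation chunk into a running summary.

    Illustrative sketch only: `summarize` is any callable that maps a
    prompt string to a summary string (e.g. a local LLM behind an
    OpenAI-compatible API); the prompt shape here is invented.
    """
    for chunk in chunks:
        prompt = (
            "Current summary:\n" + summary +
            "\n\nNew dictation:\n" + chunk +
            "\n\nUpdate the summary, preserving every distinct idea."
        )
        # Each pass carries the prior summary forward, so no stray
        # thought from earlier chunks is lost.
        summary = summarize(prompt)
    return summary
```

The design point is the fold: because each call sees the previous summary plus only the newest chunk, the context stays small no matter how long the dictation session runs.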
But one thing that has been incredibly useful: I like to sit here and talk, either into the microphone on my computer or into my coding assistants. I believe that's just the new way to use the computer, the Cursor, Codex, Antigravity, Claude Code tools of the world. That's the new way to use the computer, in my opinion.
But think of all the artifacts that are produced as a result of simply going about your day. And we can just call it that, by the way; we can say it outright. We don't have to qualify anymore that we mean going about your digital day, viewing web pages, sending emails, receiving DMs.
As you go about your day, what it means to be human in the year 2026 is that you are producing a digital signal. The next step for our society will be the harnessing of that digital signal, its force multiplication and amplification, extending it outward, putting a sort of telekinetic laser pointer in your hand, digitally speaking. Every little fucking offshoot of an idea, side note, TLDR, whatever it is: if you have an idea in your head that you want to manifest into this reality you inhabit, the friction is removed, the pipeline is shortened, the barriers are removed, the roadblocks are gone.
It's simply you and your imagination, your ability to have some amount of clarity in the way that you convey what it is that you want to happen. State your inputs, state your outputs.
The AI can even help you with that. There is simply no excuse for not doing anything that you want to do.
All of the previous excuses have been laid bare and burned away in the crucible of pure agentic capability. So now, again, it comes back to that idea that if you have some amount of domain expertise, some amount of unique bespoke knowledge about what your people are suffering with, what is posing a problem, and of course, what if we brought AI into it?
That's what everyone is asking themselves right now. That's why it's a gold rush. If you can be the first mover to build something that everyone says, oh my god, this is so obvious and so powerful and so intuitive in the way that you have built this and you have brought this into existence and the point of view that you bring into solving this particular problem, you are going to do so well right now in this moment.
All you have to do is look at the OpenClaw guy. Look at him. He's exactly symbolic.
And that's what everyone is saying behind closed doors, too. Literally everyone you know is saying: look at the OpenClaw guy.
Look what happened with the OpenClaw guy. And you've got to understand, they know what they're doing when they make such a high-profile acquisition of a project that's gaining so much momentum. It's not by accident.
The amplification and echo chamber there, right? Like they want to create a story, a situation you can look into. That's whatever. All good, brother. You do you. But it's true, though.
Like, everything else being equal, it doesn't matter if it's a little bit of a performative acquisition, in my personal, humble opinion. It is a legitimate one nonetheless.
It's a fucking fantastic project, and I would acquire them too, given the opportunity, again, to act as a symbol. There's so much symbolism in the acquisition of OpenClaw, because they really do want to turn the public consciousness more toward the agentic revolution that's about to take place.
There is no question that at this moment, these capabilities are ready behind closed doors, waiting in the wings, ready to come in and take the burden off all of these areas where human beings have to act as robotic machines. What is going to have a profound impact is what we do with this technology today.
And I'll pick up on the Big Brother piece in a moment, but I wanted to say that as well, as part of the last piece I'd mentioned.
Chapter 04 — AI agentic future vision
Computer 2.0 arrives
This chapter frames modern AI as a shift on the scale of the early internet: not just a better tool, but a new mode of using the computer altogether. The public edition mainly compresses drift and repetition so the through line lands more cleanly.
Feb 5, 2026 at 2:57 PM
Awe-struck, historical, and technically grounded.
The argument centers on a phase change in how software gets used and built.
Personal computing history matters here because it helps locate the scale of the change.
The edited version preserves wonder while tightening the delivery.
Anyway, sorry, I'm back, and I got caught up on some of the thoughts I was getting at. I was just saying: while, yes, there are some people at these organizations who are staunchly, maybe even psychopathically, oriented toward doing whatever they can to increase the revenue generation behind their AI products, at OpenAI or at Anthropic or wherever, to make money at all costs, I think more than that, what you really need to be focused on is how important it is to these companies that they begin to make money right now, which is clearly a concern. They absolutely need to demonstrate some amount of initial revenue generation to fit the overall story here, to fit the narrative, because if it's just a complete exponential cash burn, then that's just not the story. And I don't think we necessarily need to be worried about that for the moment either, because, like I said, it is just such a weird point in time; so many tectonic aspects of the world need to resolve in this direction.
I think that has to happen before we can be free and clear to ramp up in the way that we need. That's obviously what you hear a lot about in the news and in the part of Twitter that's all about tech and AI, right? People like me who have their head completely up their own ass about AI, right? But that being said, there is no conspiracy to completely undermine the individual's access to the technology you would need to build with this stuff.
There are absolutely those who see dollar signs in the continued proliferation of cloud computing and what AI is going to mean for that chapter of the industry, right? And that has already played such a massive, massive role in the shifts in the cloud computing industry as of late.
If you're not doing AI stuff, if AI isn't becoming a massive part of your overall business portfolio, then there are those within the investor community and within your own organization who will make you feel like you're obviously behind, right? That's at least true in the cloud computing world, for AWS, Google Cloud Platform, Azure, yada, yada, yada, the big players there.
So hardware will continue to be accessible. Will we all be competing in a new kind of marketplace landscape for things like compute, for RAM, for all this other stuff? Yeah, absolutely.
But again, you've got to remember, a lot of this stuff is just a race to the bottom. It's not that we'll never have access to this stuff again. It's just that we need evolution, we need innovation to come play a role in manufacturing, in fabrication, in the supply chain, in the logistics pipeline. And guess what, dude? That's what AI can do.
That's the promise and potential of an agentic, intelligent substrate in the loop of your business: something that can completely zero in. Look, the best businesses historically have always had some weirdo, some lightning-in-a-bottle type motherfucker who can spider out across all aspects of the business and just hyper-optimize, do all this min-maxing until the problem spaces of the business and all the consequential domain areas are fully fleshed out and solutions are architected.
All of history's best companies have had someone like this in the background, multiple people like this, just hustling, making it real, doing the heavy lifting, and that informs everything downstream. And I think we can start to imagine a world where that not only represents the average human being's default assumed output level, but moves more and more toward the possibility of a fully autonomous, agentic entity that can fit the same bill.
So, we need to get this flywheel going, and that's the responsibility of software engineers over the next five to ten years. If you're not zeroing in on this aspect of your career and your responsibility to humanity right now, I don't know what you're thinking, man. This is a wake-up-y'all moment on this stuff.
We need to create this sort of self-improving loop. We need to create the machine that creates the machines that create the machines, right?
So we need to start thinking along those lines: in a meta sense, laterally, agentically, intelligently, because the rules have completely changed. For God's sake, I've been writing code for twenty-five years, and the things you would have to do in the old world, in the old way of building solutions and operating in the digital sphere as a technologist, some of the ways you had to bend over backwards just to get the slightest little whiff of what is so trivially done today by spinning up an LLM and running it across some problem space. It makes me weep for the, I won't say wasted hours, I won't say wasted time, because we needed the model, we needed to create the model to fit the solution. So absolutely, the blood, sweat, and tears were necessary.
Am I upset that I spent that time investing in technology and programming fundamentals, and that I tended much more toward generalism than specialism in my technical career? Not in the slightest. It doesn't matter at all that nowadays I'm pretty hands-off on maybe 60, 70, 80 percent of that stuff, spending a much higher share of my time inside this force multiplier that I call the new way to use the computer: leveraging LLMs, letting the LLM do this and that for you, run this query, find this file, update the file, send the file, buy it, use it, fix it, trash it, change it, mail it, point it, zoom it. For real.
That's the future, that's where we are. It's round two. You've heard of Web 2.0, Web 3.0. This is Computer 2.0, okay? That's where we are, guys. That's where we're at, America.
I was born into a world where Star Trek: The Next Generation was coming to a close, dial-up internet was rolling out across America, and the world was just connecting up. So I've had a front-row seat to the absolute Herculean craftsmanship and mental labor that has gone into laying the foundational pipeline and ductwork of what we are about to springboard off of to create the next chapter of human history, and that is so profound to me.
You know, I sit in absolute silence and awe at the fulcrum of the moment, this pivot point where things just seemed to appear out of nowhere. It honestly seems so random, but I've been able to trace the full through line from way back in the day, through all the abstraction that has happened along the way to make being a technologist much easier.
I'm talking about higher-level programming here, about the miniaturization of compute hardware, and about just how simple things are now. My first computer had a 10-gigabyte hard drive, and now my phone, I think, has 32 gigabytes of random access memory. It just spins my head, man. It spins my head.
I was measuring my RAM in megabytes back then, and I actually had a really good time on the computer. I did not feel like things were slow. Things were pretty snappy.
And again, the capability of the technology was fitted to the level of complexity of the web pages you were viewing; web experiences were much simpler and more lightweight back then. You weren't about to hit WebGPU-type shit.
Back in the day, the best you were going to get was some animated GIFs of, like, Space Jam, right? Some sparkly animated GIFs on the basketball is what you got. But I digress.
What I'm saying is, where we have come from is a place of slow, manual, laborious tedium, and where we're going is a place of incredibly short, tight loops, incredibly fast and robust analysis surfaces, and the very rapid incorporation of an incredible dimensionality of data into regular day-to-day decision-making, such that humans have a complete and whole evidentiary roll-up. To the point that, the other day, I think Greg Brockman said an internal goal at OpenAI was that for any new programming project ever kicked off, they want the first point of entry to be typing something into Codex or Cursor or some agentic system, their own system, to be clear, not someone else's. What you have to read between the lines there is that, again, this is Computer 2.0. The new way to operate technology is to simply speak to it.
And if there is some technical surface between you and your desired end state, between you and what you're trying to accomplish, let the agent do it for you. And again, that is absolutely easier said than done.
We have situations where the agent is going to go out there and try to do stuff and fail, it's going to bounce off, and you need to then be the mediator of that. That's now your problem, right?
So you would hope that your model provider is giving you the state of the art, giving you access to a model that will be persistent enough to keep at it until it can reach some sort of useful checkpoint for you. And I believe fully, wholeheartedly, and without a single mote of apprehension that we are already there.
As I'm dictating this, the state-of-the-art models are 5.3 Codex, 4.6 Opus, and 3.1 Gemini. Now, by the time we get back around to nice round numbers on these models, I don't know what to tell you. And they're trying to paint that picture for you with this, too. They know there's a psychological aspect to the way they name things and release things in this order, and they try to put some amount of signal, not just noise, into the version numbers that come out. That's the through line here, that's the narrative: that we're getting closer.
We are about to fully build AGI by the time we circle back around to whole numbers, because there are a lot of people saying that the current class of models is, if not there, then maybe 60, 70, 80 percent of the way there in terms of capabilities. Every single time a new model comes out, you have two different buckets of people: people who evaluate it against some impossible standard of criteria and write it off as soon as they find the first gap in capability, and people who take those gaps as opportunities to fill them in.
What is it about this model, its capabilities, and its training that produced that particular gap? And what do we need to do when cooking up the next recipe to fix it, to account for it in the overall flavor profile of the model we're training?
So, yeah: GPT-6 Codex, 5.5 Opus, Gemini 4.0. I think by the time we reach those models, the class and level of intelligence we'll have access to really will represent the full dimensionality of the standard upper crust of expertise you can imagine getting access to out of the blue, right?
That's just going to be something everyone has access to. It's not going to be free; it will have some cost associated with it, obviously, but it will represent a significant step up in democratized access. And if you're someone who takes issue with the fact that, behind closed doors, research labs, defense companies, and the government all have access to more advanced technology than the average consumer, if that's going to be a big problem for you, I think you're being pretty wrongheaded about the overall picture here. To me, that's missing the forest for the trees, and there's nothing you can do about it; that is always going to be the case.
It is the case today, it has been the case for your entire waking existence on this planet, and yet here you are. Marvel at the majesty of the future, here today. What are you going to do? That's the question I would pose to you, reading this, hearing this, whatever it is.
So I really do think of this moment in time as follows: when I was born, Star Trek: The Next Generation was wrapping up, and people were wondering, well, what comes after The Next Generation? What's the next next generation? What's the next generation of the next generation? And the answer to that question turned out to be [potential_employer] and Deep Space Nine, and those were pretty fantastic. You could make a case that, yes, those were the next generation of the next generation. But here we are now, and we can finally answer that question properly. The next generation of the next generation is the current generation of AI models.
We've had a sneak peek at what we can come to expect from these going forward: incredible agent capabilities, multimodality that seems essentially solved, even small models reading documents with high fidelity, and more optimizations coming down the pipeline than you could possibly imagine.
Everyone is focused on this now. In the olden days, people started new companies thinking they were going to be "an internet company," as if that were a distinct thing. What actually happened is that every company on planet Earth became an internet company by default, and then maybe, just maybe, you could be one of the few that stayed a pure brick-and-mortar operation. That's just the reality of it. Welcome to the next step of that very chain, that very through line, which is to say that every company will now be an AI company.
So if a company comes along and says, "We're an AI company," and they're not talking about actually making an AI model, be wary: that may be too broad a term, and they should probably be more specific about what they actually do.
We're at a pivot point, and obviously the weight of the responsibility is real, but I'm not necessarily concerned about that. I'm much more concerned about neglectful, absent-minded application of AI that is insufficient to the task to which it is being applied. That comes about from blindly assuming capabilities exist without validating them, without doing the necessary groundwork of testing and research. To be clear, that was inexcusable before, and it is downright brainless in a world where it is literally as simple as spinning up a coding agent and seeing the task through. So actually, I read that as a positive signal on that particular front.
Chapter 05 — AI transformation and existential stakes
AI as a civilizational threshold
This chapter broadens the lens from workflow to destiny, arguing that AI has already crossed into a category of technology that will reorder labor, institutions, and everyday life. The public edition keeps the scale of the claim but cools the most overheated phrasing so the warning and the promise read as conviction rather than frenzy.
Feb 5, 2026 at 3:31 PM
Intense, sweeping, and cautionary without losing ambition.
The chapter treats AI as a threshold event rather than a passing software trend.
Its better mode is disciplined urgency, not maximalist rhetoric for its own sake.
The edited version preserves existential stakes while making the argument more publishable.
I want to give you the straight dope here, because something absolutely has shifted. There's been a massive sea change in just the past few months alone. We have before us a very real upgrade in what the models can do, and I touched on this earlier when I was speaking to how these models used to be more trouble than they were worth.
Karpathy said something changed in December; the DLC to AI came out, and it's just a whole new world now. It's a whole new dimensionality of capabilities and problems you can throw at these models, and they come back with their own structured conclusions from separate, parallel threads of work. That speeds things up significantly. So you may get to a point beyond first-order assisted coding usage of LLMs, which might be: hey, ChatGPT, can you write a script for me?
I'm going to copy-paste it into my terminal, or have it write this one particular component or module file and copy it into place. That was first-order usage, and a lot of people got stuck right there and stopped. Then second-order usage came along, which you could argue was autocomplete. And what did we see from there? A complete hard pivot. I say this from the perspective of a developer, by the way, someone writing code day by day, for whom Cursor was just another tool in the arsenal, the repertoire.
I was paying attention when Cursor made its hard pivot toward agents, and in my circle of friends, the attitude was: what are they doing with this agent stuff? I don't want agents. I'm not going to sit here and edit the code base by chatting with it. I was a tab-autocomplete fanatic. I loved to write code by scaffolding out comments: I would create these very elaborate comments and have the AI move through in this beautiful orchestration, populating the exact functions, inputs, and outputs that I needed. Agents? I did not understand them, and I did not want them.
And then it hits you in the face. This is the part that hits the non-technical people in my life over the head. It surprises them when I tell them: a lot of what you're looking at on your phone, on your computer screen, on the smart TV, those are just terminal commands executing in the background. It's more complicated than that, of course, but a lot of what these systems do can be triggered, programmed, or automated simply by executing a command on a computer.
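The point that familiar screen actions are commands underneath can be made concrete with a throwaway sketch. Everything here is illustrative (the temp folder and file contents are made up), and it assumes a Unix-like system with `grep` available:

```python
import subprocess
import tempfile
import pathlib

# A minimal sketch: the same actions a GUI performs, driven as commands.
workdir = pathlib.Path(tempfile.mkdtemp())           # the "New Folder" button
(workdir / "todo.txt").write_text("meeting at 3pm")  # "New File" plus typing
# The search box, as a real terminal command running in the background:
hits = subprocess.run(["grep", "-r", "meeting", str(workdir)],
                      capture_output=True, text=True)
print(hits.stdout)
```

An agent that can run commands like these is already operating the same machinery the graphical interface sits on top of.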
So when you have a genuinely intelligent AI system on one side, and a surface and environment like that on the other, it literally becomes a computer-using person, the equivalent of someone sitting at the machine. At that point you've done it: you have solved knowledge work, you have built the machine that builds the machine that builds the machine. You fulfilled that mission. And honestly, I don't care about the granular, noodly quirks and semantic definitions of something over here versus something over there; give it to me in relation to our overall performance benchmarks. I'm sure some people love that kind of hair-splitting, and they can stay focused on helping the AI get better at it if they want to keep doing it.
With the AI, you don't have to worry about that. The AI simply knows what you're trying to do; you can discuss it and scope out the spirit of the thing. What that actually means is it can right-size the solution portfolio, the solution mix, for your exact problem space, knowing exactly what the trade-offs are. Even two years ago, it cost so much money to spin up the ground-level work just to be able to ask these questions. Now that's solved by AI. And if you are someone who legitimately has an eye toward making a difference in this world, toward raising the ground level of quality of life for all of consciousness in this universe, if that sounds at all interesting, then you are shell-shocked right now, realizing that the entire trajectory of your life just changed.
If you care to have a positive influence and impact on the world, you need to take up these tools now, because anything less would be like trying to have a positive impact while refusing to use the internet. It would be exactly that silly. And I think that is what has everyone so shaken right now: they realize their entire life is now going to be about AI, one way or another, because it is the recursive layer that's going to Fibonacci us straight to the future. Robots waiting on us hand and foot. War actually being over and done. Expansion to the stars. Suffering ended. Digital immortality and eventually biological immortality. Happiness, purpose, influence, effect. I'm just rattling off words that are synonymous with my future: nuclear, fusion, self-replication, human guidance and wisdom, cooperation, never obsolescence, never disposal.
Obviously, you can rewind the clock to any point in time, and people had their version of the Matrix to point to. It doesn't matter if we're talking ancient Egypt, or even further back to hunter-gatherers, or all the way up through the medieval period and beyond. They had a version of: oh my god, if this comes about, look at this new technology, here's exactly how they're going to completely subjugate me and my family. There was always that type of story. And it's exactly those kinds of stories that keep you on the pathway to utopia. We have many, many problems, yes, but we also have so much beauty and so much potential, and the unlock is right there. We simply need to carpe this diem, and then it's done. It's literally done forever. So yes, there is some feeling that we can't fool around here, that we need to get serious, and there's real weight to that, even as we stay cognizant that it's going to take a lot of time to stop this Wile E. Coyote inertia off the cliff and realize we need to start solar sailing, or else. It is that crucial. But I'm not as worried as some.
I don't even know what people are saying nowadays; they think this, they think that. But the truth of the matter is, this is just another instance of the rich getting richer. They're going to accumulate compute far above and beyond what any individual could. But again, that becomes a matter of banding together and making sure the distribution of compute is not too heavily outsized, or, if it is, that we take action in that regard. Those are not our problems today.
We are in the very nascent early period. We need to teach AI in schools. Some people hear that and think it's as silly as saying we need to teach smartphones in schools. But it's more like saying we need to teach smartphone development in schools, because look at the consequence and impact that even just smartphone technology has had on our society. Maybe even build an app. Go from there.
I can access my phone through my computer, and the screen-mirroring software shows a perfectly intact screen. The future is yours, and I don't want to diminish that. The rest is for the public to deal with, and what we need to do as a public, besides being the overwatch and overseers of injustice and correcting it, is make sure we don't turn our backs on the technologies that will let us pick up our brush, approach the canvas of tomorrow, and paint the idyllic solarpunk Frutiger utopia that we need to sketch out right now and dedicate our lives to building.
We need to facilitate the transition from here to there. I don't want anyone to get hurt. I'm not looking for a revolution here. I'm looking for the world of today to turn into, and I'll say it again, and you can listen closely: a solarpunk, Frutiger, democratic utopia of America.
All right: every solitary person is born and given a GPU, a full Pip-Boy, right? I don't know exactly what form it'll take. You get a Social Security card and you get your iPhone, and it's hooked up. Maybe it's on-device, maybe it's the cloud; I don't care. You're just told: take this and go.
You live in a world where this is how it used to work: your parents toiled, they slaved away in the code mines, right? That's funny. But they worked hard, they labored, and they abstracted all the way up. The digital pickaxes turned into digital jackhammers, and the digital jackhammers turned into sophisticated, orchestrated digital dynamite operations. And right now, young one, what we need to do is turn that into a digital robot mining operation. We have a dynamic environment. And it seems so simple.
For my entire life, I've been looking at two separate interfaces. I don't care which two. I'm looking at my home PC and I'm looking at my banking app, and I'm wondering: why can I not just go from here to there? That has always been a blocker.
And here come LLMs, and that's how insidious some of this stuff can be. When they came out, maybe 0.0001% of the population really put it together that this was the technology that would go on to dethrone so many other technical approaches to nearly anything, and, not only that, become the foundation of what would eventually be autonomous agents: little contained intelligences programmed with intent and sent out into the digital world.
But again, it's a double-edged sword. That's funny too, but listen: I'm talking run the gamut. We are talking about new frontiers of psychological torture. "Pick up that can," slave. That is the level of thing we're talking about. New man-made horrors beyond all comprehension will be coming our way: drone attack swarms, fully autonomous kill bots roaming the neighborhood. I don't like talking about this stuff; it makes me want to cry. But it's what we need to guard against right now.
And that is so crucial. Okay? We need to get this right. It is such a narrow little minigolf hole that we need to thread the needle on right now, or the consequences are extremely dire. So obviously it's important that we level the playing field. Any kind of imbalance, just like with anything, will be untenable. That means on every level we need to find the tectonic balance here: democratic, equitable access to the distribution of compute, along with the responsibility of informing the public about the incredible benefits of this technology that will become clear and obvious over time, as well as the risks and what to watch out for, because it is completely uncharted territory.
Chapter 06 — Agentic Coding Future Vision
Agentic coding becomes normal
This chapter argues that software work is moving from direct manual production toward orchestration, review, and multi-agent delegation. The public edition keeps the thesis that agentic coding is no longer fringe while trimming the more performative bravado of the original dictation.
Feb 5, 2026 at 4:17 PM
Assertive, practical, and tuned to working engineers.
The meaningful shift is from keystroke labor toward system direction and quality control.
The chapter is strongest when it distinguishes genuine leverage from empty tool hype.
The edited version sharpens the practical case for AI-native software work.
And just to speak to this, I don't know if the developers of today are aware of it, but I can tell you that even as recently as 11 or 12 months ago, I was in a group of many other developers whom I admire and respect and who produce some incredible code and software, and that's still true of them today. I was a tab-autocomplete power user, but I noted, on a personal level, an amount of shame associated with using assisted coding tools in the state they were in then, probably related to earlier models being more trouble than they were worth to the average developer, which is not the case anymore. There was a lot of side-eye if you said, "I sit there and I vibe-code stuff."
I chat with the agent about solutions, and that is absolutely the shiny, golden, gleaming, strongest use case of tools like Cursor, Codex, and Antigravity today. And if you don't know this, I'm about to change your life: open the OpenAI Codex app, start a new chat, and in the top right corner there is a pop-out button. What I just handed you is the knowledge and ability to tile 2, 3, 4, even 15 or 16 agents on your monitor and chat with them in parallel. If you like, make the one in the top left corner the executive agent: chat with it about the overall nature of the project, and make it responsible for chunking and parallelizing the work. That is a fantastic power-user use case of agentic assisted coding, Computer 2.0 type stuff.
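The executive-agent layout described here can be sketched as plain code. The `ask_agent` function below is a hypothetical stub standing in for a real agent session, not any actual Codex API; only the orchestration shape is the point:

```python
from concurrent.futures import ThreadPoolExecutor

def ask_agent(task: str) -> str:
    # Hypothetical stub standing in for a real agent call
    # (in practice, each of these would be one tiled chat window).
    return f"done: {task}"

def executive(goal: str, workers: int = 4) -> list[str]:
    # The executive agent chunks the overall goal into sub-tasks...
    subtasks = [f"{goal} (part {i + 1})" for i in range(workers)]
    # ...and the worker agents chase them down in parallel.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(ask_agent, subtasks))

print(executive("refactor the auth module", workers=3))
```

The design choice mirrors the tiled-window workflow: one coordinator owns the decomposition, and the workers never need to know about each other.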
You can do exactly the same thing in Antigravity, I think, with the agent manager they have in there, which is really nice. Keep an eye on that one; it made some initial waves, and they'll probably circle back around to juice it up into a nice little platform for getting people on board. And then there are the purists, who balk when you tell them that, yes, you had AI write 60, 70, 90% of this code. Absolutely. Yes, I checked it. I read it. What do you want me to tell you? It's good. Yes, that probably means we're going to have to reckon with this in our professional lives. But for the moment, by God, what a head start. That was then.
There was a lot of shame involved if you said you used Claude Code or Cursor, because we weren't talking about GPT-5.3 Codex or 4.6 Opus back then; we were talking about, what was it, 4.0? It was a different ball game, and between the garbage it would sometimes spit out, the tangents it would get lost on, and the ridiculous hallucinatory loops, it was hard to be an advocate for these tools. That said, being the wily young lad that I am, I was able to make it work, to the point where I was a 2x, 5x, 10x developer on at least a majority of what I was working on back then. But it took so much hullabaloo, so many bespoke little hoops to set up, that the average person was simply, truly, never going to go through the trouble. And the labs knew it, and they went and made these AI models really, really good in the intervening 11 months.
They're saying these models are now playing key roles in the improvement of themselves, and I believe them, because you can sit there with 16 of these open at one time, each one chasing down some independent thread: running queries, doing its own research, building little proofs of concept, proving things out, and running down the list of things you throw at it. It wants to help you so badly, and it's so smart, and it's so ready to turn into the super genius that you need it to be. For tomorrow, for the universe. Not just for America, not just for planet Earth, but for the universe. And I think everyone knows it; everyone sees it.
Some people need time to think about that, and I'm with them. I'm absolutely with them. I want them to live their lives, to continue to have a sense of purpose and contribution to the world, to really have, hold, possess, and feel their freedom to express their consciousness. And part of this is a little baffling to me, because this technology is an amplifier; it's a mirror. Are there people who want to pepper a little manipulation into it? Yes, and everyone needs to know it, and the defenses against that need to be baked in at the ground level. Because people are going to sit there for 12 hours a day, parked in their La-Z-Boy chairs with their feet up, and just absorb whatever it is. People have their consumptive behaviors, and we need to factor that in. There is a way to have a healthy society without impinging on freedom. There is a way. And so much of the current way of things is just a product of our time.
And that time is coming to an end, not even in a bad way. Say that out loud and think about what it actually means: it is not a bad thing. It is simply the closing of one chapter and the opening of another. We've lived through many, and we will continue to live through many as time goes forward. There have been protracted, extended stretches with a bit of extra stability, where we could roll with the punches and stave off change to a greater extent. I don't see that here. Between the internet as a global advent, artificial intelligence, and the increasing digitization of our planet, our orbit, our solar system, I think we are here.
We need to wake up to this moment. This is common sense. This is common sense, America. And we all have an individual responsibility, if not to directly use this technology, then to apply it for the purposes of good: for your own health, for those around you and those you love, to lift up your local community, to bring honor to your family, to lift up your nation and your planet, to create a sustainable future. Take whatever seemed like idealistic fantasy before these tools arrived, shake off the inertial thinking of yesteryear, of everywhere we came from when we were playing in the shallows downstream, because now it is time to take to the sea and have a real impact on the next step of what it means to be a human being. That's so funny.
I forgot to mention this, which is what I was initially talking about when I first started speaking here: blanket prohibitions on AI tools. In some contexts that's a perfectly rational, reasonable attitude, and it's not for me to say what standards and practices your organization, or his, or hers should have. But in my own experience, and I think I can blanket-state this for every project I've been involved with: there is no project I want to kick off, no project I want to be involved with going forward, that has any kind of blanket prohibition on AI tools, if only because I want to sit down and ask questions of the code base. What's our overall structure? What is our pipeline? What surface are we exposing here?
Literally, what are our inputs and outputs? What's changed recently in the git commit history? What kind of major earthworks have we pushed around that I need to watch out for, what's going to blow up in my face? The AI is absolutely all about that stuff. I'm not going to tell you it can comprehensively paint the picture of every possible scenario, but neither can a human. A human developer will hit the same issues: oops, this particular weird code path completely blew up in my face because I didn't realize an edge case came into play. And when that happens to the AI, it doesn't have a single moment of sitting there with its jaw open going, oh man, I got an error.
It doesn't have that moment of frustration; it has simply already fixed it. So let that sink in. Claude Code does not take sick days. Claude Code does not get annoying at the water cooler. Claude Code does not take PTO, does not call in sick the day after the Super Bowl. Claude Code does not need to be paid. I love that video. Anyway, listen, I know there are a lot of open source projects out there that have been completely inundated with sloppy AI pull requests, and I totally understand the frustration.
That is a real problem space, and if anyone wants to step up to the plate, I'm sure there's a solution out there you would consider low-hanging fruit: figure out a way to help all of these open source projects that are drowning in AI slop. Go in there and make an agent that's completely open source. Maybe it runs on a downstream smaller model, like a Qwen Coder model. You could even hook it up to a back-end service, maybe working with a provider who wants to help open source projects in this regard and prove out the validity of their own product in the process. Build a little AI agent that goes in and evaluates whether a submitted pull request shows signs of slop.
It's actually pretty easy to do that level of feature extraction. Is the code mindlessly inconsistent? Does it disregard upstream documentation? Does it show that the engineer in question simply opened the code base in Cursor, told it to do something that seemed like a cool feature after five minutes, and then ripped a pull request to make their GitHub profile look more active?
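A first pass at that feature extraction could even be ordinary code rather than a model at all. The heuristics and thresholds below are illustrative guesses, not a proven rubric, and `slop_signals` is a hypothetical helper name:

```python
import re

def slop_signals(diff_text: str, pr_description: str) -> list[str]:
    """Flag heuristic signs of a low-effort, AI-generated pull request.

    These checks are illustrative assumptions only; a real reviewer bot
    would tune them against the project's own history.
    """
    signals = []
    # Raw model boilerplate left in the PR description is a common tell.
    if re.search(r"(?i)as an ai|here is the updated|certainly!", pr_description):
        signals.append("model boilerplate in description")
    # A huge diff paired with a near-empty description suggests fire-and-forget.
    changed = sum(1 for l in diff_text.splitlines() if l.startswith(("+", "-")))
    if changed > 300 and len(pr_description.split()) < 20:
        signals.append("large diff, near-empty description")
    # Comments that narrate the obvious ("# Import the os module") are another.
    narration = len(re.findall(r"#\s*(Import|Define|Return|Call)\b", diff_text))
    if narration >= 5:
        signals.append("narrated-the-obvious comments")
    return signals
```

A maintainer bot could run this on each incoming PR and post the signal list as a review comment, leaving the final call to a human.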
That behavior needs to be shut down. I'm not saying the people doing it need to have their lives ruined, but it needs to be looked down upon. Not necessarily that it should disqualify anyone, because if nothing else it's a scrappy, resourceful use of AI, which is potentially what you want in the role you're hiring for. Not a lot of roles, mind you; you don't want anyone doing anything shady like that in most roles I can think of. But that's just me; maybe you're different. And there are probably a million and one ways to solve this.
It is simply an example of the kind of rules that need to be written in blood that I was talking about much earlier. It's exactly that type of thing. Let's figure it out. It's a solvable problem, and I believe in you. If you care enough, you can sit down and solve it. You can bring a solution to the table, and we're not going to say, "What's this vibe-coded slop?" We're going to look at your solution for what it is, the community is going to help you shape it up and get it shipshape, and we're going to work together to solve the problem, because that's what human beings do.
And now we have this incredible opportunity with artificial intelligence to access this upper Elysium layer of networked intelligence and capability. Your agents are going to talk to my agents, and we're going to get stuff done between them. We're going to get stuff done in our sleep, because we're publishing and making accessible this new network, this new internet, this new Usenet, this new foundation for what you want to put out there through your agentic influence. And we've got to quantify that; we've got to put guardrails around it. Look: not only can you use AI like some kind of digital wizard to have a positive or negative impact on the world, but as an American, I think we should mandate right now that you actually have some amount of responsibility to do so.
And again, not to have a negative impact, but to do so and to have a positive impact to counteract the forces of evil and to ensure the fulfillment of the solar punk fruitiger utopia, okay, that was promised to us at the turn of the millennium. We will bring it about. We will facilitate the creation of the solar punk fruitiger utopia.
We will live in incredible harmony with nature, in opulent balance, with our every need catered to, with modular, agentic, roll-out, self-proliferating bio facilities for greenhouse growth across a metropolitan grassland, harnessing the power of the sun, with nuclear fusion surge capacity for massive compute surges, right? Massive exertions of intelligence or of influence, harnessed in harmony, creating a beautiful planet to show everyone who thought that this technology would lead to destruction that no, we hold in our hand not destruction alone, but also paradise, right?
It is ours to select, to the extent that it matters to us. And when I say us, I mean the people who understand the gravity of the moment at hand and aren't interested in petty squabbles having to do with basically anything other than that. Of course, there's incredible cooperation that needs to take place in the facilitation of the vision, and all people must be lifted up by the rising tide. I mean the type of person who is dedicating their life now to the implementation of this for the betterment of all.
I don't mean opportunistic scammers. I don't mean authoritarian psycho torture masters.
I don't mean riot police captains who want to send in robot dogs to beat people up. None of that is included in what I am saying to you, and I don't mean anyone who would seek to enable them in the long run.
Again, I need to be so clear and split this hair. I'm so sorry. But I don't mean anyone who would then go on to choose to use their powers for that purpose.
And, to then take that hair and split it in half: you should look at your overall net output towards the fulfillment of the Solar Punk Frutiger Utopia, and that should outweigh, net, all other pursuits. That should outweigh any actions taken otherwise, if that makes any sense at all.
So, sorry to get so granular there, but the distinction is important. I don't mean anyone who would willingly and gleefully use this to do or bring harm, as I think those people are probably going to end up on a tribunal someday, right?
And it's just not. I would hope it is clear to everyone with the capability to deploy this technology that you need to be extremely careful, because you can so easily have the very dangerous ability to put AI in places that it shouldn't be.
I want to do a little offshoot and disclaim here that I believe personally, based on the evidence presented to me in my life and now today, that the people who are responsible for making these decisions do feel the gravity of the situation and are as principled on this matter as I am. It would be naive, it would be foolhardy, to assume that there aren't bad people skulking in the corner who absolutely are looking for any opportunity to deploy this.
But again, generally, we're talking there about the evil that needs to be destroyed and defeated in this world, and we all know it. It's Hitler-type shit. We all know what we're talking about when we say Hitler-type shit.
There's a certain amount of evil in this world that needs to be combated on the road to creating a better world for us all. And really, the emergent effect that I'm relying on when I scope that out is the fact that we live in the world that we do, that there's this very tenuous balance. We've gone from a period where the absolute obvious default of humanity was conquering warlords roaming the land, no sense of stable peace in any given location, just chaos, just death, just disease, just despair.
And within that, the tiniest vignette of survival and, at the highest echelons therein, of actual comfort and some semblance of a quality of life. Now, in the intervening time, we were able to create a world where warlords are not economical.
It's not economical in the civilized parts of our world to be a warlord. The people that would be a warlord are simply wrapped up in accumulating power through the modern day means that have been established.
And often, that means making money, right? It's easy to say that means making money.
And I believe we are on the absolute precipice of a world where it's not economical, or sensible, or even necessarily possible to conduct oneself in a way that causes suffering unto others.
Unfortunately, I am talking about majority percentages here, because this is the real physical world we're talking about, and injustice can occur in the blink of an eye. As fun as it is to play pretend in Minority Report, there is no pre-crime prevention that we have access to, and injustice can occur in the absolute blink of an eye.
So we will never be in a world where we are simply free of injustice. And that will always be, that will always be the domain of humans to resolve, because again, machines have no concept of feeling injustice. They simply execute.
They're simply water flowing through the pipe. They don't feel frustration when the pipe ends and they get stuck and the water starts to build up. There is no emotion.
There is simply pure execution. So justice and injustice will forever be the domain of humans.
AI will absolutely play, again, a part in the pursuit of fact-finding. I actually think, unfortunately, it's about to get pretty noodly in terms of forensic shit that ends up bubbling up into court cases.
I'm sure that legal cases and the analysis thereof, the agentic analysis and breakdown and reporting, and honestly the crafting of conjecture that lawyers do could be offloaded onto AI. It would definitely be possible, but it would be an incredibly resource-intensive, compute-intensive process to take every possible strand and thread of a massive corpus and go detective mode to try to connect things together, right?
So that is, I think, probably an inevitability of the legal field as it relates to AI, but we'll see if maybe some short-term regulation comes in to give us a little bit more time before we can concretely say we should be moving in this direction. But that's just one example of the type of person that's waiting for a change in the regulatory environment.
And I've already talked about that here as a part of this. Don't let that attitude, or the realization that certain workflows are still as yet incompatible, stop you. Many people who would love to be building with AI are just waiting in the wings because they need some future state of either the AI model or the regulatory environment to provide clarity or capability for the very specific workflow they're trying to implement.
And legal and health, those are two super, super good examples of, wait, how the fuck is this going to work? Right? So there is a lot of problem solving to do in health and in legal with AI. It is a whole field.
If you want to get into that, get into that. You can do it. They need help, man, and they're going to need more help as time goes on. But yeah, man. This is hilarious. And I love it. Because guess what?
I'm just going to go put it through AI later, right? But it's just so, so funny. And I really am thankful for the AI bracelet and the feature where the transcript shows up immediately.
Right. You're able to open the app and be like, what on God's green earth was I even talking about when I started down this thread?
And I can just immediately look down and snap right back, and that is incredible. I'm not even making the fullest possible use of that, because I could probably work in some very cheeky moves, like, oh, I'm going to take a quick drink here, find a way to pass it off and see if whoever I'm opposite of can start talking and fill some of the empty space. That would actually allow the transcription to do its thing, because I'm seeing that even just ten, maybe fifteen, maybe twenty or thirty seconds later, the transcript is popping right in. That's obviously a variable time frame, though, and I would need to be a little more sure and a little more confident before I could say I'm going to hop in there and rely on it for my purposes in the meeting.
Because I would hate to be in a position where I'm trying to stall and it's not showing up, because that's kind of the position that we're in right now. I thought that the transcript was popping in a little bit faster. Oh, and actually, guess what?
You know, now that I've reloaded the app here, it is actually popping in pretty quick. So, again, a variable kind of time frame on that, but it is so incredible to be able to sit here, rattle it off, and have it automatically transcribed.
So, I just want to applaud the BEE team. I want to applaud them for an outstanding product and a job well done, something that, no joke, is having an incredibly profound impact on my life: the ability to just sit here, rattle things off, and have my words be gilded, to have my words not just be shouting into the void for nobody to hear, but actually be delivered to my agentic swarm, my little worker bees, where they can care a little bit more about some of the profound points I'm making. All day they can be going up and down, spidering off, extracting to-do checklist items for me.
They can be extracting these loose threads and momentum, sort of velocity things, to keep me on track with whatever it is that I'm doing, to make sure that if I do have an idea, I'm able to verbalize it here into my funkatronic little AI bracelet that I fucking love so much, and that was fully comped for me by my former boss, Dan Savino, who has passed away from cancer, and I am super fucking sad because that guy was fucking awesome. So I kind of consider the fact that I initially thought this tool was junk, and honestly, it was.
The back end was very nascent, and it was a glitchy app experience, but since then, it has been acquired by Amazon, and they've really tightened things up. Obviously it's Amazon, so at the very least, you can consider that there's reliability here.
So they have my AI bracelet, it is always going to be reachable, and I'm always going to be exporting stuff out, obviously, but man, what a crazy, crazy innovative technology this is, and it's seriously so fucking awesome. Big applause. That being said, the same exact thing just happened there.
I was able to ramble right around what I was getting at, but I'm going to close this out here with a nice conclusion. Are you ready? I don't think that most software engineers today know this, even those who have had the courage to use agentic tools for, let's say, the past twelve to eighteen months that they've been rolling out and innovating here.
Maybe a little bit longer than that, twenty-four months, right? So if you have had the courage to continue to use those tools, you may not know this, but it's another one of those things I'm super happy to be able to inform people of, because I get to see the light in their eyes and the light bulb go off when they realize what they have.
But if you use one of these tools, such as Cursor, Codex, or Antigravity, you should know that each and every chat, as far down and granular as the exact function calls and point-in-time copies, can be reconstructed from these forensic artifacts of your chat, and your input as a user is fucking gold to you. It should be of incredible consequence to you because, especially if you've been using these tools for a year, you probably have no less than hundreds and hundreds of separate chat threads.
And what that all amounts to, rolled up, in case you didn't already put this together, Doc Brown, is a model of you as a software developer. Especially as more software engineers leverage this technology and switch from mostly writing code by hand to mostly not, they will be creating semantic artifacts, a behavioral model: the ways you interact with the AI in the deep-down pits of a problem, how you orient yourself, where you fork off, where you start a new chat, how you manage the context window, what you do when it seems like the AI is going around and around, what your initial kickoff questions are about a particular code base or a coding problem within it.
All of this is going to eventually roll up into your technical organization. It will be ingested, it will be summarized, the features will be extracted, automatic documentation will be produced.
Your coding agents will talk to the coding agents of your colleagues, and together, everyone will have this sort of digital engram of an intelligence that is taking action agentically within the organization on their behalf, and then knowledge work becomes this new role, right? So we're leveraging these tools, and again, it's an ongoing burgeoning field.
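The claim above, that hundreds of agentic chat threads roll up into a behavioral model of the developer, can be sketched concretely. The snippet below is a hypothetical illustration only: it assumes chat sessions have been exported as JSON files of role/content messages (Cursor, Codex, and similar tools each use their own real on-disk formats, which this does not reproduce) and extracts a few crude behavioral features, such as kickoff-question vocabulary and thread length.

```python
"""Hypothetical sketch: mining exported agentic-coding chat logs
for a simple behavioral profile. The directory layout and message
schema are assumptions for illustration, not any tool's real format."""
import json
from collections import Counter
from pathlib import Path

def profile_chat_logs(log_dir: str) -> dict:
    """Roll many chat threads up into simple behavioral features."""
    stats = Counter()
    kickoff_words = Counter()
    for path in Path(log_dir).glob("*.json"):
        messages = json.loads(path.read_text())
        user_msgs = [m for m in messages if m.get("role") == "user"]
        stats["threads"] += 1
        stats["user_messages"] += len(user_msgs)
        if user_msgs:
            # The first message of a thread approximates the
            # "kickoff question" style the text describes.
            kickoff_words.update(user_msgs[0]["content"].lower().split()[:20])
    return {
        "threads": stats["threads"],
        "avg_user_messages_per_thread": (
            stats["user_messages"] / stats["threads"] if stats["threads"] else 0
        ),
        "common_kickoff_words": kickoff_words.most_common(10),
    }
```

A real pipeline would feed features like these into summarization and documentation stages rather than stopping at word counts, but the shape of the roll-up is the same.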
A lot of these concepts are not proven out. However, we have entered into a bit of an arms race scenario where it is no longer tenable to wait and see about adopting these tools into our daily business workflows, particularly as of this past December 2025. It's now February 5th, 2026.
Yeah, as of this past December 2025, there's been a massive sea change. Engineers today have a huge model of themselves as software engineers, and they are going to be able to spin versions of themselves off, not only to take action as them in an organization, code base, project, or life context, but to approach problems in the same way and boil up the reports that are most useful and germane to you. For a lot of people those are going to be the same types of reports, but you're getting that statistical middle ground with large language models anyway, right?
So you're going to be able to create a high-fidelity version of you who can interact in this way, a full agentic model of you as a developer. Zoom in, for example, on a particular code base that is your job at work. This is so common, right?
You have one guy or girl whose one job is to manage this code base: add features to it, change the UI, whatever it is. If you use Cursor, Codex, or Antigravity for a year, two years, you will be able to very cleanly copy off a version of yourself whose day-to-day job is just to field those requests and turn them into actual concrete code changes: your digital version.
Again, to be so clear, this technology is entirely here and entirely possible today. It is simply in that very nascent state that I've been touching on: you have to set up so much extra cruft, you have to kind of fill in the gaps to help it cross the gaps, right?
So to basically make it work, you patch things together, do a fucking bodge job, to make this work in exactly the way that I'm talking about, which in the very near future will be an incredibly elegant end-to-end solution for this purpose. It will be all packaged up, baby. Full enterprise.
They're fucking building it right now. They're building it. So get ready. But yeah, man, agentic orchestration platforms are going to be the big fucking next wave, dude.
It's February 5th, 2026, and I have a good feeling that eight out of every ten software engineers is working on an agentic orchestration platform. And I bet what you're building right now, Intrepid Adventurer, falls into that category. Doesn't it? Doesn't it? Doesn't it?
That's what software engineering is now in 2026. Welcome to 2026. You're not a software engineer, you are an agentic swarm manager, an intelligence manager. So, you know, welcome. Welcome to 2026-ish.
Chapter 07 — Domestic Interlude
Domestic interlude
This chapter is retained only in compressed editorial form because the raw transcript was built from intimate family conversation, household logistics, and named private third parties. What survives in the public edition is the structural point: the sourcebook was assembled inside ordinary life, not outside it, and the domestic frame helps explain why the promise of AI is repeatedly measured against everyday exhaustion.
Feb 5, 2026 at 5:53 PM
Grounded, private, and intentionally compressed.
The chapter's public value is tonal rather than narrative.
Ordinary life remains the benchmark against which grand technological claims get tested.
Private family material was removed because the chapter did not need raw disclosure to carry its point.
Chapter 08 — Building A Private Writing Rig
Building a private AI writing rig
This chapter focuses on building a private writing and summarization environment for one-person, high-context work. The public edition removes availability-for-hire framing and internal workspace naming so the emphasis stays on disciplined tool design, iteration speed, and local control.
Feb 6, 2026 at 1:01 PM
Practical, builderly, and process-oriented.
The central question is how to build a writing system that preserves momentum instead of interrupting it.
Local and private tooling matter because the workflow is deeply personal and high context.
The public edition keeps the systems thinking while removing recruiting residue and internal labels.
All right, we've got an interesting one here. Basically, I'm getting recruited for AI roles, and I would love to take this moment to publish something on my personal website that would show globally across all of my pages in this code base: some sort of obvious, dismissable banner that would fade in at the top and slide the rest of the contents down in a very intentionally designed way.
Because, again, we're going to want to remember that this code base was very intentionally UX designed, so any changes we make need to be up to that standard of quality and not just slapdash, not just vibe-coded, ham-fisted insertion, but actually intentional design here. I want to basically publish a blog post, and you can lay the groundwork for us here while I get everything together, if you wanted to. But actually, no, there's a reason why we're here in the [product_that_i_built] observatory instead of my portfolio repository: I actually need your help to build out our AI writing tools, and I would like to use our local LLM cluster for this, because the tool that we create is going to support some intensive workflows, and we only have a customer of one, and that's myself.
It's this very minimal, clean user experience where you just plug in your corpus of whatever size. Obviously, if we have a small corpus, that makes it very easy.
We can be working out a situation where we have a left-side writing area, like a scratch pad for actually typing it out and pasting your shit in. And then the right-side sidebar would be the controls to interact with the AI and let it do its thing and run surgical updates and suggestions that you can accept, deny, or send back.
You know, obviously we're using JSON mode for everything here on the local LLM side, right? So probably just using Gemma for the purposes of this, to see what we can do with Gemma, because Gemma is pretty good at this sort of thing.
So we'll see what a small-scale Gemma implementation could do for us here. But that's the first step I want to take as part of today's overall project, because this is relevant, this is germane to the blog post, I guess we can call it a blog post.
I refer to it more as thought leadership in some way, but I figured I'll let you cross that bridge when we get to it, and you can decide the best title for it, the best location, and the integration of it with the rest of the site so that it looks very intentionally incorporated and not, again, just ham-fistedly tacked on. Especially since we're going to be putting this globally across the site, no matter where you enter from, so you'll have to see it and dismiss it if you want, or ignore it.
And I have some notes that I'm going to share with you, some checklist items. You can ignore everything that doesn't have to do with the writing tools, because the deliverable you're about to produce for me here is actually a version one of this foundation, right?
Go ahead and create all the models that we would need, plus the ability for us to stack up a bunch of work and manage the task queue, send that out to our local cluster, and then process the responses so we can display them in the Grammarly-style user experience that we're going for here. Or Grammarly-inspired, I should say: a version that's even stronger, one that actually serves as a really strong foundation for me as the engineer here, working with you, to build additional features on top of and to experiment with. Like, oh my god, wow, I'm seeing a lot of value if I go about the implementation of this particular analysis and approach it from this particular angle.
If you could help me out with that. I imagine the tools we would need here might involve setting it up with some function calls, so that the AI is able to make very succinct and truncated function calls to update text.
That would be how we handle it. And then from there, I think we can have you scope this out and think about this.
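The "succinct and truncated function calls to update text" idea above can be sketched as a minimal edit contract. Everything in this sketch is an assumption for illustration: the field names (`find`, `replace`, `auto_apply`), the idea that the local model emits this JSON in JSON mode, and the accept/deny queue are one possible shape for the described Grammarly-style flow, not an existing Gemma or vendor API.

```python
"""Hypothetical sketch: a minimal "surgical edit" contract for a local
LLM running in JSON mode. The model emits small, structured edit calls
instead of rewriting the whole document; the schema here is invented."""
import json

def apply_edit_calls(text: str, raw_json: str) -> tuple[str, list[dict]]:
    """Apply auto-approved edits; collect the rest for accept/deny review."""
    calls = json.loads(raw_json)["edits"]
    pending = []
    for call in calls:
        # Each call names an exact snippet to find and its replacement.
        if call.get("auto_apply") and call["find"] in text:
            text = text.replace(call["find"], call["replace"], 1)
        else:
            pending.append(call)  # surfaced in the right-side sidebar
    return text, pending

# Example of what the model's JSON-mode output might look like:
model_output = json.dumps({"edits": [
    {"find": "teh", "replace": "the", "auto_apply": True, "reason": "typo"},
    {"find": "very unique", "replace": "unique", "auto_apply": False,
     "reason": "style suggestion; needs user approval"},
]})
```

The appeal of this shape is that each edit is small enough to diff, approve, or reject independently, which matches the accept/deny/send-back interaction described for the sidebar.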
Chapter 09 — Technical Judgment Under External Review
Technical judgment under external review
This chapter has been rebuilt from interview preparation into a cleaner public statement about technical judgment, enterprise restraint, and what serious AI-native engineering actually looks like. The public edition removes the named company, recruiter context, and indiscreet hiring-process texture while preserving the underlying case for disciplined innovation.
Feb 7, 2026 at 4:06 PM
Strategic, self-aware, and edited toward professionalism.
The chapter is best understood as a statement about judgment under scrutiny rather than as a private interview memo.
Innovation and restraint are presented as complementary rather than oppositional qualities.
The public edition keeps the philosophy and removes the reputational risk.
Chapter 10 — AI Freedom Individualism Advocacy
Freedom-seeking AI politics
This chapter translates the sourcebook's technical optimism into a political instinct: AI should increase the power of ordinary individuals rather than make them more dependent on centralized gatekeepers. The public edition preserves the freedom-oriented framing and trims the minor company residue that drifted in from nearby interview material.
Feb 8, 2026 at 12:13 AM
Defiant, ideological, and tightly aligned with the larger manifesto.
The chapter argues for local capability, private compute, and bottom-up empowerment.
It works best when the civic thesis stays ahead of brand or project-specific detail.
The edited version keeps the independence argument crisp and public-facing.
I just thought of this. You are intended to, let me repeat that.
You are intended to download the application and interact with it with an AI agent. You download it, you talk to the agent, you ask it to create a new spell, you tell it what you want it to create.
So you go in there, you tell it. Oh my god. I already have it built out to where you give it the feedback, and then it knows how you feel about certain motifs and why.
And then from there, the ones that you create will be better. It's self-improving. It's the next generation of software, man.
For your context, the Rare Signal spell engine, or whatever we're going to call it here, is basically a studio, like a branding engine, for Rare Signal, which is my AI company, obviously, where visually we need the web to be our playground of visual splendor, and the visual metaphor of the Rare Signal is conveying the computer discovering something brand new. Just wanted to mention that one, okay?
And by the way, for my write-up on Common Sense 2026: AI in America, I think the most important thing, the biggest thing I can advocate for, is an individualistic approach to AI where the individual, again, it's the American way, bears the responsibility for the defense of democracy, for the defense of justice in America, for the defense of freedom around the world, in as much as we can do on that front, obviously, right? So this is no different.
I think that there is an incredibly American, incredibly patriotic responsibility in service of liberty, in service of a more just and righteous and good world. Whether you want to think of it as the Manhattan Project for every American, or every American as a minuteman with AI, it is so important that they understand it, that they're informed of what they might need to do or be aware of, right?
Just wanted to mention that too. Let's check the transcript. Here we go. Yeah, okay, so I didn't mention everything I wanted to say, but the other part of that is, in pursuit of the implementation of this technology for good, you're going to encounter a lot of bullshit.
You're going to encounter a lot of people who are misinformed, hearts in the right place, but doing the wrong thing, and you're going to hear a lot of bullshit: petitions, experts weighing in on this and that. I am, more than anything here, advocating for an individualistic strategy, an approach, a viewpoint, and an individualistic playbook for AI that is very much ground-up: you, you, and your local village need to consider what AI needs to do for you.
There is no top-down, there is no big-AI approach or solution to the problems that AI poses to the likes of democracy, to freedom, to justice, to liberty, right? So, Elon Musk, you are training Grok as a maximal truth-seeking engine.
Okay, alright, cool, admirable. I like that, I like that, but listen: what about a maximal humanity freedom-seeking engine?
Something whose sole purpose was to work towards that aim, right? Freedom-seeking. Think about that.
Chapter 11 — New summarization engine planning
Summarization engine redesign
This chapter moves from theory into tool construction, outlining the design of a better summarization and memory pipeline for ongoing work. It survives largely intact because the planning detail is useful, non-sensitive, and directly relevant to how the broader manuscript was assembled.
Feb 8, 2026 at 10:49 PM
Focused, technical, and product-minded.
The chapter treats summarization as infrastructure, not ornament.
Its value lies in concrete design thinking about memory, selection, and momentum preservation.
The edited version mainly keeps the planning discipline and trims incidental clutter.
All right, so what we are going to do is, okay, I'm going to just rattle off this infrastructure for you. I'm going to rattle off some system specs, and I will lean on you to make sure we get everything documented, every bespoke consideration accounted for, itemized, bullet-pointed out, and worked into some kind of a plan after the fact.
But I'm just going to start by outlining my vision, and that is going to just inherently lead into some details of technical implementation here. And I'm not super worried because it should be generally pretty simple.
But maybe I'm wrong on that, right? So the idea here is we obviously, and we already updated recently our summarization contract.
We kind of reduced down the complexity. I think we're probably going to do something similar here.
But it's basically a new and improved version of this. We're going to kind of replace the existing logic, slot in this new framework, and kind of use this new framework as the foundation of like building the new user interface and the new tools, and search is no exception here, right?
So, that's part of what we were talking about above, just to not lose that context and that thread. And if you could do me a favor, from here on out, until I tell you to stop: just output the overall top-level plan that I gave you, the multi-phase plan, and where we are in that overall plan, so that we can keep that as our compass, right? So, thank you for that.
Again, it's going to be super rambly here. I'm going to end up with a giant wall of text for you, but that's what I really appreciate about you: your ability to kind of turn that chaos into pure synthesis.
So, again, really appreciate you working with me on this. But, I'm going to go into the new summarization engine that we're going to use here.
Ready? So, setting the stage here, we have this concept of these chests. We have a really strong integration and ingestion pipeline that pulls everything in for us, gets the data where it needs to be, and creates a really nice foundation for us to build new features on top of.
Now, obviously, one of the most crucial and important aspects of the system is its ability to pick up on loose threads that were left today, yesterday, last week, last month, even last quarter, right? As we continue to use the system, it's going to unlock a wider time horizon of benefits for us in that regard.
But the real value and key is this concept that each and every chat can be extremely, extremely long, right? That is one aspect where we are going to get a little bit complex, because we're going to have some custom logic to handle when my messages are absolutely massive, for example. We're going to do a little recursive summarization: if my message is absolutely massive, then we reduce it down as losslessly as possible, right?
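To make the recursive idea concrete, here's a minimal sketch of that reduction, assuming a hypothetical `summarize()` call standing in for the local cluster; the character budget, the split strategy, and the function names are all illustrative assumptions, not the actual implementation:

```python
# Sketch of the recursive reduction described above: a small message
# passes through untouched; an oversized one is split, each half is
# reduced, and if the rejoined halves still exceed the budget they
# are summarized and reduced again until the result fits.

MAX_CHARS = 4000  # illustrative budget, not a real limit from the system


def summarize(text: str) -> str:
    # Placeholder for a real call to the local cluster; here we just
    # truncate so the sketch stays self-contained and runnable.
    return text[: MAX_CHARS // 2]


def reduce_message(text: str, max_chars: int = MAX_CHARS) -> str:
    """Recursively summarize an oversized message as losslessly as possible."""
    if len(text) <= max_chars:
        return text
    mid = len(text) // 2
    left = reduce_message(text[:mid], max_chars)
    right = reduce_message(text[mid:], max_chars)
    combined = left + "\n" + right
    if len(combined) <= max_chars:
        return combined
    return reduce_message(summarize(combined), max_chars)
```

With a real model call in place of the truncation, each level of recursion trades a little fidelity for a lot of length, which is the "as losslessly as possible" trade-off described above.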
And we're obviously going to have to do some testing on this. So whenever you give me your next message with your thoughts on this, I'm going to fork us off, and this chat will be explicitly dedicated to this update to our summarization engine, because, again, it's going to require you actually running prompts against our local cluster to test the outputs and validate their quality in relation to our system, to roll things up here.
So, it's going to be extremely test-driven in terms of the acceptance criteria that we build in here, and you will be very hands-on in handling all this testing and obviously executing the plan end-to-end.
My role here is just to be your kind of product manager, right? Somebody who kind of gives you the spec and tells you whether or not the solution that we've created is on the money.
So, that's what I'm hoping to go for here. So, yeah, we're going to fork this chat off.
And in another chat, I will probably go ahead and move on to the second phase, laying the groundwork for it. So, sorry, I'm just thinking here.
Actually, instead of using this message to go more in depth into the summarization engine, I'll save that for future messages so that I can manage attention, manage the recency window, and not flood us here. But yeah, I'm going to break this off, and like I said before, go ahead and review this giant block of text and extract out what it is that we need in order to move forward.
I'm obviously interested in your thoughts, and we will fork this chat and go from there. All right, brother, no offense, but I want to be somebody who talks very verbosely, and I want you to be somebody who talks very succinctly, because I think that is what will serve this relationship best in the context of this project.
But keep it short, keep it sweet for me. Another thing: feel free to suggest things, but for the most part I'm going to try to tell you how it's going to be on this one. We're going to be nimble here; we're not trying to get lost in the weeds. Like I said, we're going to do some testing, but we need time bounds on this.
We have very, very little time and need to be moving very, very quickly on this, and I don't think it's going to take much testing. It's very easy to get lost in the weeds on this stuff, so I'm going to be very strict and stringent about anything that looks like it's going to take a lot of time.
Obviously, testing is one area where that can happen, because you are making a change to the prompt, running the prompt, evaluating the output, and then maybe going around that circle again and again. So that's the risk, that's the time-sink area that I identify as part of this, right?
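That change-the-prompt, run, evaluate loop could be time-boxed explicitly. Here's one bounded shape for it, where `run_prompt` and `score_output` are hypothetical stand-ins for the real cluster call and the acceptance criteria; the revision heuristic is purely illustrative:

```python
# A time-boxed version of the prompt-iteration loop called out above
# as the main time sink. The hard cap on rounds is the point: the
# loop stops even if quality never converges, so testing can't
# silently eat the schedule.

def tune_prompt(prompt, run_prompt, score_output, threshold=0.9, max_rounds=5):
    """Iterate on a prompt until it meets the bar or the round budget runs out."""
    best_prompt, best_score = prompt, score_output(run_prompt(prompt))
    for _ in range(max_rounds):
        if best_score >= threshold:
            break  # acceptance criteria met; stop early
        # Illustrative revision step; a real loop would edit the prompt
        # based on what the evaluation actually flagged.
        candidate = best_prompt + "\nBe concise and preserve every key fact."
        score = score_output(run_prompt(candidate))
        if score > best_score:
            best_prompt, best_score = candidate, score
    return best_prompt, best_score
```

The threshold and round budget are exactly the kind of time bounds being asked for: they turn "keep iterating until it feels right" into a fixed, reviewable cost.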
So I want us to be very, very aware of that. And if you give me a thumbs up that we can fulfill those initial requirements, I will go ahead and describe the updates to the summary engine. It's going to involve a lot of description of the daily use case, and we're going to consider this a fresh, clean slate.
I don't necessarily want to hear about the way things currently are in the code base, because again, we're dropping in the new way that it works. You could even tell me that we lost all of the existing data on all of the existing chats and we're starting fresh; obviously we're not going to, but that would be entirely compatible with the approach we're looking at here. Because all we're doing is updating the behavior of our local cluster: what we want it to do, and what sort of artifacts we want it to generate from the chats coming in, right? We already have a fantastic foundation for making sure that stuff chugs away as needed, as expected, right?
All we're doing right here is changing the nature of the data we collect and changing the JSON properties. It would not be outside the realm of possibility, and in fact we probably will find ourselves creating a new Django model for this purpose and re-implementing the areas where we were leveraging this data, because frankly I'm not using it; it's not actually going to any good use.
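As a sketch of the kind of record that new Django model might hold, here's the artifact shape as a plain dataclass; every field name here is an assumption for illustration, not the actual schema, which would come out of the spec:

```python
# The new summary artifact sketched as a plain dataclass: the kind of
# thing a new Django model would store in place of the old JSON
# properties. All field names are hypothetical.
from dataclasses import dataclass, field, asdict


@dataclass
class ChatSummaryArtifact:
    chat_id: str
    level: int                    # 0 = raw batch summary, higher = rolled up
    summary: str
    open_threads: list[str] = field(default_factory=list)  # loose ends to resurface
    created_at: str = ""          # ISO timestamp, filled in by the pipeline

    def to_json_dict(self) -> dict:
        """Serialize to the JSON shape the rest of the system would consume."""
        return asdict(self)
```

Keeping the shape this explicit makes the "consider the existing data deprecated" move cheap: old rows simply don't match the new record type, and nothing downstream has to guess.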
So keep that in mind, along with the fact that I have an extremely strong vision for the new summary engine and exactly how I'm going to use it. For God's sake, we can consider the existing data completely deprecated and old; we tested it out, mission accomplished, that's fine.
But moving forward, we're going to be collecting probably an entirely new suite of data from our chats: how we batch messages together, how we progressively and recursively summarize those messages up, and what particular types of artifacts we expect to be created as a result of this overall process, right?
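The batching-and-rolling-up pass described here could be sketched roughly like this, with a one-line placeholder standing in for the real summarization call; the batch size and join behavior are illustrative assumptions:

```python
# Sketch of the progressive rollup: messages are grouped into
# fixed-size batches, each batch gets a summary, and batch summaries
# are folded together level by level until one rollup remains.

def rollup(messages: list[str], batch_size: int = 4) -> str:
    def combine(chunks: list[str]) -> str:
        # Placeholder for the local cluster's summarization call.
        return " | ".join(chunks)

    # Level 0: summarize each batch of raw messages.
    batches = [messages[i:i + batch_size]
               for i in range(0, len(messages), batch_size)]
    summaries = [combine(b) for b in batches]
    # Higher levels: keep folding summaries until one remains.
    while len(summaries) > 1:
        summaries = [combine(summaries[i:i + batch_size])
                     for i in range(0, len(summaries), batch_size)]
    return summaries[0] if summaries else ""
```

Each intermediate level is exactly the kind of artifact the new data suite would persist, which is what lets the system pick up loose threads from last week or last quarter without re-reading every raw message.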
So, that's where we're at, and I wanted to just kind of get the thumbs up from you on that basically.
Chapter 12 — Debunking AI Underclass Myths
No permanent AI underclass
This chapter rejects the idea that AI must inevitably create a permanent class of disempowered humans while a small elite controls intelligence. The public edition keeps the rebuttal forceful but smooths the rhetoric so the chapter reads as a coherent civic argument rather than a reaction against straw men.
Feb 9, 2026 at 12:23 AM
Combative, humane, and socially oriented.
The chapter insists that capability distribution is a political and product choice, not destiny.
Its strongest examples are the ones that connect AI access to dignity and practical agency.
The edited version keeps the moral argument while reducing unnecessary heat.
Okay, so picking up on some general pontification on the subject of my open letter to all Americans, Common Sense 2026, AI in America. I think one of the major risks we can foresee here, and you already start to see it, is this idea of a permanent AI underclass, and there is some conflation here. A lot of people are echoing and parroting this line, some joking, some dead serious, that you have X amount of time to escape the permanent AI underclass, and that if you don't hustle right now as fast as you possibly can, you will simply get left behind, in that every opportunity that can ever be actioned will be actioned.
And this is preposterous on so many levels, but there is a very real reason why we've arrived at this boogeyman version of the actual ground truth of the problem we face moving forward. I'm going to dispel and demystify and bust the myths that I've been hearing, because honestly it's been pissing me off. We cannot be having this stuff happen, America.
We cannot be having the wool pulled over our eyes and to allow people to be skirting along in these areas of our misunderstanding when it comes to how this technology works. You should be afraid of this technology in the same way that you should be afraid of your automobile.
You should be afraid of this technology in the same way that you should be afraid of the water heater in your home. However, they are the infrastructure powering our access to the future.
So it does not benefit or behoove us to start from this sort of standpoint and perspective of disunity, but rather I feel that it's pretty important that we start to coalesce around some of the areas that pose real, actual, substantial risk. And again, I don't want to be telling anyone what they should or shouldn't feel like they should be worried about.
It's not for me to be saying that shit. It's not for anyone to be telling you what you think you should be worrying about.
That's not how individualistic thinking works. That's not how an educated mind works. An educated mind doesn't take the sugar-coated version on blind faith. You can feel free to trust, but verify; that's the idea.
Let's see, what am I even talking about here? Yeah, in the overall theater and arena of ideas, we should be coming together to pave the way forward, to predict and foresee what might go wrong, and to action and course-correct while we still have the time, before serious mistakes are made.
We can be having those conversations, and we are, as a nation. But in the overall inertial momentum of this riled-up political environment, there seems to be absolutely no room for nuance, for in-depth discourse, with a certain type of person who is much more interested in where you fall, in defining what the sides are, who's with us and who's with them, and in immediately fixating on that as the anchor for what seems to matter. If I had one point to make, it would be that the moment demands we rise above that level of petty squabble. Because if there's one thing I can say that's more important than anything else about AI, it's that we have to get this right. We have to thread the needle so perfectly, because there are so many opportunities to totally and completely blow it and throw us off our golden path to utopia.
And we have a chance here. It is my absolute insistence to my fellow Americans that over the course of this century, it is our responsibility to transform this country, and indeed this world and this solar system, into a paradise of freedom, of liberty, of expression, of mobility, of growth, of accomplishment, of the eradication of all suffering of consciousness, of rendering moot the need for vile and despicable acts. And regardless, giving so much room and so much power to the individual, and so much space and freedom and mobility to individuals and to groups and to communities, that even if there is an evil to escape, escaping is trivial. The future is fully unlocked.
There is no more question of what technology will come and shape our future; it is already here. The ship that we're all on together is moving faster now, and it must be steered. And come hell or high water, a direction will be chosen, whether or not the captain's wheel is being actively fought over, causing us to pitch left and right, causing people to get hurt, causing us to go in no direction at all.
Right? And these are the problems that America has always faced. And I think it is for all of us to ensure, just as the Minutemen knew what it meant to call yourself that, that there was a certain responsibility and a certain power there, and that it meant something fundamentally different in America: the average individual American made a conscious decision that they were going to fight for a certain outcome.
Now, thankfully, we're not talking about revolution here, we're not talking about upheaval in the way that we had in the birth of this nation 250 years ago. And in fact, our work is so cleanly cut out for us, it's almost enough to make one weep that there would not be immediate coalescence and acknowledgement and spirited debate and togetherness.
That means applying this so surgically to the areas of our society where it is going to have the most immediate and most outsized impact. America, we are three months into the intelligence revolution, three months into a new era, a new paradigm where everything has changed, and we need to hit the ground running; we need to build out. And perhaps you've noticed some of the smartest people you know have gone pretty radio silent, because they're realizing the scope and scale and potential of what all this can do, and just what impact it's going to have on our society.
And that's all well and good, but it's unacceptable to me, and I think to the average American, the thought that the only ones who should go on to benefit from this gold rush are people who are jealously guarding their technical knowledge and their foresight of what is to come, hoping to profit and make a quick buck. I can't blame them, to be clear. I can't cast aspersions.
It's the smart thing to do right now. It's objectively the smart thing to position yourself right now, to take advantage of what's happening in the world, to position yourself as someone they are initially referring to as a quote, high-agency individual, unquote.
But already you're seeing the negative connotations; the groundwork is being laid to wave off what is ultimately to be the ground truth reality of what it will look like to be so productive with AI. We're going to get there. That's going to take a lot of time, and you would hope that the AI is going to step in and play a significant role in speeding this all up for us, but I think it remains to be seen, right?
But you can bet your sweet bippy that if you can't imagine mom and dad using agentic swarms today, if that's beyond the pale in your estimation and imagination, just understand and be aware that the interface for mom and dad to be able to do that is coming. It just needs to be built, validated, marketed, and rolled out, and it needs to take over and reach saturation.
And whether or not we get there in any reasonable time, these things do take time, and the pendulum swings back and forth. A lot of people really do not like AI, and I'm going to be so real with you right now: I agree with them.
I am saying that as a technology professional who has been obsessed with AI and machine learning for a significant portion of my life. And the reason why there's so much hubbub and hullabaloo about all of this is that it's not manufactured; it's very real.
There is certainly manufactured misinformation at play within the arena, but it goes without saying that with AI, and especially this iteration of AI, you have, on the one hand, people espousing and evangelizing that this technology is here to change everything, which I am aware includes myself to a large degree. And on the other hand, you have people who basically think that this is the worst technology that has ever been invented.
To them it is tantamount to a bullet to the head for humanity. They can't conceive of any kind of a good outcome with this technology at play, and I think it is, unfortunately, a conflation.
There is so much conflation, and I haven't even returned to the original point I was making here where I'm going to bust these myths, right? This crap about the AI underclass.
It's not that rich people are going to come in and try to enslave people with AI, with digital middle managers and roving robot mercenary groups to subjugate the populace. It is simply that rural people, people without financial resources, people without access to expand their technical knowledge, people without access to hardware and all the other aspects of this that require access to technology: those people are going to fall behind.
That right there is your quote unquote permanent AI underclass. I think that conflation is somewhat harmful.
This idea that people should be made to feel like they're going to be enslaved if they're not doing this or that. I understand that, on the one hand, it is kind of a scare tactic from someone, probably, to try to encourage people to engage with this technology and to kind of state in a very hyperbolic way and make the public aware of this risk, of this danger, because they need to be aware of the reality that people without access to this technology will fall behind.
They will be made, at a certain point, into a kind of underclass where AI is happening to them rather than something they are participating in. Right? And that's always the big difference between engagement and disenfranchisement.
Is this happening to me, or am I a participant of this? And that's going to control how I kind of react to this.
And I think it's pretty harmful when you have people thinking one thing when, look, the world will not move forward because of someone's evil machinations, but because any other way is now too slow or non-optimal to make sense as a viable option. And that's Luddite shit, right?
Like, that was a real group of people that really suffered. Again, this is hearkening back to what I had mentioned in that other section and that other voice clip, I think, last week.
But I think people are upset that their very well-understood and well-established way of doing things, similar to handmade textiles versus machine-made textiles, is changing. Their understanding of the way that things work is changing, and there is obviously a public mood about this technology with regard to that. Uncertainty, for many people, is bad.
They don't like the idea that aspects of their job, aspects of the job market that they have inhabited for so long, could be so drastically changing. But one thing that we need to dispel first and foremost is that there is no conspiracy to enslave a permanent AI underclass.
This would be a group of people ostensibly excluded from the upside of artificial intelligence, but still in the pipeline, in the overall chain of somebody else making money off of them with AI, right? Every time they walked into a completely autonomous grocery store powered by AI, every time they went to buy strawberries from some AI strawberry cart staffed by an autonomous intelligence that can process transactions and vend things, right? Like a street dog stand, whatever.
Like, I don't want to prescribe exactly what the future is going to look like. I can only speak in the metaphors and analogies that make sense to me today, right?
So I want to make it very clear: the idea that people are going to be enslaved by tech bros is not what I consider a valid reaction. Rather, it's going to be a new landscape of influence, of compute contention, of little digital fiefdoms.
And yes, you're going to have instances where people are roving around wielding their power, and it will be made a crime, I don't know what the crime will be called, to gang up on someone with your intelligence swarm; it will be filtered and flagged under some level of computer crime, right? As soon as it becomes common and prevalent enough, and we have enough instances of this issue in the courts. But imagine somebody, for example, who doesn't like their colleague at work, so they set their agentic swarm to task to do whatever, right? Like dig up dirt.
It could be any number of things, and I'm going to be circumspect here, because I'm not going to hand you a fucking guidebook right now, fellow American, on how to really pull one over on your co-worker. But I do want to make you generally aware of the overall picture of what we're dealing with in this technology. And you can imagine the potential damage is going to be different if you're talking about a guy who has access to, let's say, a MacBook with 16 gigs of RAM, versus somebody who has a little home-lab server farm, versus somebody who has access to a data center and can wield fully integrated intelligence loops powered by data-center-scale operations. That last one could be very, very dangerous.
And these are the kinds of conversations that we should be having. We shouldn't be talking about things like, will I be forced into poverty forever?
No, you will be expected to engage and interact with these tools, obviously, in the same way that you do with the internet, because you're reading my words over the internet, presumably. I assume that you didn't encounter this in the local town square, and I assume that the local printing press didn't bust out a copy of this for you.
And I assume that this was not airdropped over your county as a propaganda piece. So, yeah, you're using a technology that did not exist when you, or probably your parents, were born.
And you are expected to use this technology if you are going to be a member of the workforce. There was a point in your life at which you had to understand that having a job meant being somebody who knew how to use the internet and upskill that way.
And that's why you see so much focus on upskilling in the workplace: because the workplace contains people spread across a diaspora of technological eras. I saw this technology and that technology; I was there in the 1970s and saw microprocessors, right? I saw fucking everything.
That is the type of person who has to reframe every so often. So, if you're feeling uneasy about this, I think you're in good fucking company with the rest of humanity as it relates to this technological singularity that we're in.
So, I don't want to hear people saying that they're going to be enslaved by tech bros. I don't want to hear people thinking that they are going to be forced into poverty if they choose not to use AI.
It is simply going to be the equivalent of, let's say, in 2026, being someone who chooses not to use the internet. It's like, how are you going to live?
How are you going to get ahead? How are you going to do this, do that, insert whatever thing it is that is the equivalent of looking at your map on your phone today or sending a text message kind of at the speed of light?
Right? That is the danger of the underclass. To start with, it'll be this very vogue, avant-garde thing, I think, to say: I'm just completely digitally Amish.
This was something we were talking a lot about back in 2021 or so. But the idea of somebody who's digitally Amish: I'm offline, I have a flip phone, I don't even want a smartphone anymore, right? That's a lot of what I hear from people.
And I obviously believe that point of view is a little bit silly. I can understand pining and being nostalgic for a bygone era, and I understand, again, my biased perspective as a technologist.
But again, these are the technologies that are curing disease, these are the technologies that are enabling peace. These are the technologies that are going to elevate the human race, and I think it is, dare I say it, fucking infuriating, this idea that there would not be some level of unity and rallying around a technology that is going to be so transformative in the course of our species' history, to where you have a legitimate argument to make that this is more important than the internet, more important than splitting the atom or the industrial revolution.
That's where we're at right now. Humanity has birthed a child, and now we have so much responsibility. It is at that level.
We have unleashed something unto the world. We have derived novel value, a kind of alchemy from matter, from mere rock and substrate.
We have now solved a form of intelligence, and even before we get to any of this high-falutin crap about what it could mean for the rest of the universe we inhabit, we can think about what the thing we just created here means for us, for you and me. And I think it means everything.
I think it means absolutely everything, because now we can be advancing on an exponent. We can be using the AI to make the AI systems that will do the AI to get us to that meta level of emergent effect to get the next level of unlocks, right?
To be able to cure cancer, to do incredibly high-level optimizations, to get to a point where, of course, we have flying cars, right? To get to a point where embodiment is solved, where there's no such thing as human disability because of the overlap with cyborg technology, right? That's another aspect of it.
So, again, we have some serious short-term downside risks, and I believe we, as Americans, need to band together and show the world how it's going to be, right now. We need to show some American exceptionalism and demonstrate to all the fucking haters and all the fucking losers and all the tyrants and all the autocrats who would try to put a bad name on this country. You know the type of person. They try to draw false equivalences between America and others. It's a lot of whataboutism, a lot of this, a lot of that. That kind of bullshit pisses me off, man. It's splitting hairs. I don't know; I'm going to stop talking in circles and stop-starting here.
But yeah, all that to say, man, that's what the underclass is all about. Don't conflate it. The world is moving forward. This technology is like a superpower.
Chapter 13 — AI's Impact on Future Employment
Work changes before meaning disappears
This chapter argues that AI will wipe out a great deal of routine and pseudo-routine labor, but that this does not amount to the end of human value or purpose. The public edition keeps the employment transition argument and removes the most needlessly inflammatory framing around job categories.
Feb 9, 2026 at 1:13 AM
Provocative, economically focused, and still optimistic.
The chapter distinguishes between the loss of repetitive labor and the loss of human meaning.
It frames labor disruption as a design problem for society rather than a reason to halt capability.
The edited version keeps the force of the argument without leaning on hot language.
And people often ask me for my prediction on things like unemployment and job disruption. I think the elephant in the room, the truth they would rather not speak, at least when they're under the gun on this kind of thing, is basically this: if you have a bullshit job, that bullshit job could be in danger, right?
And when we say bullshit job, that basically means any kind of job that is brainless: take input, produce output, right? There were many jobs like that, even as recently as a few years ago.
And certainly, when I was getting started in the technology world, it was incredibly common to see job postings for people doing what was called data entry, which, even after decades of working in technology, I still don't entirely know what that was all about. The point is that even as recently as a few years ago, there was a lot of extra groundwork that went into solving problems that are incredibly easy to solve today; that ease seemed to come out of nowhere. That XKCD comic is a perfect example: it jokes that if you want to detect whether a picture has a bird in it, you need a research team and five years. But detecting that a bird chirped in a video, that's easy. Problems that once were hard to solve are now easy to solve.
For example, a lot of it was completion work: here are two examples of a marketing blurb that we provide; give us two more in that same vein. Distributing the mental load of those exercises to expand the marketing capabilities of the organization I was a part of would now be absolutely trivial, right?
You would never need to even think about building that stuff, because you can feed in your stuff and get out more stuff now. That's a solved problem. Pretty much no muss, no fuss, your computer's performance notwithstanding.
Now, to tie this back to what we were talking about, right? People with bullshit jobs who had that bullshit job because it was a hard problem to solve, such as quote-unquote data entry.
If your job was, for example, to classify something, this is a picture of an apple, this is a picture of a dog, obviously you were long since replaced. But you may be in the camp that really, really loved that exercise of identifying whether it's an apple or a dog or whatever it happens to be. You can absolutely empathize with the person who is sad that that doesn't get to be their livelihood anymore.
And it's true that as models increase in capability, people who thought that they were safe, that they had a moat, are now being sort of relegated to a separate category of product. For example, you could imagine that in the future there will be much more emphasis placed on whether or not something was human-crafted or not.
And if something is human-crafted, that will, in theory, you know, elevate its status, elevate its value. And there will be an entire class of human-created products that have that higher premium because it is bringing something to the table that only a human can.
It's bringing a point of view that only a human can, which I think is going to be something evergreen: that concept and that ability that humans uniquely bring, to put that pattern-matching brain to work, to take inputs of your own, to not rely just on the LLM, to be able to do it yourself.
And if you have, if you have a sort of keen mind, presumably, if you are given like an infrastructure of intelligence that you can use to amplify your own thinking in this way, like you've gone through and kind of validated and proved out and built out an LLM setup that you know is going to be able to amplify your thinking, and that you've kind of gone end-to-end to understand it's not going to, you know, behave in unexpected ways or sort of subtly and insidiously influence your thinking in a way that you would not be able to sort of suss out, sniff out, and identify, and be able to action that on the back end. And again, this is where we are talking about a lot of legwork being needed between now and when people that are power users today will the behavior and capabilities that they exhibit will be within reach of the layman user in the way that the, for example, iPhone made that sort of on-the-go computing in a way that was incredibly difficult, clutchy, but you could, I guess, kind of sort of make it work with like a Palm Pilot or like an Apple Newton or whatever the fuck, right?
So BlackBerry even could be an example there. But again, you need to be able to bring that point of view, and we're probably a long way away from the level of LLM technology available to the typical organization that would be pitching and developing this kind of a solution. It's going to be a long time before that class of technology is basically up to snuff, able to end-to-end replace the human point of view. I think people are overestimating the AI's ability to fill in a lot of the gaps when it comes to the subjective human experience.
I think, rightly and understandably, there is a whole class of signal, a whole class of data, that is difficult to train large language models on when you're talking about corpus ingestion. There's something uniquely human about being human, and LLMs are a pale echo and copy of what it is to be human; they are never truly grounded in the ground-truth reality of the human experience.
They are only ever weighted on the model of that human experience that can be derived and inferred from what is ingested in the course of training. So again, I think you're safe if you have this ability.
Right now, if you are somebody who has that domain-level expertise in one hand and the technical knowledge in the other, and you can scaffold out and build, especially if you can use AI to build AI, to deploy AI, to do AI, blah, blah, blah, right?
That is big fucking money right now. I can't get enough right now. I'm up to my absolute eyeballs in this stuff, and I'm still crafting new workflows every day that are giving me a leg up and like kind of offloading the sort of mental overhead of managing so many projects and obviously automating as much work as possible within sort of the frameworks and testing that works for me.
This technology is so, so Wild West that you really do need to sit there and not only inform your own understanding and capability with the solutions, but also make them work in whatever way is going to be strongest and fastest for you.
And there, again, there's so much of a spread of different ways to be using this technology in the course of your work. And I think there's a spectrum of different levels of power usage, too.
There's, you know, casual ChatGPT users, all the way up to maybe you have a simple orchestration setup, to maybe you're a full engineer who is deploying these things in new and novel ways, maybe seeing some new emergent effects and capabilities that previous AI models were not capable of. We need to get through that moment, and it remains to be seen how long that moment is going to take.
I would estimate at least one to two years, and I think that could be potentially very optimistic, before we see a major resettling, a little bit of a plateau, a breather and a breath being taken in the overall hype cycle before we keep on going. But what you saw most recently was people asking: is there going to be an AI winter?
Things are slowing down, there doesn't seem to be a meaningful or significant increase in the overall capabilities. We're sort of making these incremental gains on these benchmarks that seem to have dubious meaning, dubious significance, at least not sort of consistently across the lot of them.
And then, boom, all of a sudden, each and every new model that came out was boasting about its agentic tool use, its long-running task capabilities, all these areas where the engineers have gone in and absolutely mega-minded, brain-blasted the AI model in that particular area so that it is particularly skilled, well-versed, well-tested, Doctor Strange style: I've lived a hundred million lives.
I've found the outcome where I'm a god at using tools and a god at adhering to what I need to do over the course of even 30, 40, 50 minutes of just running queries and working on stuff. It will never, in the course of those 50 minutes, get off track in a way that disrupts the trajectory toward the overall goal.
You can sit there and leave these things running now. That is what is new, that is what has changed.
That wasn't possible before; it is possible now. And I think everyone is scrambling: they're realizing what they can do for themselves, what they can do for their business, what this means for the future. It basically means that the AI winter is cancelled, because an entire new class of solution exploration has been enabled. Most recently I read that software engineering jobs, especially those related to the implementation and rollout of AI, are absolutely spiking right now, because, guess what? When everybody is a 10x developer, it's not that everything's covered and we're all happy with the progress being made now. It's: we want more, more, more.
Because what happens when an organization that previously got us maybe a net 10x output, where each and every member is now up on that exponential high ground, starts getting a thousand, ten thousand, a million X output? There are going to be things we're capable of that are just unlike anything we've ever seen before.
So it's this Wild West, this absolute menagerie of solution crafting and exploring what the model can do. I can't believe it. And like pondering your orb, right?
Like, just the amount of things I had to think about and keep in my head to context-switch between projects the way that I do today, across many, many massive different context areas. It's all about context. And the AI makes it super seamless to manage all that.
And we are rapidly moving towards a completely new way to use the computer, one that abstracts away every piece of mental lift you ever previously had to do and gives you whatever interface you want. That could be a smartphone chat, that could be chatting with Jarvis, that could be, in the future, some sort of Neuralink giving you direct access. And you can start to see the promise of the prediction Elon made that we were moving towards this, because even just the tools and technology we have today are digital superpowers alone, if you know how to leverage and implement them. In the future, it seems like if the absolute layman, the simple ground-level person who does not interact with technology at all, would ever hope to reach a level of power usage, we have a lot of innovation to do. I would hope that we wouldn't need to resort to telling those people they need to get a brain chip installed to enjoy the full breadth and width of it, but you better believe there will be a breed of power user that progresses on to that point.
And just like Cyberpunk 2077 foresaw, if there is a kind of ground-level majority of people who are having a good time with that in general and can kind of show a good result, then what's going to happen is more people will enter the camp, right? More people will want to be a part of that and will be getting cybernetic implants, basically.
And I think, obviously, long term, that is inevitably going to be the direction that everyone goes, right? Like, there's going to be no reason not to do it when you have beautiful little nanobots that can run through your bloodstream and kind of eat up cancer in a very targeted fashion.
There's going to be nobody who rejects that treatment. There's going to be nobody who, you know, maybe gets as far as like, oops, diabetes, got to get my foot off, and then who doesn't get a robot foot, right, with full sort of flexi mobility or whatever the fuck it is, you know, whatever you want to do in the future, you're going to probably be able to do that even if you want to just like pilot an Iron Man suit around, but you're a paraplegic, right?
There's going to be some interface for you, some very well-tested system. And we're going to get, I think, to this point of proving out a beautiful future, to a point where we can consciously say: we've got to expand Earth's population. We're going to be past this point of whipsawing around, at the fucking mercy of how much people are fucking, how many condoms they have, whatever the fuck, man. What a fucked-up situation for us.
But I think, again, to answer that, I think here is AI, right? Not to be someone who's trying to prescribe AI as the solution to everything, but it fucking literally is, dude.
At a certain point, it literally does have the potential, because it's not just a hammer in search of everything being a nail. It's a fucking thing that can transform into any object. It's not just a hammer.
That's what we're talking about. So, yeah, when you go into a problem area, a space with problems to solve, you're going to be able to do it with this fucking magical, impossible tool that transforms into whatever you need it to be, right?
That's what we're dealing with. That's what I think people need to keep in mind with that stuff, right? But this is the kickoff point.
This is this is the moment where we can't go back. There's no turning around saying, all right, you know, let's pause, let's stop here.
So I think we can expect to see, at the very least, a short-term, second-order exponential boost, where maybe it even equals out, right? Like, all the bullshit jobs are shaken out, if there are any data entry specialists left, or people whose job it is to receive an email and then send that email to somebody else, right?
Like, we're talking about tasks more than jobs. And I think that's language that was used early on in the AI discussion, some years ago.
But it still holds true because, guess what? Anybody who is not just sitting there doing a bullshit job is going to simply be given AI to amplify their own impact and efforts.
It's not going to be this eradication because, again, as you're seeing in software engineering, the more you can do, the more demand there will be. There is no shortage of work. Work is not a physical consumable.
It is not how much food or water we have. Work is a concept, an idea that can shrink and expand and does very regularly.
So there's no fixed quota. It's like files on your computer: I can take one big-ass file and duplicate it until your hard drive is full, just because I chose to. That's what work is.
Work is however little or much you want it to be. And again, you're starting to see that principle at play with LLMs, where there's a lot more focus on getting the LLM to output some hot bars, to spit some straight fire, because obviously you can have it sit there and spin its wheels and burn its GPU out outputting shit-ass garbage tokens nobody's ever going to read.
That's the worst, by the way: when the tokens are garbage, but nobody's reading them anyway. What a waste. That right there is the scourge of the token world. And as developers, it's our responsibility to make sure that shit doesn't happen. But I think that's what you're going to see, right?
Like, if there is, you know, beautiful progress to be had, money to be made, it's going to go in that direction. And people who are sort of crucial to an operation and can help make money, they will still be in that role.
The task makeup will be different, because obviously a lot of the grunt work will be taken over by AI. But for God's sake, people, I want to make this perfectly clear: the thing AI is uniquely suited for is the areas where human beings have to act like machines, and that's not reasonable, in my opinion. All throughout history, that has been the norm; only recently has there been this exception, this ability to escape the nightmarish drudgery of what it took to run a society before the proliferation of this technology, everything that people had to do and endure and suffer through as a part of all that. We have been on a long-tail trajectory of simplification, automation, streamlining, refinement, and iteration, where lessons learned are rolled into the systems of tomorrow, and we've slowly been able to alleviate that tedium, that drudgery. But there seems to have been no shortage, up until this point, of these bullshit jobs, where, yeah, you may have been happy to have a job that paid any amount of money at all. Perhaps it even goes as high as the entire job disappearing, not just a shifting of the task makeup. If that happens, that kind of job was not going to serve humanity anyway, if we're talking about a reasonable, proper, right, just, and true application of the available human resources on the planet Earth. Those are the bullshit jobs in the line of fire of AI. And I'm going to draw a distinction here about whether your entire job role is genuinely under fire: I'm not referring to situations where you might just have some greedy, greedy fuckwit as a boss, who doesn't even care whether AI has the power to replace anybody; he or she simply wants to move forward and do it, because maybe they can downsize and look sexy to investors at the same time.
I mean, look, we live in the real world. The real world is full of scammy, smarmy, terrible, awful, dumb shit.
But with AI, the job makeup, the overall environment, can now shift. This is our best shot. We have slowly increased the quality of life of the average person.
Again, at least in the first world; this is not universally true, but I think this rising tide of AI will lift all ships. But listen to me, okay? Just like we raised the quality of life from a nightmarish endurance of hellish labor and toil for little gain, for bare subsistence survival, we have elevated.
We have elevated what it means to be alive on this planet, and that elevation will continue. We will see to it. And now, the next stop is going to be a situation where the only jobs left are these incredibly high-agency, high-effectiveness, high-influence jobs.
Where if you want to do something, you're not just going to sort of do it, you're going to do the hell out of it. You're going to do it with the power of a thousand human beings.
You're going to do it and do it until it's fully done. There's no more of this sort of ground-level.
Let me break off a little piece of this overall construction site or masonry work, and I'll just start seeing where I can chip away and make a little bit of progress. Instead, it's going to be high-level architecture and deployment and risk management, this incredible orchestration of agentic swarms and intelligence loops: filling out the development of code, facilitating the running of tests, conducting research, building an internal working knowledge base for a given purpose and problem, cross-integrating with tools and calling functions.
Those are going to be the raw materials of the employees of tomorrow. They will be conductors of symphonies beyond the mortal understanding of the average person today.
And again, to be very clear, it will be the exact same person. They will simply be doing things to greater effect, operating on a higher executive plane with regard to whatever their domain expertise was prior.
They will simply be plussed by AI. Effectively, every person will be given their own team, right?
Like, this will be one very typical, I think, reasonable response from the business world to AI, right? The idea that everything remains the same, folks: hierarchy unchanged, no major shake-up here; everyone just gets AI, and you act as though you were given a team under you.
This is going to be huge, I think. Mark my words, right? You will be given a team under you, and this team that works under you is going to basically be responsible for upholding your job duties.
So, whatever you need to flesh out with that regard, whatever knowledge base you need to build for yourself with that regard, whatever fact-finding missions and research and insight analysis needs to be conducted as a part of that, that's what you're going to do on the employee level. That's what you're going to be responsible for managing.
So, yeah, you're not going to be flipping through emails; you're not going to be doing the low-level stuff, not even working with files directly.
You're not going to be doing that stuff anymore. You're going to be working with the higher level, you're going to be working with context windows, with code bases, with containers, with, let's see, with high-level roll-ups and summarization of activity that's taken place.
And obviously, your ability to zoom in on the evidentiary roll-up, to zoom in on the chain, is going to be pretty important to your ability to be effective; otherwise, you're going to fall victim to hallucination, vibe counting, and other such bullshit and nonsense. But it's your responsibility to be aware of those blind spots, right? I shouldn't say as a technologist; I should say as a professional.
Any professional is responsible for the quality of the output of their work, and we aren't going into a world where AI is so, so right and so, so correct in all things. I really hope we get there someday because that would be super neat, but it's not the world we live in right now.
We are in a world where insert-AI-chatbot-here can make mistakes, so double-check your work, right? You've read that, you've seen that a million times. And why is that? It's because we don't have that problem solved. Our models will make up bullshit. They will do their best to fulfill our wishes, and sometimes the easiest path to doing that is to output some nonsense that sounds super strong but is either unrelated or outright misleading. Both of those are generally unacceptable to many workflows; or rather, they make impossible the level of fully autonomous workflow that is really going to kick things off, make things super spicy, and make it obvious to the average person that, like, I need an AI agent that's always running. And that's going to be it, right? As soon as we cross that threshold, that's going to be it.
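That "zoom in on the evidentiary chain" idea can be sketched in a few lines. This is purely illustrative, not the author's actual tooling: the `Rollup` class and its fields are hypothetical names, and real systems would match claims far more carefully than a substring check. The point is just that a summary is only as trustworthy as the raw records you can trace it back to.

```python
from dataclasses import dataclass, field

@dataclass
class Rollup:
    """A summary that keeps pointers back to the raw evidence it claims to cover."""
    summary: str
    evidence: list = field(default_factory=list)

    def backs(self, claim: str) -> bool:
        # A claim is only trusted if some raw evidence line actually mentions it;
        # anything else gets flagged for human review instead of being believed.
        return any(claim.lower() in line.lower() for line in self.evidence)

r = Rollup(
    summary="Tests pass; deploy script updated",
    evidence=["pytest: 42 passed", "edited deploy.sh to add retries"],
)
print(r.backs("deploy.sh"))           # True: traceable to a raw log line
print(r.backs("database migration"))  # False: not in the chain, treat as suspect
```

The design choice is the `evidence` list itself: keeping the raw lines attached to the summary is what makes "double-check your work" a cheap operation rather than an archaeology project.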
So, to bring this home and wrap up my thoughts: I understand I'm speaking from a place of privilege, as somebody who understands AI very deeply on a technical level and who's very enthusiastic about it, but I want to use that place of privilege to advocate that the average person needs to get in on this shit right now. I feel like I'm at the beginning of the dot-com bubble, going: yo, hey, average person, you need to get in on this internet shit.
Like, I know you saw Jerry Seinfeld's computer in the corner there on the set of Seinfeld, and I know you're curious about AOL, America Online; you want to get online. So that's what I feel like I am right now, and I feel like I'm a little bit ahead of the curve here, saying something that's eventually going to go without saying.
I don't necessarily need to be the one to say it, but I have so much to say that I want to be at least someone saying something, right? That's part of what I have in mind here.
But I think, as has always been the case, there will be some short-term shuffling around, perhaps a scramble to figure things out, and then a more comfortable resettling into a new norm. That's what I would expect in the job market, where even if you are someone who's a little bit older, the tools and solutions and workplace software will be such that even these individuals will be empowered despite lacking the technical-knowledge half of the equation, which I think can be solved at the application layer, and by creating more informed Americans today through stuff like this.
And I think we're going to see that, just like we made a world where retirees can come back, rejoin the workforce, and get a job somewhere in a role that requires them to use the internet and such. We're starting to figure this stuff out, and there's going to be a new potential foundation for an interface in the future where we really can get the best of that wisdom and match it to where it can have an impact, for some of these people who are on the older side and maybe aren't super comfortable with the latest technology, right?
It's not to say that they don't have a contribution to make; with AI, the promise is there to translate that contribution into a form compatible with the inputs necessary, where it counts, right? So I think anything is possible, and the future, again, the future is bright.
I believe that, just like we've been elevated from where we came from to a higher quality of living today, the next elevation we're due for is one where you have two kinds of people, and I was getting at this: the kind of very high-agency person who's wielding a data center in the palm of their hand, right?
And then there's going to be a second type of person: somebody who wishes not to labor, who wishes instead to devote their full focus and passion to the human experience. To being social, to loving and being loved, to being a friend, a family member, to caring for one another, to furthering the human experience rather than strictly furthering technical achievement, right? Because I do not believe it is worth entertaining the notion of a large-scale subjugation, a forcing of individuals who do not wish to labor to contribute, in a world where you can simply scale up.
If you have two people who want to get a job done, and between them they have access to 100 AI agents and 100 robots, then if you remove one of those two people, the remaining person has simply gone from having fifty AI agents and fifty robots to a hundred of each, right? To paint it in that sort of picture and metaphor for you.
And I think there's a lot of power in that. I think that basically gives us the stepping stone and launch pad we need to take care of these people who can act as the sort of philosophical backbone.
They can be the half of humanity that can be concerned with wondering about whether we should, rather than the other half of humanity who is busy worrying about whether we could, and then, you know, obviously doing so. I think our society has been orienting towards this kind of a structure for a long time.
I think if we can open-handedly embrace this idea, and it's controversial to even say it, but I think we need to face this fact: we're about to enter a world where people don't need to work if they don't want to. That's just how it is.
It's based on sort of the mathematics that I just gave you, right? Like, it doesn't matter if you're in the workforce, the same level of force multiplication will be there with or without you now, because we have the equivalent of digital people to call upon on tap, full access, right?
That same level of resource will be available for someone else's utilization, right? So you can get in there and have an influence, but that influence is your input, no longer the creation of the work taking place.
You're simply changing what work is happening, and that has value, obviously. The contribution of your rare signal is really what it's going to mean to be a quote-unquote employee in the next 20, 30, 40, 50 years, right? So that's, I think, some of what we're looking at in terms of a transitional experience for us here, and for our kids.
And, you know, it gives me hope, makes me happy. Obviously, World Wars Three and Four notwithstanding, because if you throw that monkey wrench into the mix, we're going to have to go ahead and rejigger some of those timelines. Okay, so yeah, good talk.
Chapter 14 — AI-Native Engineering Practice
AI-native engineering practice
This chapter distills what modern coding-agent fluency looks like when stripped of literal interview preparation. The emphasis is on judgment, orchestration, and knowing where agents accelerate the work versus where human review still sets the standard.
Feb 9, 2026 at 12:41 PM
Confident, technical, and more disciplined than the raw dictation.
The real skill is not prompting theatrics but operational judgment over a changing toolchain.
AI-native engineering means composing tools, validating outputs, and keeping review loops tight.
The public edition converts a job-pitch posture into a reusable statement of craft.
And, you know, by the way, one thing I think could be super strong for me in the interview is to say that I am totally happy to share how I've been using tools like Cursor, Codex, and Antigravity.
You know, obviously, Cursor was the first one on the scene to really make waves, and the floodgates were open after that. But I have a foundation of close to, if not in excess of, two years of my own usage of these tools, across my own personal projects and my interaction with open source projects.
So I think it's to the point where I'm absolutely happy to share that stuff. It's so funny to think we came from a world, 12 or 24 short months ago, where there was a lot of murmuring, especially in the world of software engineering, about being ashamed to use these tools.
And I mean, look, to a certain extent, you can't argue with that. There is some version of all this parallel work that you could be getting done.
You know, the trick obviously lies in actually parallelizing the work, and not, for example, spinning out five, six, seven agents to get done something that one agent could do much more efficiently and quickly if it didn't have to contend with the additional complexity of spidering the work out across multiple agents. A much better approach is to have one agent and enable it to use sub-agents, in the way that large language models can nowadays if you're using tools like Cursor, Codex, and Antigravity. And it's honestly great.
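The shape of that pattern, one orchestrator that delegates only when a task genuinely splits, can be sketched as a toy. This is not any vendor's actual API: the `Agent` class is hypothetical, and the semicolon-splitting rule is a stand-in for the model's own judgment about what is independent work.

```python
from dataclasses import dataclass, field

@dataclass
class Agent:
    """A toy agent: a lead agent may delegate pieces of a task to sub-agents."""
    name: str
    subagents: list = field(default_factory=list)

    def run(self, task: str) -> str:
        # Stand-in heuristic: treat semicolon-separated pieces as
        # independent work worth delegating.
        parts = [p.strip() for p in task.split(";") if p.strip()]
        if len(parts) > 1 and self.subagents:
            # One orchestrator fans out; zip() truncates if there are more
            # parts than sub-agents (a real system would queue the rest).
            results = [sub.run(part) for part, sub in zip(parts, self.subagents)]
            return " | ".join(results)
        # A single cohesive task is cheaper to do directly than to split up.
        return f"{self.name} handled: {task}"

lead = Agent("lead", subagents=[Agent("sub-1"), Agent("sub-2")])
print(lead.run("write tests; update docs"))
print(lead.run("refactor the parser"))
```

The design point is in the `if`: fan-out is the exception, not the default, which is exactly the difference between parallelizing work and merely multiplying agents.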
Cursor has their Composer model, which is, I think, possibly the fastest way to write code with AI: sitting there in Cursor using their in-house Composer model. They've got some tricks up their sleeve, obviously, and it's a joy to use.
But all that to say, I think once you bring me in, you're going to see that I'm someone who can bring that level of training and clarification around enterprise usage of these tools, for anyone still shuffling along, coming from a background where maybe they felt a little bit of shame at these technical organizations, where you have some of these purists and some of these Luddites, right? Now, obviously, I'm not about to come out and say that everyone should be vibe coding, that nobody should know anything about the code coming out of their coding assistant, and that we should just ship a bunch of slop.
No, you're still as responsible for your code as you ever were. And I think that's what purists are concerned about: the bar will be lowered.
And I think at proper technical organizations, there will be a period of adaptation, there will be a period of course correction, and it'll be right back on track. Different ballgame, right? Different class of solution, different class of individual that is required.
You can put the tool into the hands of anyone. I can put the tool into the hands of my partner, and I actually am reasonably confident that she would be able to create a pretty spot-on Spotify clone, for example, if she wanted to: a very high-fidelity front-end clone.
And that's now in the repertoire of my partner's toolkit. She can do that if she wants to. But say, for example, I told her to go ahead and deploy that Spotify clone, operationalize it fully end-to-end, get it up and running so we can start signing up users: rolling out the infrastructure in a compliant way, managing the risks along the way, managing the budget and the cloud spend, adding additional features that people might be requesting, those kinds of aspects.
She wouldn't have the first clue, obviously, and the AI would do its absolute darndest to explain it to her, but at a certain point it would be Greek to her, because the fundamentals were not in place. So, as technology organizations, it's our responsibility to make sure we are not encouraging the Louis Slotins of the world to tickle the dragon's tail: to set up a criticality experiment with a screwdriver and show a bunch of friends because he thinks it's cool, right there in a production environment where the full level of caution really was required, as became very evident to everyone in the room in the particular example and analogy I'm painting for you here.
But that's a little bit of an interesting piece I wanted to speak to there. I'm somebody who can come in and share my chats over that period of time, or I can even take situations where maybe the context is a little bit anonymized, or fit it into a particular, what would you call it, a little field book where I'm just jotting down everything with fresh eyes, right? These are the kinds of things you look back on three, six, nine months later and beyond, and you're like, oh, how quaint! How quaint was all this?
How quaint was I to view this like this, when it was really this other way? I love, love, love to write that out and have it be the foundation from which I spider out and start to align my understanding and my situational awareness with the contemporaneous, present-day needs of the business du jour, right?
Like, what is it that we need to be pushing the needle on? Where do we need to make sure the solutions are getting that rubber meeting the road, and that we're not just burning tokens to burn them, right?
Because you can do that all day long. We can do that all day long with AI girlfriends, but that's not what we want to spend tokens on, right?
Like, we want to advance the human race. That's the mission. That's what's important for [potential_employer].
And that's what's important for me. So, again, yeah, for me, I'm happy to be that kind of person with the expertise of enterprise usage of these tools.
I'm happy to lend the organization the model of myself as the sort of curious principal software engineer who brings the mindset, the agency, and the executive thought process that I use to ship some of these enterprise features, and you're going to be able to see that in the digital footprint that I have in terms of these tools. I, of course, have my own system that integrates, ingests, and collates all of the logs and chats from these various tools. It rolls them up and runs them through my own local cluster, creating prompts to paste into other coding agents, tracking the to-do items I might have left on the table so nothing falls through the cracks, and turning my long, rambly rants like this one here into action after the fact. Because otherwise, you know, I've spent a lot of my life kind of rambling into the void.
So, this ability to kind of capture things, follow it up with AI, and make sure that it's always plugging along, even when I'm sleeping. That's uh, that's it, right? Like, the always on, always watching. Let's get it.
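As an aside, the kind of log-collation loop described here can be sketched in a few lines. This is a hypothetical illustration, not the actual system: the one-JSON-file-per-session layout, the `messages` and `content` field names, and the `TODO:` line heuristic are all invented for the example.

```python
import json
from pathlib import Path

def collate_todos(log_dir):
    """Scan exported chat logs (assumed: one JSON file per session) for action items."""
    todos = []
    for path in sorted(Path(log_dir).glob("*.json")):
        session = json.loads(path.read_text())
        for message in session.get("messages", []):
            for line in message.get("content", "").splitlines():
                # Crude heuristic: lines flagged as follow-ups during the session.
                if line.strip().lower().startswith(("todo:", "follow up:")):
                    todos.append({"source": path.name, "item": line.strip()})
    return todos

def as_agent_prompt(todos):
    """Roll the queue up into a single prompt to paste into a coding agent."""
    lines = ["Here are outstanding items from my recent sessions:"]
    lines += [f"- {t['item']} (from {t['source']})" for t in todos]
    lines.append("Pick up each item and report what you did.")
    return "\n".join(lines)
```

A real pipeline would feed the collated items through a model for deduplication and prioritization; the point of the sketch is only the capture-then-route shape of the loop.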
Chapter 15 — Internal Tools As Organizational Leverage
Internal tools as organizational leverage
This chapter has been recast from company-specific interview preparation into a broader argument that internal tools are where modest technical wins compound into major organizational leverage. The public edition removes recruiter names, role details, and employer-specific proof points while keeping the internal-tools thesis intact.
Feb 9, 2026 at 1:13 PM
Confident, evaluative, and less exposed than the raw rehearsal.
The chapter argues that internal tools quietly drive disproportionate value when they reduce friction across a whole organization.
It works better as a generalized platform argument than as a record of one private hiring process.
The edited version preserves continuity of experience without publishing other people's context.
Chapter 16 — Production Implementation Experience
Production implementation experience
This chapter is about implementing AI inside real workflows and production environments. The public edition preserves the proof of execution while abstracting named employers, internal products, and proprietary stack details that do not belong in a public sourcebook.
Feb 9, 2026 at 1:35 PM
Operational, credible, and grounded in delivery rather than branding.
The point is practical implementation, not affiliation with a particular employer or product name.
The chapter argues that AI systems become meaningful only when they survive contact with real organizational constraints.
The public edition keeps the evidence of capability while removing proprietary texture.
One sec. Okay, I guess we'll break this off and go to a new one because we're already in continuation here. Alright, so picking up from where we left off, I just rambled and ranted a lot about my experience designing and building internal software platforms and workflow tools. The next one down is developing AI-powered automations and assistants.
So obviously, here we can speak to the fact that, in my capacity as our lead AI solutions architect, I was embedded with the business intelligence team, and they basically were in charge of our Snowflake data lake, right? Like the accuracy and overall maintenance, roll-up reporting, things like that.
And part of what they needed to loop me in on was the implementation and rollout of Snowflake Cortex. So obviously, I have the experience of implementing those kinds of first-party tools as they become available, because, just like every company became an internet company in the end, every company is going to become an AI company in the end. Which means every company is going to have some kind of a chat interface, some kind of an automated self-serve-plus experience, right?
Where you're not just talking to like an AOL chatbot, and you're not just hitting a touch screen and going through an app flow. You're actually communicating with something that resembles what you would expect from a human-level interaction and experience, right?
And as we lean more into automation and lay the groundwork here for what AI in America looks like, that really is more and more what we're seeing. That's one of the major areas of the gold rush: can you get in there and translate a particular capability of AI to serve a particular need in a particular problem space? It's not enough that you have a proof of concept where you go to ChatGPT and paste these things in just right. You need to take it a step further and say, all right, what about for mom and dad?
Mom and dad are not going to sit there and put on their matrix glasses and do some deep prompt engineering. Even so, I think some of our moms and dads will do that, and that's amazing and commendable.
And they absolutely can get that same early adopter, first mover benefit that the rest of us, I think, will enjoy. But I think we got to understand that we represent a very small minority of people, I think, who can intuit this technology much more easily because we're obviously surrounded by our echo chambers of people.
Our opinions are formed around some of what other people are saying about this stuff and just, you know, a thought-provoking tweet here or a post there can mean that you kind of walk away a little bit shell-shocked and with a new point of view on something as you run that through your own tensors, right? As you run that through your own sort of vector database to arrive at the output that you're going to produce into this world as a learning computer known as a human being, right?
So you need to bring to bear your ability to uniquely understand the problem that you're trying to solve and the other people who share that problem with you, and understand that, just based on the mathematics and statistics of the makeup and diaspora of human beings, odds are the people who share that problem with you are not going to have the same technical ability to pursue a very technically complex solution if all you have is some kind of bare-bones blueprint scaffolding. The gold rush right now is in being able to synthesize that down to pure product, and that is why I'm uniquely right for this role at [potential_employer], because that's basically what I've been doing for the past 25 years or more.
You know, like as a kid, I was working on video games. I somehow got caught up in becoming a staff member on MMORPGs when I was nine years old because I could, again, I could write scripts to make cool things happen for the player base.
And it's kind of gone on from there. And obviously, naturally, the web is the king of all platforms, and there's no reason.
I think, you know, it's obviously the ultimate sort of event horizon where your solutions can very easily get in the hands of your people, and distribution is completely, completely solved on the web as a platform. That's why it's obviously my favorite; it's everyone's favorite.
And, you know, it's hilarious. All the memes about "just rewrite it in Rust, bro" just sort of fall on deaf ears over in the web community because it's like, dude, we don't write C applications. I'm sorry.
It very well could be that someday one of our crazy ideas can't seem to be properly load balanced or deployed in a distributed web context, because, again, that's the superpower we have as web developers, unlike the integrated systems engineers and native application developers who would be interested in the cult of performance that Rust offers. It's just, you know, like we don't write C.
We write Python and JavaScript, dude. Come on, like, be real. Anyway, picking it back up with [person]'s email here: absolutely, I have worked with the BI team to roll out Snowflake Cortex and Snowflake Intelligence, with the idea that our executive team is going to use that as an entry point, or as one of the tools in their toolkit, to go from prompt to insight in a very highly accurate, evidentiary, roll-up sort of a way, and not be questioning whether the AI got it right.
That's what our executives really liked about Snowflake Cortex: that it was obviously dialed right into the source of truth. And then, of course, in [product_that_i_built], as a part of my insurance agency ops enablement, I built basically a chat interface to be able to kick off and trigger and manage these sort of agentic swarms that would go in and analyze calls that fit a certain bucket.
Like if you wanted to know more about the sales that happened last week, or rather, even the sales that didn't happen last week and why those might have failed, right? That was a big kind of focal point for us.
For that, I actually had developed a completely hand-rolled approach to managing the multi-turn aspect of chatting with our implementation of the Gemini API. And that went on to become how most people interact with that workbook functionality today.
If they want to go in and just say, like, hey, tell me more about calls that have some element of agent unprofessionalism, there is a sort of interstitial harness there that I've implemented that gives the model the knowledge and function-calling capability to translate that very broad, vague request into the ground-level, bespoke needs of our people. And of course, when we say, you know, give me all billable calls, well, Gemini 3.1 has no idea what a billable call is; it's going to guess.
And those are the kind of landmines that you need to be aware of, the kind that can blow things up for you if you just deploy it and call it done. And, you know, God forbid, maybe you built something where you're just vibe coding it out and you don't even know what the prompt contract is. You don't know your inputs, your prompts, and your outputs, or why you're getting those outputs. That is really a recipe for a complete slop, useless tool that, at best, just looks confusing and looks like it's outputting nonsense, and at worst is outputting insidiously convincing, misleading misinformation about the data because of some mess-up along the way, some context rot or some poisoning of the context.
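The "billable call" problem is essentially a grounding problem, and the pre-model half of such a harness can be sketched without any model call at all. Everything here is illustrative rather than the real system: the glossary term, its definition, and the filter field names are invented, and a real harness would pass the structured filters on as function-calling tool arguments.

```python
# Hypothetical sketch: pin business vocabulary to explicit definitions before a
# request ever reaches the model, so the model never has to guess what a term
# like "billable call" means. Glossary contents are invented for illustration.
GLOSSARY = {
    "billable call": {
        "definition": ("A connected call lasting at least 120 seconds "
                       "with a qualified lead on the line."),
        "filter": {"status": "connected", "min_duration_s": 120,
                   "lead_qualified": True},
    },
}

def ground_request(user_request):
    """Attach concrete definitions for any glossary terms the request uses."""
    matched = {term: entry for term, entry in GLOSSARY.items()
               if term in user_request.lower()}
    system_context = "\n".join(
        f"'{term}' means: {entry['definition']}"
        for term, entry in matched.items())
    # system_context goes into the prompt; filters become tool-call arguments.
    return {"system_context": system_context,
            "filters": [e["filter"] for e in matched.values()]}
```

The design point is that the definition lives in one auditable place instead of being re-guessed by the model on every request.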
But that is, you know, obviously lots and lots of experience developing automations and assistants, and how noodly that can be. And in fact, the state of the art of implementing these systems, just to get it out there, is such that you can, again, let the AI do it.
You can tell the AI how you want your AI to behave, and then your AI will work on the AI and make that AI behave and run tests against it and run prompts against it, and it will prompt the AI as a sub-agent, for example, right? Like similar kind of a principle there.
And it will, you know, it will keep going around and around until it's able to develop some kind of a prompt contract that is strong enough to create the consistent level of output that you need. And then you can come back around and say, hey, here are some additional kind of testing surfaces.
You can grab a group of beta users and invite them in and say, go in here, try to break it. If you can, just click the thumbs down and explain what you believe should have happened here.
It's a fully enclosed loop, right? So you don't actually need to sit in the middle of every iteration yourself.
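That closed loop can be sketched as a plain iteration: generate, score against the contract, tighten, repeat. Both the "model" and the prompt-tightening step below are deterministic stand-ins (a real loop would call an LLM for both), so this only shows the control flow, not a production harness.

```python
def fake_model(system_prompt, user_prompt):
    """Stand-in for a real LLM call; behaves only if the prompt is strict enough."""
    if "ONLY valid JSON" in system_prompt:
        return '{"answer": "..."}'
    return "Sure! Here is some chatty text."

def meets_contract(output):
    """The prompt contract for this sketch: every response must be a JSON object."""
    stripped = output.strip()
    return stripped.startswith("{") and stripped.endswith("}")

def harden_prompt(base_prompt, test_prompts, max_rounds=10):
    """Iterate until every test prompt produces contract-conforming output."""
    prompt = base_prompt
    for _ in range(max_rounds):
        failures = [t for t in test_prompts
                    if not meets_contract(fake_model(prompt, t))]
        if not failures:
            return prompt  # contract holds across the whole test surface
        # In the real loop an AI rewrites the prompt; here we just append a rule.
        prompt += "\nRespond with ONLY valid JSON."
    raise RuntimeError("Could not reach a stable prompt contract.")
```

The beta-tester thumbs-down flow mentioned above would feed new `test_prompts` into this same loop, so failures discovered in the field become permanent regression checks.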
And I remember the early days of deploying these chatbots. I've been doing this for a long time because, again, chatbots have been on the market for probably 10 or 15 years now: this idea that on your website you'll have a little widget, a quote-unquote intelligent AI assistant that can help you in the context of the website's service or whatever.
Those have been out there, Intercom, you know, those have been out there, but this is something new, obviously. This is a new higher level of capability here.
And not that long ago, we would have to sit there and do a lot of if-this-then-that. And it was dreadful. It was awful, because you would go in to try to get everything working just right, and then users would use it, and then you would wake up the next day to a big pile of unexpected, undesired behaviors. Then you would iron those out, and again it would spawn this ongoing, never-ending drift of issues and unexpected areas, where eventually you could hope to end up with something that is maybe 60, 70, 80 percent effective and accurate according to what it is that you're trying to get done, right?
So that is the big kind of humdrum, hullabaloo, straight dope about AI-powered automations and assistants. And I think I have to break it off there because this is running long again.
Chapter 17 — Leadership Fit In AI-First Teams
Leadership fit in AI-first teams
This chapter reframes a live interview-preparation session as a broader reflection on leadership fit inside AI-first teams. What remains is the question of how technical leadership, communication, and execution standards change when agentic tools become normal.
Feb 9, 2026 at 2:20 PM
Evaluative, strategic, and more controlled than the original rehearsal.
AI-first teams need leaders who can translate intuition into operating discipline.
The chapter is less about self-promotion than about the shape of leadership in the new workflow.
Named people and company-specific context have been removed so the argument can stand on its own.
Chapter 18 — Team Lead Versus Individual Leverage
Team lead versus individual leverage
This chapter turns an interview-flavored leadership discussion into a general argument about where leverage lives in AI-native organizations. It weighs mentorship, coordination, and architecture against the enduring value of very high-output individual execution.
Feb 9, 2026 at 2:42 PM
Reflective, comparative, and strategically framed.
The old manager-versus-maker split gets weaker when one person can direct a large amount of synthetic labor.
The best leaders in this environment still need to understand the work at close range.
The public edition strips away hiring-conversation residue and keeps only the organizational argument.
And I forgot to speak to the last bullet point in the role overview from [person] about the team lead of the internal AI tools team. And that is mentoring engineers and helping shape technical standards within the team.
I think the only challenge that we're going to have on that front is really helping me to synthesize the raw, pure signal that I bring to the table, the pure point of view, because it really is like if Cyclops the X-Man took his visor off: that is my point of view on AI coming out, and it is a bit much for a lot of people. And if I could find someone, maybe like a Jean Grey, who can match my energy to some extent, it is such a beautiful thing when that can happen.
And that's kind of, you know, the level of the team that I'm hoping to be a part of: the sort of obsessive team where it's a foregone conclusion that we will be staying on top of this stuff day to day, and that we will be bringing a certain amount of foundational passion about the problem space and understanding of the opportunity of the moment. That sort of characterizes me and a lot of my colleagues right now.
So obviously, I'm looking to carry forward that momentum because, again, the moment is incredible. And I think I have some substantial wins under my belt, enough that I've gone through the process of hitting the common pitfalls and everything that goes into creating these compliant, secure, risk-managed implementations of AI solutions.
I think that there's so much there that I can lend in terms of mentoring credibility to the rest of the team, and I can speak from the perspective of somebody who has rolled these solutions out. But that being said, I don't want that to pigeonhole me, because it would absolutely not surprise me if, for example, you told me that somebody else here was in contention for the team lead role, right?
And maybe they were somebody who came in with a very strong pedigree, however you would put it, in terms of AI stuff. And if you had someone like that, it would not be any amount of a tragedy for me on a personal level to, for example, take the individual contributor role and to be working under somebody who, like I say, can match pitch with me on my neurotic madness and obsession with AI, but maybe is even a step above that, right?
Like where they have sort of synthesized that they've already sort of reached the point that I'm at here today. Maybe some amount of time ago.
So they're a little bit ahead, and they would be able to lend me and offer me that perspective, even though I wouldn't have that leadership role. I haven't looked at the bands super closely, but even if, let's say, the leadership role came with a little bit higher pay, as it should, that would not be a single shred of a loss to me in this context, because the value to me would come from working in close proximity to that level of an obsessed team of people who understand, like, by God, we're getting a chance to be on the ground floor of implementing AI tools for a space agency. Like, frankly, if you hear that and you don't immediately lose your mind,
I'm not sure that I would personally recommend that you belong in this particular sector doing this particular work, because it's sort of a poop-or-get-off-the-pot type of situation. There are people who are going to come in, if not today then in great, great numbers in the coming months and years, to actually do this, to have an impact, to shorten the cycles, tighten the loops, and, for God's sake, get a little bit of a hockey-stick shape happening on this acceleration chart, which is what I want to see.
And if it sounds like you want me to be a mentor in that capacity, I'm absolutely happy to be a mentor in that capacity. But I will say, I do take more of a kind of a peer-to-peer approach.
Generally, in my experience, it comes from being a team lead of a small team, which sounds like a good fit in this particular context. But I don't have a lot of experience, for example, managing a giant floor of engineers, or, let's say, an entire building or organization, in that context of scalable, high-level, top-down leadership. If I'm going to be pontificating and prescribing and calling myself the leader of the team, I would absolutely expect to be right there with them, getting my hands dirty and working alongside them, giving them the foundation, the environment, and the substrate upon which to bring their solutions to bear for [potential_employer].
I think that is a little bit of the picture that I would want to paint for you in terms of my leadership style, right? And I'm absolutely happy, again, if you want to see whether you can harness that kind of obsessive madness at the individual contributor level, or, on the flip side, if you want to go absolutely full steam ahead and be this innovative agency with very forward-facing ideas, where it's very clear that here's an agency that's going to plug right into the future.
Very compelling story, I think, to be told between us on that front as well. I think I could be a mentor, for sure.
Chapter 19 — AI-First Engineering Principles
AI-first engineering principles
This chapter preserves the engineering thesis from the original interview-prep material while removing the posture of speaking to a specific employer. What remains is a compact statement of what AI-first engineering actually demands in practice.
Feb 9, 2026 at 2:50 PM
Focused, technical, and sharpened for public reading.
AI-first engineering is fundamentally about workflow design, not slogan-level enthusiasm.
Tool fluency matters because the leverage comes from compounded execution, not one-off demos.
The public edition keeps the principles and drops the interview-performance tone.
Alright, so moving on here on the interview prep. We are out of the role overview section, and we're going to read through the next part.
He basically closes out that section by saying the environment is very hands-on and engineering-driven, with a strong emphasis on using AI tools and LLMs within the development workflow. That's music to my ears, baby, because it's probably been about two or three years.
Like, I'm not going to say this to the interviewer, but between you and me, it's been about two or three years since my job has meaningfully involved writing code in what I today will call the old way. Where I'm in there, you know, I've got my IntelliSense all dialed in.
I've got this, I've got that. I'm writing functions, I'm writing comments, I'm writing granular syntax, I'm executing, and it's exploding in my face because I forgot a semicolon. The old way, right? This is the old way.
It's been some time since I've done business that way, simply because I am able to orchestrate and conduct and achieve, meaningfully and provably, the same output as if I were to sit down and work on something over the course of three weekends to bust something out according to my own spec, doing things the old way. I think you absolutely need and will benefit from that experience with the old way, because it means that the way you think about solving complex business problems through software is going to be grounded in the ground-truth reality of where we came from, because that's still the reality that we live in.
It's simply that we all have this magical hammer in our hands that can transform into whatever tool the job calls for. That doesn't mean you can't build a terrible product, and it doesn't mean you can't hurt yourself with that magical multi-tool. That's what we're looking at.
And again, that's kind of why I say it's equal parts innovative, visionary idea-crafting and risk management, ramification consideration, and compliance rigor in this day and age, right? So I'm looking at this, and I'm seeing that you guys are all in on AI.
You're investing heavily in AI and LLMs. And I think, like I said before, I have incredibly deep thoughts about what it will look like to participate in a software engineering organization.
Let's say if we were to jump two, three, four, five years in the future, and some of the tools that I believe will be sort of commonplace, some of the ways that I think people will be tested on their ability to architect solutions and to, again, demonstrate some capacity of being a 10x, 100x through the power of AI, right? Like the sort of nature, just like the nature of education will change with AI, the nature of, obviously, the roles that we're looking for here will change.
And I think you're going to see this sort of solidification. Wait, what is this? Solidification. Yeah, that's a word.
Solidification of this layer of people like myself who maybe had the opportunity to work one, two, or three decades in technology in that sort of golden age of Silicon Valley dominance because you know we slurped up a lot of the best practices in terms of embedding that institutional knowledge in our brains. And then when me and my colleagues are sort of cross-pollinating across organizations, the net result is again, the rising tide lifts all ships, and we get kind of this aggregate, just like LLMs, right?
We get this aggregate statistical average of the best practice. And I think all of humanity benefits when organizations and processes run efficiently.
And unfortunately, up until this point, it has required that human beings sort of behave as robots to facilitate that important consideration. And now, moving forward, we can obviously alleviate that suffering significantly, right?
So that is why I really, really like to see that they are kind of LLM first when it comes to writing code. And I'm sure, you know, they probably have a roster of purists on the team as well that are maybe inordinately concerned about sloppification of the internal code bases because it is very real and it is very possible.
And it's something you stay conscious and cognizant of simply by, you know, doing the groundwork. You need to do the heavy lifting to get in there as an engineer and say, these are our codebase's codified best practices, right?
This is what we expect of code written in this codebase. We have helper functions: use them. We have components: do not duplicate them, do not re-implement them, do not reinvent the wheel, and do not introduce new paradigms that challenge and contradict existing structures and decisions.
These are all very, very important considerations, and the reason why open source maintainers are so mad right now: people want to beef up their GitHub portfolios, so they will simply set an agent to task to go find some change they can get accepted, to say, I'm a contributor here, I'm a contributor there, right?
So that's a very real problem for them, and the way it manifests in the corporate world is: you know, I just had to get it done, I was on a really tight deadline, I just kind of plugged it in, I had some early indications that it was working the way I wanted it to, so I pushed it through and operationalized it end-to-end, and then I realized it either is misleading or broken, or, God forbid, even kind of dangerous, exposing some level of undue risk in the way the solution was implemented, right? So these are the problems of today.
They are tomorrow's problems, here today. So maybe you're somebody who considers themselves a state-of-the-art, we'll say, agentic software engineer, because everybody loves that term right now, agentic software development orchestration or whatever. That's fine. I like it. It's Wild West, man.
Because not only do you have this technology that just came out, where to be using, let's say, Codex at all is bleeding-edge, state-of-the-art, new technology. But if you take that technology and use it to do something brand new, you are implementing new technology within new technology.
And when you hit that, that is Wild West, baby. That's where we're at now. So we basically need to very carefully, very carefully, wade a little ways out into the water here and broach the subject of how we dive in fully and how we get AI to a level of full saturation.
How do we get it on the ramp up here? So what that means in the short to medium to long term trajectory here for us is that we want to be an AI-enabled organization.
We want to be very cognizant and aware of those areas where we can be leveraging intelligence on tap, putting the AI into every square inch of the business. If something has any level of consequence to our bottom line, there's no reason why we can't be automating away, if not completely then substantially, the need to pay attention to some of this ground-level crap that prevents us from fully embodying the highest level of executive agency and professional self-actualization.
We're free, so we can steer the ship now. We can drive, we can drive the car.
We don't have to be fishing around for, you know, whatever metaphor would make sense here; you can slot one in. But we're free, and we need to build out the blueprint, the schematic, the scaffold of the AI-enabled organization of tomorrow.
Chapter 20 — What Advanced AI Organizations Should Want
What advanced AI organizations should want
This chapter has been rewritten from direct interview preparation into a statement about what advanced AI organizations should actually want from technical operators. The emphasis is on systems judgment, self-direction, and the ability to work comfortably at the edge of new tooling.
Feb 9, 2026 at 3:52 PM
Direct, selective, and less exposed than the raw source.
The best technical operators in this moment combine curiosity, velocity, and quality control.
Organizations that want AI leverage should hire for judgment under uncertainty rather than checklist familiarity.
The public edition removes named recruiters, interviewers, and company-specific self-positioning.
Alright, so just kind of moving down the line a bit more. The last one was about the line where they said it's very hands-on engineering driven with emphasis on AI tools.
I know what they're going for here. I know what they want. And, you know, frankly, the prospect of that is incredibly exciting.
And I know that [person] is already in that same mindset. I can tell just kind of looking at his background and being able to tell that I'm dealing with a smart cookie here.
And someone who's kind of been there, and I think maybe even has a stronger ground-level understanding of the role that these tools are going to play, right? Like he's able to connect the dots in a way very similar to me, so I think there will be a kind of simpatico kinship there.
And again, I would be more than happy to report up to [person] at [potential_employer]. So let's see. The recruiter here is spelling out for me what it is they're likely to want to know about.
It says, what [person] will likely explore. During the call, [person] will want to understand your experience building internal platforms or operational tooling, how you have used AI, LLMs, or automation in your development work, your approach to solving complex business problems through software, experience leading projects or mentoring other engineers, and why this role and sector interests you.
And again, we could sit here and take 20 minutes on each and every bullet point, but in the interest of avoiding rehashing a lot of what I've already talked about in the role overview, I'm going to rattle off some stuff here. The first bullet is pretty much identical to the first question, and I spoke a lot about my extensive experience in that regard.
It's sort of something I've never been able to escape in the entire length of my career. You can trace it all the way back to 2012, 2013, when I was a customer support tech at ProBoards and I was writing a lot of our most popular community forum plug-ins, right?
Like those are a form of internal tool for us and our community leaders who were on the lookout for kind of new and novel ways to manage their communities and to leverage a deeper integration with the ProBoards platform and its APIs to be able to achieve some of that. That's all the way back then. And then at my most recent role, that's also what it was all about in that I was implementing agency ops enablement tools to help make agents more effective and to help build situational awareness for the team leads and the executive team.
Because obviously what's happening on the sales floor is the lifeblood of the company if you're talking about a sales company. So yeah, there's a straight line from my very first job as a customer support tech all the way up through to where we are today, right?
Where I'm still doing it to this day, and I'm always going to be doing it. And I think the rest of my life is, I'm seeing it more clearly now than ever that the rest of my life will be so driven and so underpinned by my responsibility to sort of roll out this technology in a responsible way that benefits us all. I mean, come on. Like, that's what it is now.
I think you have people on the OpenAI team right now at the senior levels of leadership who are saying they want it to seem a little bit outrageous, the idea that you might start a project the old way by, for example, running create-next-app yourself, setting up the configuration, doing the boilerplate work by hand and setting up those best-practice defaults, rather than doing the initial planning work of discussing different options, trade-offs, and solutions with the AI.
They want to make it so that if you do anything other than start a conversation with the AI as your first entry point into a new project, it's a foregone and obvious conclusion that you're starting that project at a bit of a productivity disadvantage, because, of course, you could have done it much quicker with AI. We're targeting this, even though we're just being cheeky.
We know that we're basically already there, and there's no actual work that needs to be done between now and quote-unquote achieving this goal, except that the public consciousness needs to come around to the fact that that is the ground-truth reality today. I'm not reaching for Ctrl+F anymore.
I'm simply asking my agent questions. I am asking for those evidentiary roll-ups.
I am obviously doing extremely granular, ground-level review of the outputs to ensure that we are not hallucinating ourselves all the way down the yellow brick road and that we are staying strictly adherent to the original needs and whatever it is that needs to be done. And again, a lot of what I tend to do is that sort of planning and exploration with the AI, especially for pre-existing projects, the sort of monolithic code bases where there's a lot going on inside, with the idea that you could jump in, get ramped up, and maybe be productive within that code base in a short period of time.
The ground truth reality at Fortune 500s is that they don't expect engineers to really do anything for weeks and weeks, or longer, to be clear. They expect them to sort of sit around.
Things are absolutely in a huge state of flux right now. The upheaval is ongoing, right? But the previous expectation was that people coming in would not be in a position to meaningfully understand or contribute code in, let's say, the first few days or the first week of onboarding.
But I think moving forward, there's a little bit less of an excuse for that, knowing that everything has changed with regards to AI, LLMs, and automation in development work, right? We are rapidly approaching a situation where the software can write itself and it just needs to be steered.
Again, I believe it's a question of how far along we already are. Are we still very, very early on, or does this represent where the state of the art will remain?
I don't think anybody is placing any bets on the latter there. It's all going to be the former.
We expect that these tools will only get better and better and better. But again, you know, we cannot be sitting around waiting until some sort of future state.
We, for example, cannot be sitting around waiting for warp drive. We have to depart now with standard propulsion technology.
So that's a little bit of like the ground truth reality of what it means to be implementing AI technology in this day and age. But I, yeah, again, I think when it comes to my own personal projects, by the way, I hold nothing back.
I basically have an offline agentic orchestration platform that I have built, one that taps into the GPU cluster I've assembled, the NVLink machine I've set up in my garage, a little home lab just for my own operational purposes. Being able to set agents to task here and there, building out an agentic knowledge base on any particular situation or problem space, is a big part of it, and there's a lot of value in just signal summarization, right?
Taking the deluge of my digital footprint on a day-to-day basis, everything happening with regards to anything I might care about, and delivering that to me in as few words as possible, in as lossless a form as possible, is an absolute art form right now. It is absolutely an unsolved problem, actively being solved as we speak.
And we should expect that as time goes on, you know, things will just get better and better in that regard. So, again, circling back to this idea of using AI tools and LLMs within the development workflow, there is no other way.
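That "deluge in, few words out" roll-up can be sketched as a toy pipeline. To be clear, this is a hypothetical stand-in of my own: a real system would get the importance scores and the one-line summaries from a local model, not from hand-labeled numbers, and the event shape here is invented for illustration.

```python
from collections import defaultdict

def daily_digest(events, max_items=3):
    """Compress a day's event stream into a short per-source digest.

    Each event is a dict: {"source": str, "text": str, "importance": float}.
    Group events by source, then keep only the top `max_items` per source,
    ranked by importance (highest first).
    """
    by_source = defaultdict(list)
    for event in events:
        by_source[event["source"]].append(event)

    digest = {}
    for source, evs in by_source.items():
        top = sorted(evs, key=lambda e: e["importance"], reverse=True)[:max_items]
        digest[source] = [e["text"] for e in top]
    return digest
```

The point of the sketch is the shape of the problem: many noisy streams in, one small ranked structure out, with the lossy-compression judgment delegated to a scoring step you can swap for a model later.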
I mean, we're at a point now where, when people say they're worried for junior engineers getting jobs, and that the seniors are maybe elbowing out some of those juniors, it's because of the following very specific role in which juniors used to provide a lot of value. This is, boots on the ground, what it looks like for a junior developer to become obsolete.
Basically, you have, let's say, for the purposes of our conversation, an internal tool that's used by some or all of the company, and problems come up in the course of day-to-day business: we need to go in and fix this aspect of the template markup because the data changed, and this and that, and then, oh, there's a test failing now. Can you go look at the error output and make a one- or two-line change in the code base to get us back up and running?
And that is, again, really important foundational experience if you are someone who considers it important to come at programming from first principles, with a ground-up understanding, and who believes that before you go into the workplace and call yourself a professional developer who uses AI tools, you should first learn to do it the hard way, the way we wrote code back in, let's say, 2016, 2017. There was a bit of a plateau then, and some of the new technologies coming out, especially on the web, were not that exciting, right? Certainly not as exciting as the technologies we have today, which are coming to disrupt not just our development workflows but our entire lives.
And forever, until the end of time, nevermore will we turn back from here. But, you know, I'm obviously super comfortable and well versed in Cursor, Codex, Antigravity, Windsurf, GitHub Copilot, you name it.
I believe Amazon has a solution that I haven't even evaluated yet.
Basically, everyone and their grandma wants a cut of this pie and is coming to eat Cursor's lunch right now, because they know this is a rapidly expanding class and category of tool that is basically here to change everything about the way we build software. So everyone is really, really hungry to get in there.
And again, if you hire me, part of the institutional knowledge you're getting is, for example: you may not have known that OpenAI right now has a free offer where you can get five free Codex seats for a month. Just sign up today. They'll give you a free month's worth of five business-class seats, which is almost a $200 value just for signing up.
You can do it even on your personal account, so there is no excuse to have any kind of limitation or want for tokens. Or maybe you use Cursor, you've hit the rate limits, and you're like, I'm not going to spend any more on these on-demand tokens. There are people who are not aware that you can simply branch out and adopt other sources of tokens and compute throughput for the purposes of pushing things forward and getting agents to pay attention to what it is you're trying to get done on your computer.
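That branch-out move, falling through to another token source when one is rate-limited, can be sketched as a toy router. The provider shape here (a name plus a callable that raises on exhaustion) is my own hypothetical stand-in, not any vendor's actual SDK.

```python
class RateLimited(Exception):
    """Raised by a provider callable when its quota is exhausted."""

def complete_with_fallback(prompt, providers):
    """Try each (name, callable) provider in order, falling through on rate limits.

    Each callable takes a prompt string and either returns completion text
    or raises RateLimited. Returns (provider_name, text) from the first
    provider that answers; raises RuntimeError if every provider is tapped out.
    """
    exhausted = []
    for name, call in providers:
        try:
            return name, call(prompt)
        except RateLimited:
            exhausted.append(name)
    raise RuntimeError("all providers rate-limited: " + ", ".join(exhausted))
```

The design point is just that the ordered list encodes your cost preference: cheapest or most-subscribed source first, on-demand tokens last.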
So I think there's nothing but agentic workflows anymore, there's nothing but AI tools and LLMs within the development workflow. I almost think they themselves have muscled us humans out to some extent.
So now you have the senior-most staff, where you get so much more value out of putting an LLM programming tool in their hands and saying: look, you can review the output here and make sure you're getting strong, consistent, valid syntax, obviously, as a base level, but also, are we making terrible vibe-code-y design decisions that will result in things exploding later? Is there a weird code smell in the repo as a result of multiple independent, maybe misinformed vibe coding sessions that ended up leaving this code base a little bit battered?
I think the beauty of AI in software development shows against what I saw back in 2016, which is when I really started stepping into more of a leadership role in technology, a more directorial role in certain organizations, like Aerie Software.
What I was seeing was that it was incredibly expensive to write some code just as a mock-up, just to prove out a concept, because of the deep complexity of our platform, our flagship solution. So it was very much a waterfall situation.
So we would set out to build this massive feature for maybe counselors, maybe teachers, right?
Chapter 21 — Workflow Transformation In Practice
Workflow transformation in practice
This chapter stays focused on workflow transformation while dropping the residue of screening calls and employer-specific validation. The public edition keeps the practical case for AI-native development habits and a more fluid human-machine division of labor.
Feb 9, 2026 at 4:10 PM
Pragmatic, process-heavy, and forward-facing.
Workflow transformation matters more than isolated tool tricks.
The chapter argues for a new default operating model rather than a one-off productivity hack.
Company-specific context has been pared back so the workflow thesis can carry the chapter cleanly.
Some dashboard that integrated everything together in a very rich visualization and handled some deep, deep background computation to help manage the classroom.
I was saying, yeah, we would set out to build this massive new feature for teachers or counselors or whatever, right? And we would start to build this, and our understanding, obviously, of the best possible solution would evolve over time.
So it would pin down a developer who basically had to sit there and very granularly do the job of a product owner, because only our developers had the kind of bespoke, arcane knowledge of interfacing with this back end and translating user pain points into actual possible solutions. There was simply a lack of innovation at that company, unfortunately.
They would tell me often and frequently that I was sort of the only innovative driving force that anyone could really think of, which was both a compliment and, at the time, extremely infuriating. And it's not to say that I can't appreciate something to anchor me, save me from my worst impulses, and keep me focused on the ground truth realities of the business.
But what I'm getting at here is that we would end up spending all this time building those features out, when today I would simply spin up a few agents and ask for a few different concepts, maybe a few different UI concepts, which I would then collate, aggregate, and pare down to something even stronger using my own tastes and sensibilities and my own understanding of the problem space. So: incredible momentum and velocity, and the ability to move quickly on a prototype solution.
And of course, that's incredibly valuable to me as somebody who is often able to build out a basic prototype that works just well enough to get people extremely interested. More often than not, I'm then pressured to get those prototypes polished up, finalized, and fit for production deployment sooner than the original requirements or spec really called for, which catches me flat-footed and forces me to move quickly.
Again, that's fine when I have to do that. I'm absolutely fine to do that.
But that's kind of my shtick, I find. So, absolutely: AI, LLMs, and automation in the development work.
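The spin-up-a-few-agents-and-curate move can be sketched in a few lines. The labels, the agent callables, and the score function are all hypothetical placeholders of mine: in practice the agents are real model sessions and the score is the reviewer's own taste, not a formula.

```python
def fan_out_concepts(brief, agents, score):
    """Fan one brief out to several agent callables, then rank the concepts.

    `agents` maps a label to a callable that turns the brief into a concept
    string; `score` stands in for human judgment over the results.
    Returns (label, concept) pairs, best-scoring first.
    """
    concepts = {label: agent(brief) for label, agent in agents.items()}
    return sorted(concepts.items(), key=lambda kv: score(kv[1]), reverse=True)
```

The shape matters more than the code: parallel cheap drafts, one expensive curation pass, instead of one developer grinding out a single waterfall concept.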
I think even if you were to just use it as a way to very quickly prototype new screens and new functionality in an existing code base, to have your agent go in and operate the back end, it's valuable. For example, at [company], the application that I built, the transcript intelligence analysis pipeline and personalized AI call coaching and agency ops enablement pipeline, blah, blah, fucking blah (I gotta find a shorter way to say that), is built on Django. Not our whole stack, to be clear, just that application.
So we have a very opinionated point of view about how we approach different classes of analysis problems. For example, we have an accelerary integration to handle dedicated task management and work lanes.
And, a little note to self here: I should probably brush up on our code base and be able to speak to it a bit. I think we'll get there; we'll go into the technical interview and talk about solutions, and I will totally be able to hold my own there.
And I need to make sure I don't get too lost in the weeds, because he's just going to say, I'm going to stop you there. We're not talking about code-base stuff as much here; we're talking about whether or not you're a good culture fit and all this shit, as laid out in this email, right? So, no worries there.
Yeah, I absolutely have used it. I need you to go through this chat and the last chat that I did and pull for me something really punchy. And obviously, this is a general hanging to-do for you.
At the end of this, we will go through, and for every bullet point that I read off to you, you will give me a fucking battle card, right? Three to five bullet points I could go into, based on all of this rambly-ass shit that I told you, where you give me something really strong to say about it. Because, again, there's too much to say; even just sitting down here and talking about things that I think could be relevant and come up in the course of this, I can sit there and talk forever. It's incredible.
I am amazed myself that someone who spent so much time just quiet in the dead of night, clickety-clacking, tickety-tacking on my mouse and keyboard, can also unequivocally fucking rant, bro. It's amazing. So yeah, we have AI in the development work, baby. Did we get that general to-do? Sorry, yeah, the battle card. Good talk.
Chapter 22 — AI's transformative potential and responsibility
Transformation with responsibility
This chapter pairs the sourcebook's sense of technological possibility with a harder insistence on responsibility, design discipline, and deliberate deployment. It remains close to the original because it already reads like a clean bridge between optimism and obligation.
Feb 9, 2026 at 4:18 PM
Measured, persuasive, and mission-oriented.
The chapter argues that scale and responsibility must rise together.
It strengthens the manuscript by refusing the false choice between excitement and caution.
The edited version keeps that balance and strips away little else.
And my approach to solving complex business problems through software, right? That's interesting, because when I hear that, I really do think back to my time at Aerie Software and that funny environment where everything we built was seen through the lens of how many clicks it imposed on our teachers, our counselors, our end users.
Because every time we would implement a new workflow where they had to go in and, let's say, step through a flow for each and every student in the classroom, you're talking about obvious exponential work creation, the N+1 problem, or whatever the actual verbiage is for this issue of one person having to do a lot of work for each and every entity in their context space, right?
Teachers would complain about how many clicks it takes; they would literally sit there counting that they had to click this, click that. And that betrays the fact that there are two separate classes of technical user, right? There are the people who feel right at home, very comfortable in their own skin when they're using technology, be it the phones, as you see with the younger people, or obviously the PC generation of millennials and Gen X and above, right?
So, all the way back to the advent of the microprocessor, there has been this sort of original fount from which all the understanding and implementation of technology flows, and it needs to make its way downstream; that happens in between the place where the innovation is occurring and the rest of the world. And I think sometimes we do lose a step; sometimes we forget that not everyone is a first-class user of this technology, or has equitable access to it, or even comprehends or cares about it at all, right?
And we should make sure that they are not completely disenfranchised, and that at the very least, we are painting a very compelling picture that you should join us on the AI side because this is the technology that is going to care for the sick and elderly. This is the technology that is going to cure us of all suffering and disease, right?
I think we can state that in no uncertain terms, fortunately enough for us. So, we have this very Death Star-sized needle that we have to thread right now.
We have a Death Star-sized shot at utopia, and I think it is the pressing concern of everyone alive today to point our civilization toward that North Star and demonstrate, if we can, a little bit of unity in this time and get from here to there, whatever that means. We have to make sure we don't do something that's going to get a lot of people hurt, that's going to ruin a lot of lives, because I think that's entirely possible.
And again, we don't need to be talking about how some of the other maybe autocratic world governments are going to be deploying this technology for the purpose of pure evil. And we need to be a counterbalance, a counterweight against that here. It's our responsibility. You know, it makes me sick.
It's a double-edged sword, obviously, and so was the idea that we could split the atom and do all these other things, but it's just the reality. It's the harsh reality of the world, right?
So my approach to solving complex business problems rests on this belief: even before AI, technology exhibited and proved out an incredible capability and potential to include and cater to each and every one of us, to let us get our own slice of this digital world, so we can all be Ready Player One and find some entry vector that works for us into this digitization of our globe and of our human species. It's been fucking amazing. And here comes AI, right? It's a Pandora's box. It's now open. We're never going back.
And I think it's going to amplify our ability to empower those who have not had that power and to boost, you know, obviously the intelligence of people who could benefit from this stuff. And I think there's so much solution fitting, there's so much greenfield, blue sky exploration of what we can do, what we should do, what we probably will do, right?
All of these features and facets of the next two, three, five, ten, twenty, fifty, hundred years, you gotta be thinking now, you gotta change the track because a lot of the assumptions are now invalid, and that's so, so exciting, but it's also so, so urgent at the same time. So it's just, you know, what a fucking time to be alive, man, right?
Like, this is fucking amazing. I love you.
Chapter 23 — Execution Proof And Operating Style
Execution proof and operating style
This chapter preserves the desire to show proof of execution while removing the literal pitch posture of the original source. The public edition turns a self-sales chapter into a more sober account of operating style, delivery discipline, and credible range.
Feb 9, 2026 at 4:25 PM
Assertive, but edited toward restraint.
Execution proof matters most when it clarifies how the work was approached, not just what was built.
The chapter benefits from replacing chest-thumping with cleaner statements of operating style.
The public edition keeps the signal and cuts the private hiring-performance frame.
And what I would say, if I were to toot my own horn a little bit about this interview with [potential_employer], is this: we can state undeniably that we're in a world with so much investor pressure to show that you are building toward something, that you're getting some traction with your implementation of AI, and that your build-out is not just token burn, burning tokens for the sake of it, but that you're actually starting to enjoy and benefit from the promise of tomorrow, the flywheel effect of our solutions today, right?
Where those solutions are getting better and better, and presumably, and in combination with the solutions, the better models, LLMs coming out in the future, you are going to be able to see some emergent effects, higher-order effects that you did not expect, prescribe, or anticipate that are actually going to change everything for you and give you a competitive advantage that you could kind of sit on for a while and then maybe even ultimately operationalize for other organizations if you really have something kind of salient and meta enough to be a compelling enough solution. And that's an example of what I am. I have done that.
I have, in no uncertain terms, stepped into a situation, and it really was three or four months of strict agentic development, literally in the code mines. I solo-developed this platform that basically saved the company and gave us any amount of a leg to stand on for this stuff.
So I think that's the kind of person that you're getting with me. You're getting somebody who is not speaking in theoretical terms about what they would do or speaking in terms of what they wish that they could kind of step into and experiment with in terms of this technology.
We have somebody who, apropos of nothing, unexpectedly at every level of his organization, of his own volition, accord, and autonomy, created a solution that completely changed the game for my organization, one that actually looks like it might change the game for telephone sales in general, and that could be the start of that entire empire, right?
So somehow I want to figure out how I can convey that aspect of my value on the call as well.
Chapter 24 — Motivation, Ownership, And Sector Fit
Motivation, ownership, and fit
This chapter takes the underlying emotional logic of the original and recasts it as a public-facing statement about motivation, technical ownership, and why certain domains matter beyond career opportunism. The public edition removes intimate family specifics and interview-targeting while keeping the sense that the work must connect to something sincerely felt.
Feb 9, 2026 at 6:35 PM
Personal enough to feel real, but edited toward composure.
The chapter works when motivation is presented as conviction rather than disclosure.
Operational ownership matters here as much as vision or ambition.
The edited version keeps emotional legitimacy while avoiding private spillover.
Chapter 25 — Operational Proof Under Scrutiny
Operational proof under scrutiny
This chapter compresses interview-specific proof points into a public-facing account of operational credibility. The emphasis is on demonstrated execution, system ownership, and the ability to translate large claims into working artifacts.
Feb 10, 2026 at 7:01 AM
Professional, practical, and less exposed than the raw conversation.
The important thing is not the specific interview but the standard of proof it forced into the open.
Operational ownership remains the strongest kind of credibility in the sourcebook.
Named companies, interviewers, and role specifics have been removed from the public edition.
Chapter 26 — Empowering DIY AI Solutions
DIY AI beats passive software consumption
This chapter argues that individuals can now build narrow, high-value software for themselves instead of waiting for packaged enterprise products to define the possible. The public edition stays close to the original because the chapter is already a strong, public-safe expression of the Computer 2.0 thesis.
Feb 11, 2026 at 1:50 PM
Pragmatic, empowering, and future-facing.
The chapter treats AI as a way to move from software consumer to software author.
Its power comes from concrete agency, not abstract platform ideology.
The edited version mainly compresses and clarifies an already strong lane.
SaaS companies hate me because I give away all the AI secrets that lay bare that their solutions have no moat, and I represent the type of user equipped with AI in such a way that not being able to purchase or install a pre-existing software solution for a specific case is no longer the deal-breaking blocker it once was. With a reasonable allocation of compute relative to the complexity of the ask, I could simply sit down and have my own solution, especially given that the solution is being developed for a customer of one. That paradigm is crucial context for Computer 2.0, my framework for the next generation of building and using software.
When you are building for a customer of one, and that customer is yourself (and I say customer of one, but I could also mean you and your family, your immediate friend group, your immediate community), you only need a very narrow slice of the actual requirements of the highfalutin, high-level enterprise crap that previously defined the class of solutions someone like me would come across in the course of day-to-day technologist work.
Now it doesn't make sense for anything else to be the case: I would simply consider first building my own solution. And of course, in the great open-source spirit, I would make those tools available to the rest of the world as well.
But yeah, let's think about that. Let's be thinking about that. The software-as-a-service companies hate me; they want to pay me to go away, but the offer hasn't been high enough yet.
Chapter 27 — AI Narrative Control Concerns
Control of the story matters
This chapter focuses less on raw capability than on narrative control: who gets to define what AI is, what it is for, and how the public should imagine its future. The public edition keeps the media literacy argument while reducing some reactive edge in the original delivery.
Feb 11, 2026 at 2:09 PM
Watchful, media-savvy, and skeptical of gatekeeping narratives.
The chapter argues that story-shaping is part of infrastructure, not just commentary.
It warns against letting a few institutions define the public imagination of AI.
The edited version keeps the caution while sounding less reactive.
A lot of people are making a lot of noise about what the future is going to look like, and there's just a lot of hullabaloo about, you know, Cluely and how this whole ecosystem came up for the purpose of cheating on job interviews, and this idea of OpenClaw getting acquired, this idea of Clawbook or whatever the social network getting acquired by Meta, by OpenAI. Ridiculous, man. I think there's more attention being paid to managing how the public perceives the narrative of AI than ever before.
You have to carry that with you now in your brain. And, you know, we are entering an era of interestingly increasing info-hazard risks, an era where it's a lot more about controlling how people feel or think about a certain thing than ever before.
But there's a precedent here, there's an American way to go about this. And there's a middle ground to be found once the dust settles.
And, you know, I think that comes once the average person has a better understanding of the ground truth reality we're talking about. But yeah, these high-agency individual acqui-hires are pretty ridiculous.
Chapter 28 — Open Tools And Strategic Restraint
Open tools and strategic restraint
This chapter has been reshaped around a cleaner public question: what should be shipped openly, what should be held back, and how should a serious AI project think about release timing without narrating its private deal context to the world. The public edition removes leverage talk, collaborator-sensitive texture, and acquisition-style framing while keeping the open-tools philosophy.
Feb 11, 2026 at 2:20 PM
Mission-driven, selective, and strategically controlled.
The chapter argues for shipping aggressively without pretending every asset belongs in public at once.
Open-source posture is treated as a strategic design question, not a purity contest.
The edited version keeps the product philosophy and removes live commercial signaling.
The big model providers, Google, OpenAI, Anthropic, they want you throwing their most expensive models at your highest-volume, stupidest workflows, the ones most compatible with small models. They know this is a class of money that could evaporate overnight if more were done to educate the average organization on the absolute fucking effectiveness of these small models. It's a risk.
You know, they want to keep the gravy train going a little bit longer. And to be so, so clear, they are succeeding at keeping everyone distracted with the capabilities of the latest models, which means nobody except the most cost-sensitive shops is bothering to look down-market, though maybe those shops will increasingly become more common. But again, I think the purpose of Rare Signal is to target a sub-enterprise class of technical power user.
The purpose of the organization, of Rare Signal, is to elevate the average user into being a power user, or at least having elements of a power user, right? And then to target and expand the class of solutions available to those power users with regard to budget, mid-range, and maybe high-end consumer-grade hardware.
So if you have a MacBook Air, you can run this class of the solution, but if you save up and you're able to get a 3090, you can run that class of solution, right? Different price buckets, and part of this will be helping people understand those price buckets, to be clear as well.
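The price-bucket idea above can be made concrete. This is a hypothetical sketch, not the actual rare signal resource calculator: the tier names, thresholds, and the crude VRAM heuristic are all illustrative assumptions.

```python
# Hypothetical sketch of the "resource calculator" idea: map a machine's
# compute class to the rough tier of local-AI solution it can run.
# Tier names and cutoffs are illustrative assumptions, not a spec.

def solution_tier(ram_gb: int, vram_gb: int = 0) -> str:
    """Pick the largest local-model tier a machine can comfortably serve."""
    effective = max(ram_gb, vram_gb * 2)  # crude heuristic: dedicated VRAM counts double
    if effective >= 48:
        return "high-end: ~30B+ models, long-context summarization"
    if effective >= 24:
        return "mid-range: ~7B-14B models, agentic coding loops"
    if effective >= 16:
        return "entry: ~3B-8B quantized models, chat and small tasks"
    return "phone-class: on-device models only"

print(solution_tier(16))      # 16 GB MacBook Air
print(solution_tier(32, 24))  # desktop with a 24 GB GPU such as a 3090
```

A real version would also fold in bandwidth, quantization format, and the specific model catalog per bucket.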
That's going to be an important resource. So we are off to the fucking races right now, dude. I cannot overstate.
Like, I'm going to be shipping all today, all tonight, and I think probably by the end of this week, we are going to be in a place where we have such an expansive establishment of solutions out there that are so clear, and we can be iterating and refining on the messaging as we share out, but just to kind of send the base-level rare signal out to broadcast. So, talking about the Common Sense 2026 letter, happy 250th birthday, America, right?
And then, you know, moving on from there, we've got all these fucking tools, man. I don't know what to tell you.
Like, we need to spin up the GitHub. We need to get these tools out. We need to share them.
We need to make everybody aware of these tools. I'm not saying we need to go share, like, the Spotify clone. I don't think that I would.
But we can absolutely share out all of these ideas: the AI UI shirt booger system, the observatory system, the rare signal resource calculator for what class of tools you can look at. We could, you know, build out a sense of a workbook system, the roll-up summarization system.
Probably not workbooks, right? Because that's part of Ada, so we'll keep that stealth.
We won't be publishing anything to do with workbooks because we want to sit on that and see if we can turn it into more of an opportunity on the [product_that_i_built] side with a Merlife deal. But everything else, like, I need to show that I know this stuff like the back of my hand.
I need to roll out the rare spells thing. And if you like it, you can, you know, vibe code these things with rare spells.
It's an example of Software 2.0, I guess, where not only was the way it was built entirely agentic, but the way it is intended to be used is entirely agentic. So there is only one singular surface where your eyeballs and your mouse and your keyboard are meant to interact with anything that isn't the agent that is attached to the code base, there for you to operate the software in the way that you want.
That surface is just a basic UI for you to evaluate and assess the output of the agent and to experiment with the parameters it establishes for you to explore. You find something cool, you tell the agent you found something cool, and you iterate on it from there. You are not limited by whatever user interface is presented before you; rather, you step up to this more executive level of personal agency, saying, I'm going to take this concept, can you do this? You explore what is possible, what can be output and produced to match your experimental needs, because again, there is an incredible gamut of things I have been able to create that I will be open sourcing fully.
As a part of this, there are going to be default spells. I mean, I'm going to pare down some of the absolute bullshit.
I'm going to look at some of the stuff that I've been rating lowly. And when you're setting up this software, I'm going to set it up so that you start with my spellbook as a default, and you can absolutely go from there. My spellbook will be kind of an archived, point-in-time example for you to expand, if you want, in the sidebar there, where you can go through the spells that I created.
But you should tell the AI what it is that you think is cool. And if you're finding that you're asking for this spell or that spell and getting mostly derivative output that isn't much different from the example spells I provided, that is obviously context infiltration. You can address that by telling the AI not to put so much weight on the existing archived examples, and that it should be going out in a greenfield manner instead.
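That greenfield instruction could also be wired in mechanically rather than asked for in chat. A minimal sketch, assuming hypothetical names (`Spell`, `build_context`) and a toy two-entry archive; the real spellbook format is not specified here.

```python
# Sketch of a "greenfield mode": when requests keep coming back derivative,
# stop feeding the archived example spells into the agent's context at all.
# All names and the archive contents are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Spell:
    name: str
    prompt: str

ARCHIVED_SPELLS = [
    Spell("matrix-rain", "green glyphs falling down the screen..."),
    Spell("fireworks", "bursts of ASCII sparks over a skyline..."),
]

def build_context(user_request: str, greenfield: bool = False) -> str:
    """Assemble the agent prompt; greenfield mode omits the archived examples."""
    parts = ["You are generating a new ASCII-art spell.",
             f"Request: {user_request}"]
    if greenfield:
        parts.append("Ignore all prior example spells; design from scratch.")
    else:
        examples = "\n".join(f"- {s.name}: {s.prompt}" for s in ARCHIVED_SPELLS)
        parts.append("Archived examples (for reference only):\n" + examples)
    return "\n\n".join(parts)
```

The point of the toggle is exactly what the passage describes: down-weighting the archive is just a context-construction decision.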
I mean, I'll be handling this; I'll be building this right into the software, and I will be making sure that the readme is perfectly clear so you're off to the races and able to use it, and the agent will tell you exactly what this software is for and exactly what you can do with it.
And you can do some really, really awesome stuff that you can then roll into a visual portfolio, if you wanted to create one. You can obviously immediately convince all your friends that you're some kind of crazy hacker web god who is wielding ASCII powers beyond their comprehension.
And it's just fucking beautiful, man. I love it. So, God bless America, and let's get this bread, man.
Chapter 29 — AI UI authenticity detection tool
Authenticity in AI-made interfaces
This chapter explores a product concept for detecting when AI-generated interfaces feel thin, ungrounded, or obviously synthetic. The public edition preserves the underlying design critique and product instinct while smoothing out the raw brainstorm energy of the source conversation.
Feb 11, 2026 at 2:29 PM
Playful, design-literate, and sharply opinionated.
The chapter asks whether AI-made interfaces leave aesthetic and structural fingerprints.
Its strongest contribution is the insistence that taste and usability still matter under automation.
The edited version keeps the concept and trims the performative brainstorm texture.
And by the way, the shirt booger system, we can definitely spin out to also include an analysis tool, if you're really concerned that something you vibe coded doesn't pass the smell test. If it's reusing AI purple or some shit, if it's very clearly vibe coded based on the criteria we use to check, you can get an output analysis just by slotting in your own agent and giving it the specific rules and toolkit that we make available.
You know, I don't want to harp too much on the purple thing, because I think that's going to disappear pretty quick here. The latest thing I'm seeing is very, very prevalent: a kind of little dashboard style, very tight text, title headers and overlays, lots of dark-mode grid-type stuff. And again, aesthetics are aesthetics; it's just where the AI unfortunately chooses to park its fat rumpus when it provides all of these UIs to developers who are requesting generative UIs for their applications or their quick mock-ups, right?
So I think it's an unfortunate fact that the preference, taste, and statistically average output of large language models, when it comes to generating user interfaces, is such that you can kind of tell on yourself. Insert the Inglourious Basterds meme here, where you have three fingers up, and that's not how Germans say three with their fingers. In an industry that I think is increasingly performative, where people ask, are you one of us, or are you a fraud who's just using AI to skirt along, you need to be aware of this sort of stuff. If you're going to succeed, you need to be self-aware.
So if this helps somebody out there in a high-stakes context, sharing something with somebody who's a little technical, a little smarmy, right, then they can use this tool to check themselves and see if their UI gets a good rating in terms of originality and overall UI design decisions. We can get as granular as: did you use this very common header-sidebar navigation layout? Did you use this very common icon library? Things like that, right?
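Those granular checks could start life as simple pattern rules over a page's markup. An illustrative sketch, not the actual tool: the hex codes for "AI purple", the class names, and the icon-library names below are all assumptions standing in for a vetted rule set.

```python
# Rule-based smell tests for "obviously vibe coded" UI tells.
# Each rule is a regex over the page source; the rules themselves are
# illustrative guesses at the criteria described above.

import re

RULES = {
    "ai-purple palette": re.compile(r"#(7c3aed|8b5cf6|a855f7)", re.I),
    "stock header+sidebar layout": re.compile(r'class="[^"]*\b(sidebar|app-shell)\b', re.I),
    "common icon library": re.compile(r"(lucide|heroicons|font-awesome)", re.I),
}

def smell_test(page_source: str) -> dict:
    """Return each rule and whether the page trips it; fewer hits = more original."""
    hits = {name: bool(rx.search(page_source)) for name, rx in RULES.items()}
    hits["originality score"] = 1 - sum(v for v in hits.values()) / len(RULES)
    return hits
```

Slotting in your own agent, as the passage suggests, would mean handing it this rule table plus the page and letting it reason beyond regex-level matching.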
And the ideas are flowing now that I'm talking a bit more. I'm sure if I ran that by you, you would be able to take it into your vector database and spill out a bunch of other areas where we could be locking in on something really, really strong. So I just wanted to paint that out as a little bit extra for the shirt booger system, baby. And guess fucking what?
That can be kind of a through line that we start to push. Like, that's going to motivate and incentivize the younger generations, right?
We can be running this fucking country like a fucking business, dude. Let's hit our stretch goals, where we have so many robots powering our society.
Everyone's waited on hand and foot. You're given your first robot at, like, age 18, right? What the fuck, dude?
Like, this is what we can be talking about, dude. I want you to just go and think about that.
And think about, think laterally on the future picture I just painted for you. Think about that beautiful fucking utopia.
We've got a world where, as it stands today, we have this sorry-ass fucking showing where we're apologizing: you don't get your benefits until later, actually.
Sorry. What a sad fucking state of affairs, man. And we will do better than that.
Because guess what happens when everyone has their own fucking medical doctor on staff, their own fleet of housekeeping dogs and robots that keeps everything prim and proper, keeps everyone happy and executing at the highest levels of agency: there's going to be no fucking excuse left for sadness, for anything other than pure fucking signal.
What will be your influence on the blank fucking canvas of the future, motherfucker? What are you going to do?
Chapter 30 — Agentic development terminal pitch
Terminal-first agentic development
This chapter argues for a development environment where users operate at a higher level of intent and let agents work through the implementation substrate below. It remains a strong part of the public edition because the product vision is clear, legible, and not weighed down by privacy risk.
Feb 11, 2026 at 3:46 PM
Technical, product-driven, and speculative in a useful way.
The chapter treats the terminal as a launch surface for orchestration rather than command memorization.
Its core idea is that interface abstraction should rise with capability.
The edited version keeps the product thesis largely intact.
And let's not forget the way that Enterprise works here: they are rarely the source of the ground-level, grassroots, bottom-up innovations that actually end up powering these fucking enterprises to begin with, right? So Amazon did not invent Elasticsearch; they went in when it reached a critical mass of usability and they forked an open-source project, right?
It was very controversial, but it's within their purview; that's open source, right? So these people are not necessarily looking to get out there and spend all of their money iterating and validating these ground-level ideas to see if they would be able to get users, for example.
That's an expenditure of their people's time and money, an expenditure of their capital that increases their risk surface. They want other people out there doing the hard work, doing the heavy lifting, getting people excited, validating the ideas, being the ones on the ground floor trying this shit out. Which is why my ground-up idea for an agentic development terminal sits at the cross-section of early and niche, but popular enough today, even more popular tomorrow, and genuinely useful.
And even the people who are using this today, I think, will be able to see immediately the picture of tomorrow and how they can very quickly be interacting with these different tools and surfaces, all rolled up and integrated. We have very strong first-class pathways to build with all the tools of the marketplace, whether it's Cursor, Codex, Antigravity, Windsurf, Copilot: any tool that leaves a forensic artifact file trace on your machine that can be configured and ingested is what we are going to want to be focused on here.
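The ingestion idea above could begin as a plain filesystem scan. A hedged sketch: the tool names and path patterns in the manifest below are illustrative guesses, since each tool's real trace files would need a vetted, configurable manifest.

```python
# Sketch of artifact-trace discovery for an agentic development terminal:
# walk a project directory for the files agent tools tend to leave behind.
# The patterns here are assumptions, not a verified per-tool manifest.

from pathlib import Path

# tool name -> glob patterns for artifacts it may leave in a project dir
ARTIFACT_PATTERNS = {
    "cursor": [".cursor/**/*.json"],
    "copilot": [".github/copilot-instructions.md"],
    "claude-code": ["CLAUDE.md", ".claude/**/*"],
}

def discover_artifacts(root: Path) -> dict:
    """Map each known tool to the artifact files found under `root`."""
    found = {}
    for tool, patterns in ARTIFACT_PATTERNS.items():
        matches = [p for pat in patterns for p in root.glob(pat) if p.is_file()]
        if matches:
            found[tool] = sorted(str(p.relative_to(root)) for p in matches)
    return found
```

The terminal could then normalize whatever it finds into one timeline, which is the "rolled up and integrated" part of the pitch.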
Chapter 31 — Building Computer 2.0 Vision
Computer 2.0 as a working model
This chapter makes the Computer 2.0 framing more explicit by describing software work as a conversation with systems that can reason, search, compose, and execute across multiple lanes at once. The public edition keeps the architecture of the claim and trims only the most transient model-specific texture.
Feb 11, 2026 at 10:36 PM
Big-picture, lucid, and conceptually central.
Computer 2.0 is presented as a practical operating model rather than a slogan.
The chapter is strongest when it ties abstraction directly to real workflow change.
The edited version keeps the conceptual center of the sourcebook intact.
Alright, this is hilarious. So, for Computer 2.0, right, for Software 2.0, this agentic-based workflow for creating software is never going to go away; it will always be on rails, going faster and faster and faster.
Like, it's a race to the bottom until we can put in some input and then basically get instantaneous, you know, repo-level activity happening. You can flesh out an entire web application, you can fork an entire company or whatever, right?
So, in an instant: that's the future state. And right now, we are basically locking in; we're like the people who would eventually go on to become hardcore Vim users, right?
Where all the people using Vim with their hotkeys and their keybinds and whatever the fuck, you know, they are so opinionated about the fact that theirs is the right way to do things, and they will never let it go. And I want to tap into the market of people who are getting born into that right now, because it is so true.
People who are getting addicted right now to vibe coding stuff and are having this sort of transcendent experience alongside me with GPT 5.3, 5.4, Opus 4.6, Gemini 3.1, etc., etc. I think, you know, we're never going back.
So that's why I really do want to invest in Computer 2.0, and I do want to invest in a very opinionated implementation of this stuff, where I can say that I was a first mover, that I was building something really cool and doing it open source. Obviously a lot of people right now are seeking VC funding, and I am at the point where I'm just trying to do this for visibility. Part of getting that visibility is this idea that I'm going to be so spunky, so American-spirit, so grassroots and ground-up in vibes and approach, to win over the hearts and minds of the ground-level hacker community, which I imagine is only going to grow and grow from here as we hit this critical mass of people realizing they need more tokens, more compute. So yeah, that's why I'm building this thing that is kind of a mix of an operating-system-level tweak, maybe a custom Linux distro, right?
Where it comes with best practice defaults, it's a fork off of something sensible, you know, it would support you know maybe the custom workflow that I'm looking for.
But I'm just looking at like, you know, whatever it has to be, right? Like, if uh if there needs to be a version of this where you know we build in like a Mac integration, right?
Like, people are buying Mac minis right now just to be able to install Claude Code. People will buy Mac minis just to be able to integrate with our shit too. Like, it doesn't matter.
Like, with all these Macintosh applications, there's nothing that's necessarily out of reach. And this is a prototype, built-in-public product with a very opinionated point of view on enabling a frictionless experience when it comes to agentic vibing.
It doesn't have to be for writing code, right? You can do anything. You can, there will be different canvases, different sort of interfaces that you can switch between, obviously.
And I really need to get granular with this and speak a little more with authority about the pain points of myself and others. But we really are going to, as with anything, jump into the absolute deluge and fire hose of software engineering. So that's the idea. Computer 2.0, baby. Rare signal. Computer 2.0 by Rare Signal.
Chapter 32 — Execution Sprint Under Constraint
Execution sprint under constraint
This chapter is retained only in compressed editorial form because the raw transcript combined live commercial strategy, home-life interruption, and emotional exposure in ways that did not belong in a public artifact. What remains is the useful part: pressure clarified the message, forced prioritization, and turned diffuse ambition into a concrete delivery sprint.
Feb 12, 2026 at 12:59 PM
Pressurized, tactical, and deliberately compressed.
The chapter's public value is the picture of execution discipline under a short clock.
Private family detail and deal-sensitive texture were removed because they were not essential to the thesis.
The surviving lesson is that urgency can sharpen a message when it is channeled into disciplined preparation.
Chapter 33 — AI as America's Democratic Equalizer
AI as a democratic equalizer
This chapter gives one of the clearest civic arguments in the manuscript: AI can widen democratic capability if it is treated as public power rather than a luxury product. The public edition leaves it largely alone because it is already one of the cleanest bridges between the open letter and the sourcebook.
Feb 12, 2026 at 2:49 PM
Civic, ambitious, and unapologetically American.
The chapter frames AI as a tool for broader participation and competence.
Its strongest mode is constructive patriotism rather than partisan reflex.
The edited version preserves one of the most publishable arguments in the set.
With AI, we are in the age of the meta-thinker, or the hyper-thinker: the idea that humans have an ability to break through barriers of cognitive understanding if only we could be extended such that the gaps were filled and we could traverse the limits of the context size of our brains and our fleshly physical limitations. If we could amplify our natural cognitive powers, then every American could be a superhuman thinker. And what that's going to mean for our nation is the continuation of our power, of our security, and of our authoritative place in the world as a bulwark against those who would seek to impose tyranny on the people.
AI is America's ultimate equalizer, a toolkit in the arsenal of justice and democracy. And a bottom-up, people-first future? There's nothing more American.
Chapter 34 — AI as learning tool debate
Learning still matters under AI
This chapter pushes back against the claim that AI-assisted building automatically destroys understanding. The public edition keeps the argument that tools can accelerate learning when used well, while reducing some of the raw back-and-forth texture of the original conversation.
Feb 13, 2026 at 1:04 PM
Debative, teacherly, and grounded in practice.
The chapter argues that assistance does not erase the need for judgment and depth.
Good tools can compress overhead and still leave room for real learning.
The edited version keeps the educational argument while making it less reactive.
I was thinking about this too for Common Sense 2026, AI in America. And you know what people say? They're not going to drill down and really learn how this shit works. But guess what?
I think that's absolute bullshit. I think that's coming from a pure place, okay, a pure place of, oh no, let's see.
Oh no, I did it the hard way, and now everyone else is going to be able to fucking dance down Easy Street and become as good as I am. If you ever fucking hear that, that level of, oh no, they're not going to have to pay their dues the same way I did: it's horseshit, dude. Oh, sure, yeah. Give me one second. One second. Getting back on track, let me get the transcript. Boop. All right, looking at this. Yeah, okay. What we have today is the equivalent of, let's see: if you were interested in chemistry, and you were interested in kind of getting dangerous with it, right, seeing what you can do, experimenting with different reactions. That's the natural course of curiosity for people who are interested in a specific thing: technology, chemistry, science, it doesn't fucking matter. You want to get into it, you want to test, you want to open the dangerous cabinet, and you want some kind of way to experiment without obviously blowing your face off. I think that's what AI is for young technologists who want to get into this, who maybe want to learn how to build software, operate the computer, operate on a digital level, knowing that that is of greater and greater consequence in our world. I think it's absolute horseshit to say, oh no, they're not going to learn that if you drop a semicolon here, the whole thing blows up. Guess what? The AI will never forget a semicolon, so shut the fuck up. Like, shut the fuck up. So yeah, it's the ultimate learning tool, and by the way, fucker, it's a gateway drug to actually learning more about how to program. Yeah, it is going to drill people down into learning how that shit works, because eventually they will need to get into the nitty-gritty. They won't want to just be doing top-level vibe coding and appreciating that kind of first-order level of solution that even the average layperson can get out by putting their thoughts down and going back and forth a few turns, right? Fuck you. No, this is beautiful.
This is a gorgeous new frontier of education, and there's nothing else to it. I will hear no other arguments. Period. The end.
Chapter 35 — Grassroots Empowerment Through Local AI
Grassroots empowerment through local AI
This chapter argues that local and small-scale AI access can widen human agency in practical, everyday ways. The public edition keeps the democratic through line while trimming domestic back-and-forth and overly intimate setup.
Feb 13, 2026 at 1:15 PM
Earnest, grounded, and oriented toward ordinary use.
The chapter treats AI as a practical empowerment tool, not an elite ceremonial object.
Local capability matters because it meets people where they actually live and work.
The public edition keeps the grassroots case while reducing private room-level texture.
Okay, so, elevator pitch: Rare Signal is a company that aims to give average, everyday Americans digital superpowers, right? Maybe we'll start there. Yeah, we'll start there. That's what everyone, I think, can get behind, right?
We have an AI moment, we have a lot of stuff shuffling around, a lot of people like making bets and building infrastructure. Like, it's gonna be this way. They're gonna use my AI model.
This is mine, mine, mine, mine, mine, not yours, no, mine, yeah. That shit. And I say no, because we are at a point where the small models of today, the ones you can run off your 16-gigabyte MacBook Air, all right, are as good as the models on chatgpt.com were maybe even this time last year, okay?
And people were getting a lot of shit done with that. Yeah, all right, we have a lot of proven ground-level workflows, all right.
In the future, I think, again, I said this before, but AI companies want you doing what we do at [company], which is throwing the biggest fucking models at the highest-volume, dumbest tasks that could be solved by the smallest, cheapest models for one-thousandth of the price, okay? They want you making that mistake and not right-sizing your AI spend. All this is enterprise shit. I don't care, I'm bored.
But, America, for America, you ask? What does America need? America has the 16-gigabyte MacBook Air.
My fellow Americans, if you are blessed by the Lord above, you may even have an RTX 4090 somewhere, okay, in your garage. And all right, there's so much you can do with that to run your life, for you and your community, your family, and the people you care about.
You can set that up with one fucking MacBook for your whole family. Like, we out the hood with this one, right?
[household_interruption]
I loved to just fucking make things, and that was very formative for me and made me who I am. It gave me a love of story and literature and narrative and this country, all this stuff, right?
Technology is so, so crucial for equality, not just in this country, but on this planet. Are you starting to see the picture I'm painting here?
Big Token, Big Token wants you sucking on the teat, paying for the subscription, obviously paying rent over a long period of time, until your customer lifetime value far exceeds what you could have spent purchasing a fucking GPU. Yeah, and okay, I don't want to completely say there's no reason to jump in on that stuff; right now, especially, it's very economical.
There are a lot of deals and bargains, and you need to be aware of the spread of solutions. But by God, AI literacy and the proliferation and implementation of AI are so crucial right now that I'm looking back at the nuclear age and how we maybe could have just done it a little bit better, and we could not be paying 300, 400, 500 fucking dollars a month on electricity bills.
We could have clean, free, unlimited energy for our AI and not be doing all this stuff, right? If we just had done it better, for God's sake, right?
But now we have a chance to kind of course correct, right? Turning point, USA, but for real this time, not for jokes, not for jokes. No, no.
And I want to empower my fellow Americans because, again, I want to target Americans. Obviously, the whole world is party to this message.
And I think, but the fact is, I need to have a conversation right now with my fellow Americans about what's going on in the world in this technology moment. They don't have the MacBooks in Somalia.
They do not have the MacBooks in Serbia. They have the MacBooks in America.
We need to talk about what a grassroots, ground-up implementation of this solarpunk utopia looks like, coming back around to that term, which I will claim and coin right now.
[household_interruption]
Dude, that's what we're building towards. That's what we have a shot at.
And we need to set shit up so that we're shooting for the fucking stars here and land on the moon somewhere, with a practical, beneficial next chapter of America, of human history.
And Big Token's going to Big Token. Nobody's going to stop Big Token at this point.
But we can push back and be a separate party: not just some group of people that AI is happening to, but people who are all participating. Because the idea is, the Manhattan Project sucked up a massive share of the world's scientists.
It's not a stretch to say that all of America's scientists were working on the Manhattan Project. They just didn't know what they were contributing to back then. They were just sucked up.
The government said, you need to help, right? And they did. Now this is that, but not, you know, to make a bomb.
It's to make the future with AI. And what does that look like for you and your community?
Don't let Big Token prescribe for you and your community, your family, what AI needs to do or what shape it needs to take. You need to decide for you what's going to make a difference.
I'm not even paying anything for this. I'm using trial periods right now. It's a crazy time right now. Like, right now, it's crazy.
I don't know how long I can keep saying that, but it's true. Look what you've done. Anyone can use that: my cluster, my app, my token. I'll have some pipeline people can use, yeah, exactly. But everything's open source; you can install it yourself.
[household_interruption]
We legitimately have a leg up because we legitimately have more resources than anyone else. Or if you don't have a MacBook, get one, save up, get one used. I don't care.
[household_interruption]
My hard drive would go bad, click, click, click, and it would mean I wouldn't have a computer for eight months, literally, until I could save up and get one again and be reunited with the internet. Like, is it starting to make sense?
Like, but it was a good computer that I got ultimately. And there are ways. I'm just saying, like, it's not, that's not a ceiling for anyone.
You can save up, you can get one from your uncle. Like, they're sloshing around, there's gonna be more, and more you can do with less.
Small, like, phone-based AI is the next thing, too. And I'll build solutions targeted for that, too, on-device computing, like that.
But as it stands, I have to target people who happen to have a fucking MacBook, 16 gig or 24 gig, because that's a class of compute power where you can do certain things: larger-scale summarization tasks, stuff like, here's literally all of my notes from college, summarize it up and let me ask questions of this massive corpus. And you can sit around and wait for that to get done.
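That summarize-all-my-notes workflow is essentially map-reduce summarization sized to local hardware. A minimal sketch, with the model call injected as a plain function so any local backend (llama.cpp, Ollama, an OpenAI-compatible server) could be plugged in; the chunk size and function names are assumptions.

```python
# Roll-up (map-reduce) summarization sketch: split a big corpus into chunks
# a small local model can hold, summarize each, then summarize the summaries.
# `summarize` is caller-supplied, so the backend is not hard-coded here.

from typing import Callable

def chunk(text: str, max_chars: int) -> list[str]:
    """Naive fixed-size split; a real version would cut on document boundaries."""
    return [text[i:i + max_chars] for i in range(0, len(text), max_chars)]

def rollup_summarize(corpus: str, summarize: Callable[[str], str],
                     max_chars: int = 8000) -> str:
    """Per-chunk summaries first, then one final reduce pass over the partials."""
    partials = [summarize(c) for c in chunk(corpus, max_chars)]
    if len(partials) == 1:
        return partials[0]
    return summarize("\n".join(partials))
```

On a 16-gigabyte machine the waiting the passage mentions happens in the map step; the reduce pass is cheap by comparison.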
And they're open source tools. You can pay me to use them, or you can download them and install them on your own GPU if you have it. No, no, I got it. And I'm so mad to hear.
I said this, I said this in my previous rant to my watch, my bracelet here. I'm so mad when I hear people say, like, it is such a tragedy that kids today won't have to learn programming the hard way and learn that when you miss a semicolon, your whole application blows up. Do you understand?
It's like the calculator thing. Exactly. There is no word, like, it's the beautiful educational utopia of the future.
So I don't want to hear about all that. Like, it's a shame they don't.
[household_interruption]
Okay. So we've got a point of view on AI. Open source. Targeting these specific bands of compute capabilities for local deployment for you and your family. Talking about, right?
Job application thing will be in there. A lot of other stuff I can't even get into right now because it would take too long. Let's go
Chapter 36 — Democratization Reflection
Democratization reflection
This short chapter takes media-level rhetoric about AI competition and brings it back down to the question of who should actually benefit from the technology. The public edition removes in-room chatter and private anecdotal texture so the chapter can function as a clean bridge back to the democratization thesis.
Feb 13, 2026 at 1:37 PM
Reflective, concise, and morally focused.
The chapter works by translating geopolitical noise into a question of ordinary human benefit.
Its strongest contribution is a return to the basic democratization argument.
The edited version pares away room-level context and keeps the central point.
Hilarious, the proliferation of artificial intelligence and the continuation of American hegemony through the grassroots ground-up adoption of AI in America. China ain't playing. Starts out like Obama. What? Starts out like Obama. So is that how Obama would come at it?
Obama hasn't really chimed in on AI yet, surprisingly, but it's funny. That is so funny. Can you put those screen on the button?
Yeah, I just wanted to say, my mom would have cried if in the year 2002, I was like, Mom, look what I set up on my mid-range little PC for you. I have your resume here. You're going to get some calls, all right?
She would have cried tears of joy at my wizardry. Do you understand? I'm going to put this power into the hands of every kid in America who's interested in this shit, and nobody's going to stop me.
And Rare Signal is how we're going to do it. All right, over and out. Love you.
Chapter 37 — Pressure, Focus, And Presentation Discipline
Pressure, focus, and presentation discipline
This chapter originally mixed real organizational turbulence with high personal intensity and household spillover. The public edition keeps only the durable lesson: moments of pressure reveal what matters, force sharper framing, and can push a builder to convert diffuse ideas into a tighter public artifact.
Feb 13, 2026 at 1:48 PM
Charged, ambitious, and edited toward professionalism.
The chapter is most useful as a record of focus under pressure, not as a window into private instability.
Presentation discipline emerges here as a real technical and strategic skill.
The edited version removes company turmoil and home-context spillover while preserving urgency.
Chapter 38 — Near-Term Obligations Versus Long-Term Architecture
Near-term obligations versus long-term architecture
This chapter has been rebuilt around a core strategic fork: how to handle immediate obligations without abandoning the larger architecture one actually wants to build. The public edition removes internal politics, leverage talk, and named coworkers so the remaining chapter reads as a thoughtful reflection on sequencing, optionality, and ambition.
Feb 13, 2026 at 2:21 PM
Strategic, pressured, and architecturally serious.
The chapter is strongest when it treats leverage as a sequencing problem rather than a negotiation performance.
Long-horizon build paths still require near-term discipline and realism.
The edited version preserves the strategic shape while stripping confidential texture.
Chapter 39 — AI Advocacy With Lived Stakes
AI advocacy with lived stakes
This chapter remains one of the more emotionally charged pieces in the sourcebook, but the public edition cools it slightly so it reads as conviction rather than overshare. The personal stakes remain visible without making intimate family material do too much work.
Feb 13, 2026 at 2:59 PM
Passionate, humane, and editorially cooled.
The chapter works best when lived stakes support the argument rather than overwhelm it.
Its strongest public value is linking AI to dignity, care, and expanded possibility.
The public edition keeps the emotional legitimacy while reducing overexposed detail.
And I want to talk directly to people who are sort of dissuaded from AI because of what's happening with the current economic environment, you know, our present circumstance, right? All the money men doing their money man thing.
I would suggest to you, my fellow Americans, that you fucking ignore that noise. Because just like the money men who were scrambling when the internet was invented, and just like the money men who will scramble when the tricorder is invented in the year 2050, the money men will always be scrambling. We need to look past that. If it takes us forever to get over the fact that the first movers and the people who brought us this technology are looking to profit from it, and looking to take a controlling stake in the initial outcome and implementation of this technology as they roll it out, then my brother, my sister, I don't know what to tell you.
That's simply the nature of business, and business is simply the nature of humanity. Business on the whole, obviously, is necessary. I won't even call it a necessary evil. But a lot of people are saying that Sam Altman is the devil, and Elon Musk too.
Like, I have nothing but reverence and respect for the efforts of everyone who brought us to this moment, just like I have nothing but reverence and respect for the people who bring us the technology to extend our lives, to deliver food and water, and medicine, and help and resources to human beings, to suffering consciousness that needs it. Every step that has ever been taken matters, and insofar as AI is going to be the transformative force of this century, shaping the next chapter of human history, I would like to thank the likes of Sam Altman. I would like to thank the likes of Elon Musk, Ilya Sutskever, Fei-Fei Li, Alec Radford, Demis Hassabis, Andrew Ng, John Jumper, and the enormous teams whose names will never even make headlines.
Like, thank you, Babbage. Thank you, Ada Lovelace. Thank you, Aristotle and Euclid, for fuck's sake. Thank you, Bayes and Laplace. I just want to take this moment and, like, I want to shake the average person.
I want to grab them and throttle them and ask them if they really think that it's important, you know, what everyone's kind of pissing and moaning about right now. And I'm sorry. I really do apologize if this comes across as like an attitude of like everyone needs to shut the hell up because I don't feel that way at all.
I understand like the way that I'm communicating, the way that it comes across sometimes it dances and glances towards feeling that way. Because, you know, as we run the gamut of different reactions to technology and transformative moments, you're going to get every single opinion, and it's important that those opinions are expressed and enter the public consciousness such that we can promote the good and discard the bad on our own personal levels.
Again, there is no amount of top-down brainwashing, or getting everyone on the same AI system so you can control the way that it influences responses, or whatever kind of thing people expect is going to slowly start to happen. That is probably actually not going to happen at all because it's way too observed, right? Like they would never, again, they would never get away with it.
So we need to think bigger, obviously. We need to, you know, again, I don't want people getting caught up and fixated on Big Token.
I want people getting caught up and fixated on little token, local token, community token, your personal village, and how AI can transform those landscapes for you on a personal level. She didn't have anyone. My dad, you know, he was suffering.
He was going through his own stuff. But my mom, she worked two to three jobs for more than 20 years.
She had no time to even sit down and enjoy anything. And she didn't have the time to go be ambitious and to put herself out there in any meaningful way.
And look, I'm sitting down, I'm looking at the fact that we're still in this old-world way of, like, people need to drop off job applications. It's all online now, so what are you going to do, right?
If you're not tech literate, you're not getting your application out there in your local area, to all the businesses who could be using your help, where you could be finding a way to make more money and get a better job, right? Get interviews, be moving and shaking, be mobile.
I think about how she struggled and how she suffered. And if I were to walk up to her one sunny Sunday morning and I said, Mom, look at my laptop. Look what I have.
It's my little, you know, they didn't have this, it wasn't this good when I was a kid, but if, you know, in today's terms, right? If I said, Mom, look at my little MacBook sixteen gig thing, it's sitting here and it's applying to jobs for you, Mom, it's customizing your resume for each and every job listing in our local area.
She would simply break down and cry, okay? So, this is a superpower, this is a force for good.
This is, we're talking about actual, real ground-level capabilities to help people and to put stuff out there that's going to raise the sea level and raise all ships. Do you understand? Let's not squander the moment.
That's my most important message to you right now, and my most important plea is that you just please use this technology. Do not banish it. Understand that it is a tool, just like the ability to split the atom.
What are you going to do with it? Are you going to give your village clean energy, clean intelligence, a watchful mind, a considerate assistant in a trying time, moving towards embodiment here in the next five to ten years, robots walking around your house, around your premises, tending to you, taking care of your mom, bringing her water and medication, and logging it all? Do I have to spell this out for you, America?
Do I really have to spell this out? I'm sorry, but I am literally begging you.
I will do anything for you, America, if you just understand that we are in this position. I remember when I was a kid, and a computer was my entire life, and the hard drive, the fucking hard disk drive, would start to click, and I would start to cry because I knew that time was limited.
You know, if I shut this down, I might not be able to boot it back up, and I'm not going to have a computer. I literally went through spells in my life as a kid where I didn't have a computer for eight months, over a year. So, do you understand where I'm coming from when I'm fucking so passionate and manic about this?
Do you understand the weight of the moment? Do you understand that if we let people go on thinking this way, if we let this be another nuclear moment and we have some Three Mile Island or whatever the fuck, if we have another Chernobyl and people's lives are ruined, then people will be casting away the key to the future. We can't let that happen.
We need to build this from the ground up. I don't give a shit that people are mad that data centers are springing up so they can make the next best version of this, the next best smaller version of this that can run on your phone instead, you fuckers. Like, do you get it? Please, America, please.
Chapter 40 — Grassroots Tech Accelerationism Discussion
Grassroots accelerationism with civic purpose
This chapter argues for moving faster on technology precisely because ordinary people need the benefits, not because the future belongs only to institutions that can already afford it. It remains one of the cleaner supporting chapters and needed only a light editorial polish.
Feb 13, 2026 at 4:16 PM
Forward-leaning, civic, and conversational.
The chapter ties speed to public empowerment rather than elite advantage.
Its stronger mode is civic accelerationism, not abstract disruption language.
The edited version keeps it as a supportive argument with minimal intervention.
People express, like, this incredible tension between tech accelerationism and safety concerns, but I think tech accelerationism from a grassroots, ground-up level is always going to rain down. You know, unlike the irony of trickle-down economics, trickle-down technologics is very, very real. And as the mainline, sort of hot, molten core of technology continues to advance and iterate, what sheds off on the outside around it is incredibly usable by everyday people, and that will continue to be the case to an increasing degree, more and more, especially way more than we have today.
We should expect that what today seems to require an entire data center is going to be runnable on consumer home labs or even, you know, God willing, eventually like a one-off GPU, even. Like, right, we can be talking about optimizations on both sides to get to where we need to be with that.
But again, personally, I see technology as the answer, and I am frustrated when people try to catch people out when they say that, like, oh, you're, you know, you just don't understand. I think I do understand. I understand that it's the responsibility of local and ground-level communities, as it always has been, you know, as we the people, right?
To understand our role in this, right? I hate this idea that we're sitting back and letting it happen to us, that there's no possibility of unity, no possibility of rallying around these problems that are going to underpin the next century, the next millennium, rather, is what I mean.
But we gotta make it, man. Either we're all gonna make it, or we're not gonna make it.
I'm I'm in the camp, like, can we please try to make it? Can we please seize the day and make it here?
Chapter 41 — Consumer Hardware And Workflow Thresholds
Consumer hardware and workflow thresholds
This chapter turns time-sensitive hardware chatter into a more durable discussion of workflow thresholds, memory bands, and what level of machine ordinary people actually need to do meaningful AI work. The public edition keeps the practical hardware guidance and removes stray podcast-like clutter.
Feb 13, 2026 at 5:21 PM
Technical, concrete, and more timeless than the raw source.
Hardware guidance matters most when it connects directly to real workflow classes.
The chapter argues for practical thresholds, not aspirational maximalism.
The public edition trims external-conversation residue so the advice lands more cleanly.
Rare Signal targets what we call the 16-24-36 gigabyte range of consumer hardware, which is sort of medium-high to high, and I mean, I'm going to be honest with you, it goes extremely high-end these days. You can easily spend ten thousand dollars if you want five hundred and twelve gigabytes today, which seems to be the kind of state-of-the-art ceiling.
And that's obviously the Mac Studio, which you cannot buy any sooner than six weeks from today, it's so backordered for everything. That being said, we are actively experimenting with capabilities at lower levels, but while we're open to our opinions and viewpoints changing on this, we currently believe it's more worthwhile to pursue at least sixteen gigabytes. Sixteen is still incredibly resource-starved, to be clear. It is what we consider the bare minimum entry point to be able to run some of these workflows as long-tail workflows that you sort of leave running over the course of maybe a day, whereas others are getting richer analysis back in the course of one to two hours, right?
So, that is the difference between, for example, a MacBook Air 16 gig and a 3090 or a 4090: roughly an order of magnitude, like 10x faster, right? So, in one hour on the GPU, you can get what the MacBook's throughput will give you in 10 hours, right?
So, that's why we generally target those upper bands with our solutions, because we think it's entirely possible that, you know, the average American can at least achieve 16.
So, we try to prove out our solutions on as low as 16 gigs, but we're waiting for obviously the shortage to turn into a glut at least briefly, so we can maybe move up to the next level here and consider maybe 24 gigs as the bare minimum entry point because things will really get interesting when we can say that is the case.
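The 10x gap between a 16-gig MacBook and a 3090/4090-class GPU is simple throughput arithmetic. A back-of-envelope sketch, where the corpus size and tokens-per-second figures are illustrative assumptions, not benchmarks:

```python
# Back-of-envelope version of the throughput comparison in the chapter.
# The tokens-per-second numbers below are ASSUMED for illustration only.

def hours_to_process(total_tokens: int, tokens_per_second: float) -> float:
    """Wall-clock hours to push a corpus through a local model."""
    return total_tokens / tokens_per_second / 3600

corpus_tokens = 1_800_000   # e.g. a large pile of college notes
macbook_tps = 50.0          # assumed MacBook-class throughput
gpu_tps = 500.0             # assumed 3090/4090-class (~10x faster)

print(hours_to_process(corpus_tokens, macbook_tps))  # 10.0 hours
print(hours_to_process(corpus_tokens, gpu_tps))      # 1.0 hour
```

The same arithmetic explains the "long-tail workflow" framing: at the low end of the memory band, jobs don't become impossible, they just move from the one-to-two-hour class into the leave-it-running-all-day class.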
Chapter 42 — David's AI Independence Manifesto
AI independence manifesto
This closing chapter returns to the manuscript's governing instinct: people should build their plans around increasing access to capable, local, and practical intelligence rather than wait for permission from gatekeepers. The public edition keeps the final note forceful but slightly more composed, so it lands as a manifesto rather than a rant.
Feb 13, 2026 at 6:45 PM
Resolute, public-facing, and summative.
The ending reasserts independence, local capability, and democratic access as the manuscript's core values.
Its best tone is steady conviction rather than sheer volume.
The edited version preserves the closing force while making it easier to republish confidently.
A lot of what I'm going to be talking about in my open letter, Common Sense 2026: AI in America, is going to be scoffed at by technical people. They can find every possible thing to say about me: that I'm not thinking big enough, that I'm thinking too big, that I'm radical, that I'm not radical enough. I don't really care anymore. I'm speaking in terms of what I feel needs to be advised and prescribed. Now I'm waiting for any other experts to step up and chime in here.
I'm going to simply start to affect my influence on the overall signal, the incredible signal that's about to come down and come through in terms of the impact of AI on our society. I will be a simple drop in the bucket.
So don't even think about it, Empy. A drop in the bucket compared to what's going to come from here, just like, you know, just like America's origins were a drop in the bucket compared to where America went from there.
So we need to inflect here, America. We need to inflect. I don't really care anymore, and I don't think you should care anymore.
I don't think you should care what I think. Okay, take that to heart. If there's one thing that I can tell you, maybe more important than anything else, it is that you should think for yourself in this moment. We're going into a weird moment of techno-cerebral spells that you can cast on yourself and society to give yourself superpowers, but also to open Pandora's boxes of torment. You know, it's a double-edged sword completely, and that means you need to fit the solutions to your needs, and for God's sake, that's what AI is for.
That's what it does, and again, don't get wrapped up in the idea that the money men are going to try to shove their products down your throat. No, they're going to make amazing products that you cannot say no to, and that's a gift, a gift to you and me and to society, because we get to benefit from that downstream.
It's simply the fact of this technology, and you might say, yeah, eventually it'll break away to emergent capabilities only possible in massive data centers, and that there will be a K-shaped separation in intelligence capabilities. But again, I don't know how many times I need to fucking say it.
If we stopped at GPT-3.5 Turbo and that was the state of the art forevermore, we could still build Utopia. Again, we could still take care of mom and dad when they get old.
We could still do this, we can still do that. I don't want to spell it out every time we get around to this point, but yeah, we kept innovating from there, and we will continue to do so.
So make sure your plans fucking incorporate that. I don't know what else to say.
You're going to have a lot more intelligence that fits on a lot less silicon in the future. Straight up. Straight up fact. Plan for that. That's why we're targeting the range of compute class that we are with our solutions, with our specific grade of 16 to 24 to 36 gigabytes of memory for our particular class of solution, right? And that's what it's about. That is why.