<?xml version="1.0" encoding="utf-8"?><feed xmlns="http://www.w3.org/2005/Atom" ><generator uri="https://jekyllrb.com/" version="4.4.1">Jekyll</generator><link href="https://brianyu43.github.io/feed.xml" rel="self" type="application/atom+xml" /><link href="https://brianyu43.github.io/" rel="alternate" type="text/html" /><updated>2026-04-12T15:10:46+00:00</updated><id>https://brianyu43.github.io/feed.xml</id><title type="html">Brian’s Blog</title><subtitle>A personal blog by Brian.</subtitle><entry><title type="html">I Built Everything I Could Think Of</title><link href="https://brianyu43.github.io/essay/building/2026/04/11/1-i-built-everything-i-could-think-of.html" rel="alternate" type="text/html" title="I Built Everything I Could Think Of" /><published>2026-04-11T15:00:00+00:00</published><updated>2026-04-11T15:00:00+00:00</updated><id>https://brianyu43.github.io/essay/building/2026/04/11/1-i-built-everything-i-could-think-of</id><content type="html" xml:base="https://brianyu43.github.io/essay/building/2026/04/11/1-i-built-everything-i-could-think-of.html"><![CDATA[<blockquote>
  <p>In a single month, I built twenty repositories. A RISC-like OS from NAND gates. A Bitcoin arbitrage engine. A quantitative trading system. An FPS zombie game. A Touhou-style bullet hell. A mobile antivirus. A WiFi speed lawsuit app. An automatic textbook translator. A macroeconomic dashboard. A knowledge graph of my own conversations. An agent-based simulation of SpaceX’s IPO. A fine-tuned biology tutor on cloud A100s.</p>

  <p>Most of them are dead. This is a post about what I learned from the wreckage.</p>
</blockquote>

<hr />

<h2 id="1-nand-to-pong">1. NAND to Pong</h2>

<p>I wanted to understand a computer from the ground up — literally from NAND gates. Stack logic into an ALU, wire it into a CPU, build a RISC-like instruction set, bootstrap an OS, and run Pong on it.</p>

<p>I got further than I expected, but nowhere near where I wanted. The gap between a working ALU and a functioning terminal is enormous. File systems are a civilization’s worth of abstraction away from transistor logic. I vibe-coded most of it, which means I got the result without fully earning the understanding. That still bothers me.</p>

<p>The lesson wasn’t about hardware. It was about how deep “simple” things actually go.</p>

<h2 id="2-bitcoin-arbitrage">2. Bitcoin Arbitrage</h2>

<p>The idea was clean: spot price discrepancies across exchanges, execute automatically, pocket the spread. I set up a server, ran paper trades, and watched it crash. Repeatedly. The system never stayed alive long enough to validate the concept in practice.</p>

<p>The economics made sense on a whiteboard. The engineering did not make sense in production. Latency, API rate limits, session management, reconnection logic — every boring infrastructure problem I hadn’t studied became a wall. I came away knowing that arbitrage isn’t a strategy problem. It’s a plumbing problem.</p>

<h2 id="3-alpacastock">3. AlpacaStock</h2>

<p>This was supposed to be my serious quant infrastructure. Alpaca’s API for US equities, local-first architecture, a proper pipeline from signal generation to risk management to order routing. I designed the package structure, the data layers, the daemon lifecycle.</p>

<p>Then I hit the algorithm wall. Not the engineering — the alpha. I had a system that could execute any strategy cleanly, but I didn’t have a strategy worth executing. Building the factory was the easy part. Knowing what to manufacture was the hard part.</p>

<h2 id="4-personal-hts--openclaw">4. Personal HTS + OpenClaw</h2>

<p>I built a web-based home trading system on top of Korea Investment &amp; Securities’ API, modeled after Toss Securities’ UI. It actually worked — live quotes, order execution, portfolio tracking. Then I connected OpenClaw so I could buy and sell stocks in natural language.</p>

<p>It was technically impressive and practically useless. The AI could parse “buy 10 shares of Samsung” just fine. But I couldn’t find a single scenario where talking to a chatbot was faster or safer than tapping two buttons on a screen. The interface was a solution looking for a problem that didn’t exist. I shut it down.</p>

<h2 id="5-macroeconomic-dashboard">5. Macroeconomic Dashboard</h2>

<p>KDI, OpenDART, Statistics Korea — I pulled public APIs and built a dashboard to track macro trends. GDP, interest rates, corporate filings, demographic shifts, all in one place.</p>

<p>This one was quietly useful. No drama, no failure, no pivot. Just a clean window into numbers that move slowly. Not everything needs to be a platform.</p>

<h2 id="67-wifi-speed-app--mobile-antivirus">6–7. WiFi Speed App / Mobile Antivirus</h2>

<p>I built these. They worked. There isn’t much more to say. Sometimes you make things just to prove you can make things. The WiFi app compared your actual speed against what your carrier promised — confrontational by design. The antivirus was a standard mobile security tool. Neither changed how I think about building.</p>

<h2 id="8-left-4-dead-alone">8. Left 4 Dead, Alone</h2>

<p>I tried to build a single-player FPS inspired by Left 4 Dead 2. Zombie AI that follows the player. Stage transitions. Background music. Gun skins. Special zombie logic — boomers, hunters, tanks. All vibe-coded.</p>

<p>It got surprisingly far. The mechanics worked. Zombies chased you. Stages loaded. Music played. But the skins and animations were garbage, and in a first-person shooter, that’s not a cosmetic problem — it’s the entire experience. I couldn’t push past the uncanny valley of procedurally generated character models. The project died looking like a PS1 fever dream.</p>

<h2 id="9-touhou-danmaku">9. Touhou Danmaku</h2>

<p>I tried to build a Touhou-style bullet hell — six stages plus an Extra, the full format. BGM was pulled straight from Embodiment of Scarlet Devil. For everything else — character sprites, boss portraits, dialogue scripts — I pointed a computer-use agent at GPT’s web interface, had it generate everything, and uploaded the assets directly.</p>

<p>Structurally, it was Touhou. Stages loaded. Bosses appeared. Spell cards fired. Dialogue played between phases. The format was there. The soul was not. The danmaku patterns had no rhythm — no visual poetry, no escalation, just projectiles filling the screen without intent. The bosses had names but no presence. The sprites were clean enough to render but too generic to remember. It was spoiled Touhou — same shape, wrong taste. Thirty seconds of play and you could tell no one who loved the original had been involved.</p>

<h2 id="10-automatic-textbook-translation">10. Automatic Textbook Translation</h2>

<p>My information theory textbook was in English, and I was tired of context-switching between languages while studying. So I pointed Codex at it and told it to translate the whole thing into Korean.</p>

<p>It took almost an hour. The equations didn’t survive the round trip — LaTeX formatting broke in unpredictable ways. The introductory sections were readable. The technical sections were not. I ended up with a document that took longer to repair than a manual translation would have taken to produce in the first place.</p>

<hr />

<h2 id="what-i-actually-learned">What I Actually Learned</h2>

<p>Ten projects. Ten different domains. Not a single one became something I use daily.</p>

<p>The pattern is obvious in retrospect. Every project where I didn’t already understand the domain — arbitrage mechanics, alpha generation, game graphics, typesetting — the AI filled the gap with confident garbage. The code compiled. The outputs looked plausible. But plausible isn’t correct, and in domains where I couldn’t tell the difference, I was just generating waste at high speed.</p>

<p>Vibe coding has a brutal failure mode: it feels like progress. You’re shipping commits, seeing outputs, watching things move on screen. But if you don’t know what “right” looks like, you’re building a factory that produces defective parts — and you’re the last person who’d notice.</p>

<p>The projects that worked — the HTS, the macro dashboard — were the ones where I already knew what I was looking at. I understood the domain. The AI accelerated my hands, not my judgment. That’s the difference.</p>

<p><strong>Part 2 continues with what happened when I stopped building for humans and started building for machines.</strong></p>]]></content><author><name></name></author><category term="essay" /><category term="building" /><summary type="html"><![CDATA[In a single month, I built twenty repositories. A RISC-like OS from NAND gates. A Bitcoin arbitrage engine. A quantitative trading system. An FPS zombie game. A Touhou-style bullet hell. A mobile antivirus. A WiFi speed lawsuit app. An automatic textbook translator. A macroeconomic dashboard. A knowledge graph of my own conversations. An agent-based simulation of SpaceX’s IPO. A fine-tuned biology tutor on cloud A100s. Most of them are dead. This is a post about what I learned from the wreckage.]]></summary></entry><entry><title type="html">Then I Started Building for Machines</title><link href="https://brianyu43.github.io/essay/building/2026/04/11/2-then-i-started-building-for-machines.html" rel="alternate" type="text/html" title="Then I Started Building for Machines" /><published>2026-04-11T15:00:00+00:00</published><updated>2026-04-11T15:00:00+00:00</updated><id>https://brianyu43.github.io/essay/building/2026/04/11/2-then-i-started-building-for-machines</id><content type="html" xml:base="https://brianyu43.github.io/essay/building/2026/04/11/2-then-i-started-building-for-machines.html"><![CDATA[<blockquote>
  <p>At some point, the user I was building for stopped being a person and started being an agent. I didn’t plan this. It just happened — project after project, the interface kept pointing away from screens and toward protocols.</p>

  <p>This is the story of that shift, and why I eventually walked away from it too.</p>
</blockquote>

<hr />

<h2 id="11-codex-on-a-phone">11. Codex on a Phone</h2>

<p>I spent four days on this. The idea: run Codex from your phone. Conversations are stored as <code class="language-plaintext highlighter-rouge">.jsonl</code>, which is easy enough to parse. Reading history on mobile wasn’t the hard part. The hard part was everything else — real-time sync, sending commands remotely (which technically worked), and keeping a persistent server connection that didn’t drop every time the phone went to sleep.</p>
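<p>The parsing step really was the easy part. A minimal sketch of it, with field names that are my guesses rather than the actual Codex session schema:</p>

```python
import json

def load_session(path):
    """Parse a JSONL session file: one JSON object per line.

    The keys used here ("role", "content") are illustrative; the real
    Codex session format may use different field names.
    """
    messages = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if not line:
                continue  # tolerate blank lines between records
            messages.append(json.loads(line))
    return messages
```

<p>Everything after this — sync, remote commands, the persistent connection — is where the four days went.</p>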

<p>The app functioned. But it functioned the way a house of cards functions — impressive until someone breathes. I lacked the server infrastructure intuition to make it robust. Four days of building taught me that “it works on my desk” and “it works in production” are separated by an ocean I hadn’t learned to cross.</p>

<h2 id="12-clizationer">12. Clizationer</h2>

<p>This one started from a genuine frustration: web GUIs are not built for agents. A button that says “Submit” means something to a human looking at a screen. To an LLM, it’s noise. I wanted to build a compiler that takes any website — or any API — and converts it into a CLI that agents could call directly.</p>

<p>The vision was ambitious. The execution was not. I hadn’t thought through the core mechanics deeply enough — how to represent a website’s interaction model as structured data, how to handle state, what “compiling the web” actually means at a technical level. The prototype ended up crawling HTML and extracting links. Which is to say, it ended up being a scraper with a fancy name.</p>

<p>I learned something useful, though: having the right problem doesn’t mean you have the right solution. The instinct was correct — the gap between GUI and agent is real. But instinct without architecture is just a README with no code behind it.</p>

<h2 id="13-kiwoom-trade-mcp-server">13. Kiwoom Trade MCP Server</h2>

<p>I wrapped Kiwoom Securities’ trading API as an MCP server — Model Context Protocol, the standard for letting AI clients call external tools. Resources, tools, structured responses, the whole spec.</p>

<p>The surprising lesson: it wasn’t hard. Once you understand the protocol, turning an API into an MCP server is mechanical work. The abstraction layer that lets an AI buy stocks through a standardized interface took less time to build than the HTS frontend that let a human do the same thing.</p>
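<p>To show what I mean by mechanical, here is the shape of it as a toy registry (not the real MCP SDK): each API endpoint becomes one named tool with a description the client can introspect, and the protocol layer just dispatches calls by name.</p>

```python
# Toy sketch of the pattern, NOT the real MCP SDK. An MCP server is,
# at its core, a registry mapping tool names to functions plus metadata
# a client can list. Wrapping an existing brokerage API is mechanical
# because each endpoint becomes one registered tool.
TOOLS = {}

def tool(name, description):
    def register(fn):
        TOOLS[name] = {"fn": fn, "description": description}
        return fn
    return register

@tool("get_quote", "Fetch the current price for a ticker")
def get_quote(ticker):
    # In the real server this line would call the Kiwoom REST API;
    # the ticker and price here are placeholders.
    return {"ticker": ticker, "price": 74300}

def dispatch(name, **kwargs):
    """What the protocol layer does when a client issues a tool call."""
    return TOOLS[name]["fn"](**kwargs)
```

<p>Once that skeleton exists, each additional endpoint is one more decorated function. Hence: mechanical.</p>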

<p>That asymmetry stuck with me. Making software agent-readable might actually be easier than making it human-readable. The hard part was never the protocol. It was knowing whether the agent should be doing the thing at all.</p>

<h2 id="14-conversation-collector">14. Conversation Collector</h2>

<p>I built a pipeline to automatically extract ChatGPT conversations — the idea being that past context could be injected into future sessions. A long-term memory system, essentially.</p>

<p>It worked. I just didn’t need it. The conversations I wanted to remember, I remembered. The ones I forgot, I forgot for a reason. Automated context retrieval sounds transformative in theory. In practice, most of what I said to ChatGPT last Tuesday is not worth retrieving.</p>

<h2 id="15-knowledge-map--taste-map">15. Knowledge Map / Taste Map</h2>

<p>This was more personal. I took my entire conversation history, embedded it, clustered it, and laid it out as a 2D graph — a map of my own interests, weighted by frequency and semantic proximity.</p>
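<p>The machinery is simpler than it sounds. A self-contained sketch of the embed-and-project step, with crude bag-of-words counts standing in for the model embeddings the actual project used:</p>

```python
import numpy as np

def embed(texts):
    """Crude bag-of-words vectors; a stand-in for real embeddings."""
    vocab = sorted({w for t in texts for w in t.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    X = np.zeros((len(texts), len(vocab)))
    for row, t in enumerate(texts):
        for w in t.lower().split():
            X[row, index[w]] += 1
    return X

def layout_2d(X):
    """Project centered vectors onto their top two principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:2].T  # one (x, y) coordinate per text

texts = [
    "stock price alpha signal",
    "order routing stock risk",
    "protein folding gene expression",
    "gene regulation protein binding",
]
coords = layout_2d(embed(texts))
```

<p>Clustering the projected points and weighting nodes by frequency is the rest of the job; the mirror is mostly just this projection, repeated at scale.</p>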

<p>The graph worked. I could see the clusters: finance here, biology there, infrastructure in the corner, existential rambling scattered everywhere. It was a mirror made of vectors.</p>

<p>But once I looked at it, I didn’t need to look again. The map confirmed what I already knew about myself. It didn’t reveal anything I hadn’t felt. The tool was technically sound and existentially redundant.</p>

<h2 id="16-mirofish">16. Mirofish</h2>

<p>SpaceX IPO speculation was everywhere, so I built an agent-based simulation. Twenty agents — regulators, Elon, retail investors, meme consumers, institutional funds — debating across forty rounds whether SpaceX would go public, and when.</p>

<p>Most simulations converged on the same conclusion: IPO delayed. The agents, playing their roles faithfully, kept finding reasons to postpone. Regulators raised concerns. Elon resisted dilution. Retail investors got distracted. It was a surprisingly believable model of institutional inertia.</p>

<p>Whether it was <em>accurate</em> is a different question. But as a way to stress-test a thesis by forcing multiple perspectives to argue, it was more interesting than reading five analysts say the same thing.</p>

<h2 id="17-arxiv-daily-pipeline">17. arXiv Daily Pipeline</h2>

<p>Every day, 1,900 papers hit arXiv. I built a pipeline that ingested all of them, ran a first-pass filter for relevance, summarized the abstracts of survivors, and published the results to a website.</p>
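<p>The first-pass filter was what made the volume tractable. A toy version, assuming papers arrive as plain dicts; the real pipeline's filter was presumably smarter than a keyword gate, but the point of a cheap first pass is the same — you don't want to pay for 1,900 LLM calls a day:</p>

```python
def first_pass(papers, keywords, threshold=1):
    """Keep a paper if its title or abstract mentions enough
    watch-list keywords. A deliberately crude relevance gate:
    survivors go on to the (expensive) summarization stage."""
    survivors = []
    for p in papers:
        text = (p["title"] + " " + p["abstract"]).lower()
        hits = sum(1 for k in keywords if k.lower() in text)
        if hits >= threshold:
            survivors.append(p)
    return survivors
```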

<p>I used it for about a week. It was genuinely convenient — a curated daily feed of papers I might care about, with summaries I could scan in minutes. But then I made a decision that killed the project from a completely different direction.</p>

<p>I decided to focus entirely on biology and pharmacology. And once I committed to that, I didn’t want summaries anymore. I wanted to read the actual papers, slowly, one at a time. The pipeline was a tool for breadth. I was choosing depth.</p>

<hr />

<h2 id="the-pattern-again">The Pattern, Again</h2>

<p>Seven projects. Every one of them built for machines — agents, protocols, pipelines, automated reasoning. And every one of them either worked but didn’t matter, or mattered but I couldn’t finish it.</p>

<p>The lesson from Part 1 was: AI can’t substitute for domain knowledge. This round added a corollary: <strong>AI can’t substitute for knowing what’s worth doing.</strong></p>

<p>I could build an MCP server in a day. I could wire agents to simulate an IPO debate. I could map my own mind as a graph. None of it was hard. All of it was fast. And almost none of it lasted, because speed without direction is just expensive wandering.</p>

<p>The arXiv pipeline was the turning point. I built a system to process 1,900 papers a day, and then I realized I’d rather read one paper well than skim fifty. That’s not a technical insight. It’s a personal one.</p>

<p><strong>Part 3 is about where I landed.</strong></p>]]></content><author><name></name></author><category term="essay" /><category term="building" /><summary type="html"><![CDATA[At some point, the user I was building for stopped being a person and started being an agent. I didn’t plan this. It just happened — project after project, the interface kept pointing away from screens and toward protocols. This is the story of that shift, and why I eventually walked away from it too.]]></summary></entry><entry><title type="html">Now It’s Biology</title><link href="https://brianyu43.github.io/essay/building/biology/2026/04/11/3-now-its-biology.html" rel="alternate" type="text/html" title="Now It’s Biology" /><published>2026-04-11T15:00:00+00:00</published><updated>2026-04-11T15:00:00+00:00</updated><id>https://brianyu43.github.io/essay/building/biology/2026/04/11/3-now-its-biology</id><content type="html" xml:base="https://brianyu43.github.io/essay/building/biology/2026/04/11/3-now-its-biology.html"><![CDATA[<blockquote>
  <p>Twenty repositories in thirty days, and now I’m writing code for one domain. Not because I ran out of ideas. Because I finally figured out which ideas were mine.</p>
</blockquote>

<hr />

<h2 id="18-localbio">18. LocalBio</h2>

<p>I tried to fine-tune a biology tutor. Not a general-purpose model that happens to know biology — a model with a specific pedagogical voice. Patient explanations. Gentle correction of misconceptions. Hints before answers. “I don’t know” when it didn’t know.</p>

<p>The base was Qwen 10B, quantized, running on cloud A100s. I burned through $30 in free credits across seven training runs. Each run, I adjusted the dataset, tuned the prompts, tweaked the reward signals. Each run, the output stayed flat. The model didn’t get better at teaching. It just got better at pattern-matching my formatting.</p>

<p>Then the infrastructure fell apart. Spot instances interrupted mid-training. A100 availability dried up. Checkpoints corrupted. I was fighting two wars at once — one against the model’s stubbornness, one against the cloud’s unreliability — and losing both.</p>

<p>I stopped. Not because the idea was wrong, but because the approach was premature. I didn’t yet know enough about what makes a good biology explanation to encode it as training signal. I was trying to teach a model something I hadn’t fully articulated to myself.</p>

<h2 id="19-biology-researcher-network">19. Biology Researcher Network</h2>

<p>I pulled data from Google Scholar and mapped Korean biology researchers into a network graph. Who publishes with whom. Which institutions cluster together. Where the bridges are between subfields.</p>

<p>The graph visualization itself was fine — nodes, edges, force-directed layout, the usual. But the real value wasn’t the output. It was the process. Crawling publication data forced me to read hundreds of abstracts I wouldn’t have found otherwise. Building the edges forced me to understand how subfields relate. The graph was a byproduct. The education was the product.</p>

<h2 id="20-bionews">20. BioNews</h2>

<p>I built a biology news aggregator. It collected, it summarized, it displayed.</p>

<p>And then I realized I could just ask ChatGPT the same questions and get better answers with more context. The aggregator added a layer of automation to something that didn’t need automating. Some problems are better solved by a conversation than a pipeline.</p>

<hr />

<h2 id="what-twenty-repositories-taught-me">What Twenty Repositories Taught Me</h2>

<p>Here’s the uncomfortable truth about vibe coding.</p>

<p>When I didn’t understand a domain, AI didn’t fill the gap — it wallpapered over it. The Bitcoin arbitrage system produced plausible-looking code that crashed in production because I didn’t understand exchange infrastructure. The FPS game had working zombie logic but unusable character models because I didn’t understand 3D animation. The fine-tuned biology model reproduced my formatting without learning my pedagogy because I hadn’t defined what good teaching looks like.</p>

<p>In every case, the AI was doing exactly what I asked. The problem was that I was asking the wrong things, and I didn’t know enough to notice.</p>

<p>The projects that worked — really worked, not just compiled — were the ones where I brought the domain knowledge and the AI brought the speed. The HTS system worked because I understood trading interfaces. The macro dashboard worked because I understood what the numbers meant. The Kiwoom MCP server worked because I already knew both the API and the protocol.</p>

<p><strong>AI as a factory is only as good as the blueprint you hand it. And the blueprint has to come from you.</strong></p>

<p>This is the part that the “anyone can build anything now” narrative gets wrong. Yes, anyone can generate code. But generating code for a domain you don’t understand is not building — it’s producing artifacts. It’s the software equivalent of AI-generated 20-second videos: technically impressive, semantically empty, and indistinguishable from noise if you don’t already know what signal looks like.</p>

<p>Twenty repositories. The ones I’m proud of are the ones where I knew what I was doing before I opened the terminal.</p>

<hr />

<h2 id="why-biology">Why Biology</h2>

<p>I studied pharmacology. Biology isn’t a new interest — it’s the oldest one. Every other domain I wandered into over the past month was a detour. Finance, agent protocols, game development, infrastructure tooling — I learned something from each of them, but none of them were <em>mine</em> the way biology is.</p>

<p>The arXiv pipeline was the moment it clicked. I built a system to skim 1,900 papers a day and realized I didn’t want to skim. I wanted to sit with one paper about gene regulation or protein folding and actually understand it. Not summarize it. Understand it.</p>

<p>So that’s where I am now. Writing code for biology. Reading papers slowly. Building tools that help me learn, not tools that help me produce.</p>

<p>The repositories will keep coming. But they’ll be about one thing. Machine learning applied to biological questions — not because it’s trendy, but because it’s the intersection where I actually know what “right” looks like. Where I can tell the difference between signal and garbage. Where the AI accelerates my understanding instead of replacing it.</p>

<p>Twenty projects in thirty days taught me that the most productive thing I can do with AI is to use it in the one place where I don’t need AI to tell me if the answer is correct.</p>

<p>That place, for me, is biology.</p>]]></content><author><name></name></author><category term="essay" /><category term="building" /><category term="biology" /><summary type="html"><![CDATA[Twenty repositories in thirty days, and now I’m writing code for one domain. Not because I ran out of ideas. Because I finally figured out which ideas were mine.]]></summary></entry><entry><title type="html">The Door That Wouldn’t Open</title><link href="https://brianyu43.github.io/essay/2026/04/11/the-door-that-wouldnt-open.html" rel="alternate" type="text/html" title="The Door That Wouldn’t Open" /><published>2026-04-11T15:00:00+00:00</published><updated>2026-04-11T15:00:00+00:00</updated><id>https://brianyu43.github.io/essay/2026/04/11/the-door-that-wouldnt-open</id><content type="html" xml:base="https://brianyu43.github.io/essay/2026/04/11/the-door-that-wouldnt-open.html"><![CDATA[<blockquote>
  <p>I’ve wanted to do this for four years. Not vibe coding. Not building twenty repositories in a month. This — applying deep learning to drug discovery. The difference is that four years ago, I couldn’t. This is about what changed.</p>
</blockquote>

<hr />

<h2 id="the-sentence">The Sentence</h2>

<p>In 2023, Jensen Huang said something at GTC that I haven’t been able to shake:</p>

<p><em>“Biology is arguably the most important engineering in our time, and it’s fundamentally an information science problem.”</em></p>

<p>I was a pharmacy student when I heard this. Not a computer science student who happened to be interested in biology — a pharmacy student who’d spent years memorizing drug mechanisms, pharmacokinetics, receptor binding profiles. I understood the biology. What I didn’t understand was the math behind the sentence.</p>

<p>I tried. I opened deep learning papers. I watched lecture series. I downloaded PyTorch tutorials. Every time, I hit the same two walls: the mathematics I hadn’t studied, and the code I couldn’t write. Linear algebra wasn’t intuition for me — it was notation I could parse but not think in. Python wasn’t a tool — it was a foreign language where I could read signs but not hold conversations.</p>

<p>So I closed the tabs, went back to pharmacology, and told myself I’d come back later.</p>

<p>That was four years ago.</p>

<hr />

<h2 id="four-years-of-closed-tabs">Four Years of Closed Tabs</h2>

<p>I want to be honest about what those four years felt like, because the narrative of “I was always destined to do this” would be a lie.</p>

<p>I wasn’t building toward anything. I was a pharmacy student doing pharmacy things. The interest in computational drug discovery didn’t go away — it sat in the background like a browser tab I’d pinned but never revisited. Every few months I’d see a paper about AlphaFold or a new molecular generation model, feel a pang of something between excitement and frustration, and close the tab again.</p>

<p>The frustration wasn’t about intelligence. It was about access. I could read a paper’s introduction and understand <em>why</em> the problem mattered — I knew what ADMET failure meant for a drug candidate, I knew why polypharmacy interactions were dangerous, I knew the clinical stakes. But the moment the paper shifted to “we parameterize the molecular graph as…” I was locked out.</p>

<p>It’s a specific kind of helplessness: understanding the destination but not the road. Knowing that the bridge between pharmacology and machine learning existed, seeing other people cross it, and not being able to find the on-ramp.</p>

<hr />

<h2 id="the-internship">The Internship</h2>

<p>Last year, during a lab rotation, something shifted.</p>

<p>The lab was working on cocrystal prediction — whether two molecules would form a cocrystal when combined. The approach was straightforward by ML standards: a CNN that takes SMILES strings as input, trained on a public dataset, and outputs a probability. There was an existing codebase. A GitHub repo. Real code solving a real chemistry problem.</p>

<p>I opened it in Cursor.</p>

<p>This is the part that’s hard to explain to someone who hasn’t experienced it. For four years, code had been a wall. I could look at it, but I couldn’t <em>read</em> it — not in the way you read something and understand the author’s intent. Cursor changed that. Not because it wrote the code for me, but because it could explain what each block was doing, in context, in response to my specific questions.</p>

<p>I spent a week with that repo. Not building anything new — just reading. Following the data pipeline from raw SMILES to tensor representation. Understanding why the convolutional layers were shaped the way they were. Asking “why this activation function?” and getting answers I could actually evaluate against my chemistry knowledge.</p>
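<p>The front of that pipeline — the part that turns a molecule string into something a CNN can convolve over — is worth seeing in miniature. This is a generic character-level SMILES encoding, not that lab's exact code:</p>

```python
import numpy as np

# A typical front end for a CNN over SMILES: map each character to an
# integer index, pad to a fixed length. The character set here is a
# small illustrative subset, not a complete SMILES vocabulary.
CHARSET = sorted(set("CNOPSFIclBr=#()[]@+-123456789"))
CHAR_TO_IDX = {c: i + 1 for i, c in enumerate(CHARSET)}  # 0 = padding

def encode_smiles(smiles, max_len=64):
    """Map a SMILES string to a fixed-length integer vector."""
    idx = np.zeros(max_len, dtype=np.int64)
    for pos, ch in enumerate(smiles[:max_len]):
        idx[pos] = CHAR_TO_IDX.get(ch, 0)  # unknown chars become padding
    return idx

# Ethanol ("CCO"): two carbons, one oxygen, then zero padding.
vec = encode_smiles("CCO", max_len=8)
```

<p>Everything the network learns sits downstream of a decision like this one — which is exactly the kind of choice I could suddenly interrogate.</p>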

<p>For the first time, the Methods section of a paper wasn’t a wall. It was a map.</p>

<p>I didn’t build anything from that experience. I didn’t publish anything. I didn’t add a line to my CV. But I walked out knowing that the door I’d been pushing against for four years wasn’t locked anymore. The tools had changed. The barrier between “understanding the problem” and “understanding the solution” had gotten thin enough to break through.</p>

<hr />

<h2 id="the-detour">The Detour</h2>

<p>Then I passed the pharmacist licensing exam, and something unhinged happened.</p>

<p>With the exam behind me and AI coding tools in my hands, I went on a building spree. Twenty repositories in thirty days. A Bitcoin arbitrage engine. An FPS zombie game. A Touhou-style bullet hell. A macroeconomic dashboard. An agent-based simulation of SpaceX’s IPO. Everything I could think of, as fast as I could think of it.</p>

<p>I’ve written about this elsewhere — what worked, what didn’t, and why most of it died. The short version: AI can accelerate your hands, but it can’t substitute for domain knowledge. The projects that worked were the ones where I already knew what “right” looked like. The ones that failed were the ones where I was generating confident garbage at high speed without realizing it.</p>

<p>But the building spree did something I didn’t expect. It burned through the novelty of “I can build anything” fast enough that I had to confront a harder question: “What should I build?”</p>

<p>And the answer, once the noise cleared, was obvious. It had been obvious for four years.</p>

<hr />

<h2 id="going-back-to-the-beginning">Going Back to the Beginning</h2>

<p>I stopped building random things and started studying.</p>

<p>Not coding — studying. The drug discovery pipeline, end to end. Not the version you learn in pharmacy school, where you memorize approved drugs and their mechanisms. The version that matters for computational approaches: how targets are selected through omics data, how hits are identified through virtual screening, how leads are optimized through structure-activity relationships, and how candidates fail — overwhelmingly — at the ADMET and toxicity stage.</p>

<p>This was the education I’d been missing. Pharmacy school taught me what happens after a drug exists. It didn’t teach me where the computational bottlenecks are in making one. I didn’t know that ADMET failure accounts for the majority of late-stage drug candidate attrition. I didn’t know that multi-drug interactions beyond pairwise combinations are essentially uncharted territory in existing databases. I didn’t know that the gap between a molecular property prediction model and an actionable clinical tool is wider than most ML papers acknowledge.</p>

<p>I know now. And knowing the problem space properly — not just the biology, not just the code, but where they meet and where they fail to meet — is what makes the difference between building something useful and generating another artifact.</p>

<hr />

<h2 id="what-deep-learning-actually-is-now-that-i-can-see-it">What Deep Learning Actually Is, Now That I Can See It</h2>

<p>Here’s what I understand now that I didn’t understand four years ago.</p>

<p>Deep learning applied to drug discovery isn’t a single technique. It’s a constellation of approaches mapped onto different stages of the pipeline. Graph neural networks for molecular property prediction. Generative models for <em>de novo</em> molecule design. Sequence models for protein structure and function. Diffusion models for structure-based drug design. Each one addresses a specific bottleneck, with specific data requirements, specific failure modes, and specific gaps between benchmark performance and clinical utility.</p>

<p>Four years ago, this looked like one monolithic field I couldn’t enter. Now it looks like a landscape with distinct regions, some well-explored, some barely touched. And I can see — because of the pharmacology, not in spite of it — which regions matter most and which are overcrowded.</p>

<p>The polypharmacy interaction space, for instance. Pairwise drug-drug interactions are relatively well-studied. But the moment you move to three or more concurrent drugs — which is the reality for most elderly patients, most chronic disease patients, most of the people I’ll see across a pharmacy counter — the data becomes sparse, the models become uncertain, and the clinical tools become essentially nonexistent.</p>
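<p>The sparsity isn't an accident of curation; it's combinatorics. Even a modest formulary makes the higher-order space explode:</p>

```python
from math import comb

# Why interaction data beyond pairwise is sparse: the number of drug
# combinations grows far faster than any database can be populated.
n_drugs = 20                 # a modest formulary for one patient population
pairs   = comb(n_drugs, 2)   # 190 pairwise interactions to document
triples = comb(n_drugs, 3)   # 1,140 three-drug combinations
quads   = comb(n_drugs, 4)   # 4,845 four-drug combinations
```

<p>Each added concurrent drug multiplies the space again, while the number of documented cases stays roughly flat. That mismatch is the gap.</p>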

<p>That’s not a gap I identified by reading ML papers. It’s a gap I know exists because I studied pharmacology for six years.</p>

<hr />

<h2 id="where-i-am-now">Where I Am Now</h2>

<p>I’m a pharmacist. I graduated a month ago. I haven’t stood behind a counter yet.</p>

<p>I’m also someone who has spent the last several months going deeper into machine learning for drug discovery than I ever thought I’d be able to. Not because I suddenly became a better mathematician or a better programmer — but because the tools changed, and the barrier that kept me out for four years dissolved fast enough that I could finally cross.</p>

<p>I want to contribute to what I think is the most important intersection in science right now: computation and biology. Not at the surface level — not “I used ChatGPT to summarize a paper” — but at the level where you understand both the biological question and the mathematical machinery well enough to know when the model is wrong and why.</p>

<p>Jensen Huang was right. Biology is becoming a computational science. The question is who gets to participate in that transformation. For a long time, the answer was: people with CS degrees and access to GPU clusters. I think that’s changing. I think domain experts — pharmacists, biologists, clinicians — who can now learn the computational tools are going to see things that pure ML researchers miss. Because they know what the numbers are supposed to mean.</p>

<p>I have a long way to go. The mathematics I skipped four years ago still needs to be learned — properly, not vibed. The engineering intuition that separates a prototype from a production system still needs to be built. The gap between “I can read a GNN paper” and “I can design a novel architecture” is real, and I’m closer to the first end than the second.</p>

<p>But the door is open now. And I’m not closing the tab this time.</p>]]></content><author><name></name></author><category term="essay" /><summary type="html"><![CDATA[I’ve wanted to do this for four years. Not vibe coding. Not building twenty repositories in a month. This — applying deep learning to drug discovery. The difference is that four years ago, I couldn’t. This is about what changed.]]></summary></entry><entry><title type="html">The AI Era, the Pissing Contests, and the Next New World</title><link href="https://brianyu43.github.io/essay/ai/2026/04/08/the-ai-era-the-pissing-contests-and-the-next-new-world.html" rel="alternate" type="text/html" title="The AI Era, the Pissing Contests, and the Next New World" /><published>2026-04-08T15:00:00+00:00</published><updated>2026-04-08T15:00:00+00:00</updated><id>https://brianyu43.github.io/essay/ai/2026/04/08/the-ai-era-the-pissing-contests-and-the-next-new-world</id><content type="html" xml:base="https://brianyu43.github.io/essay/ai/2026/04/08/the-ai-era-the-pissing-contests-and-the-next-new-world.html"><![CDATA[<blockquote>
  <p>Five years from now, everyone uses AI. Machines run on APIs. Spreadsheets run on Claude. Kiosks are fully automated. Productivity? Who knows. People scroll Reels for eight hours a day. Everything feels easy, and everyone is arguing about nothing.</p>

  <p>This essay started with “what happens to the economy?” and ended somewhere near “what happens to us?”</p>
</blockquote>

<hr />

<h2 id="1-deflation-no-an-asymmetric-economy">1. Deflation? No. An Asymmetric Economy.</h2>

<p>When AI handles operations, customer service, and administration, marginal costs collapse. The cost of labor, inventory management, and order processing behind a single cup of coffee all plummet. Supply-side deflation pressure is real.</p>

<p>But the real question is: where does the surplus go?</p>

<p>If it accumulates as corporate profit, you get asset inflation (real estate, equities) alongside real-economy deflation — a dual structure. If it’s redistributed through wages or subsidies, prices might stay surprisingly stable. What’s most likely isn’t pure deflation but an <strong>asymmetric economy</strong>: prices for physical goods and services fall, prices for attention, emotion, and scarce experiences rise, and asset prices move on their own separate logic entirely.</p>

<p>When people pour their time into content consumption, attention becomes the ultimate scarce resource. The advertising-subscription-creator economy swells as a share of GDP. Meanwhile, felt consumption is “free content + cheap stuff,” and price indices face perpetual downward pressure.</p>

<h2 id="2-everyone-becomes-a-telecom-company">2. Everyone Becomes a Telecom Company</h2>

<p>When AI flattens the implementation layer, most businesses converge toward the same structure as telecom carriers. Infrastructure is identical. Technical differentiation is negligible. You compete on pricing plans, bundling, marketing, and distribution.</p>

<p>“Our tech is better” stops working. Your competitor asks the same AI to optimize, and gets the same result.</p>

<p>Engineering shrinks to skeleton crews — what took fifty people, two to five now handle through AI orchestration. The bottleneck shifts to sales and relationships. But unlike telecoms, there’s no physical infrastructure (cell towers, cables) acting as a barrier to entry. Instead of oligopoly, you get hyper-competition: thousands of companies with zero margin, scrambling over nothing.</p>

<p>It settles one of two ways. Either platform concentration — a handful of AI infrastructure owners become the carriers, everyone else becomes a reseller. Or brand-and-community competition — where technology is identical, “I use this because of who made it” becomes the only differentiator.</p>

<h2 id="3-the-geeks-will-weep">3. The Geeks Will Weep</h2>

<p>The last twenty years were the golden age of “if you can code, you’re king.” Technical skill was scarce. AI equalizes implementation, and that scarcity evaporates.</p>

<p>It’s not just about jobs disappearing. It’s about the foundation of identity crumbling. “I’m someone who understands and builds complex things” — that’s the core of geek culture. When the market value of that ability collapses, what follows is an existential crisis, not a career pivot.</p>

<p>What makes it worse: the new rules of the game are precisely what geeks have spent their entire lives avoiding. Relationship-building. Small talk. Reading emotions. Sales. The world suddenly demands the one thing they were never wired for.</p>

<p>Smart people in crisis behave differently. They can analyze their own situation with painful precision — “I understand exactly why this happened, and I can’t change it.” Sophisticated grievance narratives emerge online. High-functioning depression spreads silently. Inside organizations, cynical sabotage becomes a quiet drag on progress.</p>

<p>Winning and losing start to split along temperament rather than ability. You could learn to code. But learning to smile comfortably in front of strangers is a matter of rewiring thirty years of personality. For those who believed in meritocracy, this feels like the world breaking its promise.</p>

<h2 id="4-the-pissing-contests">4. The Pissing Contests</h2>

<p>The median response isn’t apathy. It’s noise.</p>

<p>When you can no longer prove your existence through what you’ve made, all that’s left is what you’ve said. Not what you produced, but what you argued, what you criticized, whose side you took. Your opinions become the only evidence that you exist.</p>

<p>AI has solved everything that has a clear answer. So the only territory left for humans is <strong>arguing about things that have no answer.</strong> Is AI-brewed coffee real coffee? Is that kiosk UX humane? Is AI-written text real writing? Everything becomes a debate. Not because the questions matter, but because debating is the last activity that feels like being alive.</p>

<p>And people will deliberately avoid using AI in these arguments. If you ask Claude, it gives you an answer, and the fight ends. But the fight can’t end — the fight is the point. So people argue with emotion, with anecdote, with “trust me, I’ve been through it.” Not using AI becomes a badge of authenticity. “I don’t hide behind machines.”</p>

<p>Sartre’s “hell is other people” maps precisely onto this structure. It was never about bad people — it was about being trapped in a world where you can only define yourself through others’ eyes. When AI takes over <em>doing</em>, all that’s left is <em>being seen</em>. Every human interaction becomes a cry of “notice me.”</p>

<p><strong>The real cost of the AI era won’t be subscription fees. It’ll be the friction cost of pissing contests.</strong> AI multiplies efficiency by ten; humans burn nine of the ten on micro-politics.</p>

<h2 id="5-ai-is-the-printing-press-so-whats-the-religion">5. AI Is the Printing Press. So What’s the Religion?</h2>

<p>Gutenberg’s printing press democratized information. Once anyone could read the Bible, interpretive authority fractured. The Reformation erupted. Hundreds of denominations splintered. They fought endless pissing contests over whose reading was correct, until the losers said, “Forget it, we’re leaving,” and boarded ships for the New World.</p>

<p>AI democratizes capability. The structure is identical.</p>

<p>AI isn’t the religion itself. It’s the mirror. The printing press reflected “the Word of God” to everyone; AI reflects “human ability” to everyone. The religion is the question that follows: <strong>“Then what are humans for?”</strong> In the sixteenth century, the central question was your relationship with God. In the twenty-first, it’s your relationship with the machine.</p>

<p>New denominations are already forming. Pure Humanism — “No AI, hands only, that’s what’s real.” An Amish revival for the twenty-first century. Full Integration — “Merging with AI is evolution.” Transhumanism. Pragmatic Moderates — “Use it as a tool, but protect your identity.” Most people claim to be here, but the boundary keeps blurring.</p>

<p>And between these factions: more pissing contests.</p>

<h2 id="6-london-didnt-fall-but-this-time-is-different">6. London Didn’t Fall. But This Time Is Different.</h2>

<p>After the misfits sailed for the New World, London thrived. With the noisy dissenters gone, those who remained focused on commerce and built the British Empire.</p>

<p>But the analogy breaks down on closer inspection. London had India. Africa. China. An inexhaustible exterior to extract from. That external engine sustained the empire even as its soul hollowed out.</p>

<p>Future Earth has no exterior. AI has squeezed out every efficiency. Markets are saturated. Population growth has stalled. People scroll Reels and wage pissing contests. An empire with no colonies to exploit is just an expensive retirement home.</p>

<p>And critically, the ones leaving this time are the geeks. The Mayflower carried farmers and carpenters — people who could produce on arrival. The Mars ship carries the people who build systems, control environments, and orchestrate AI. It’s not misfits leaving. It’s the engine leaving.</p>

<p>This isn’t London. It’s Rome. London had extraction to sustain centuries of dominance. Rome collapsed into internal politics the moment expansion stopped. Bread and circuses. AI is the mercenary army. Reels is the Colosseum.</p>

<p>The more likely scenario is darker still: the spaceship seats are allocated by capital and power. The geeks aren’t passengers; they’re crew. Mars isn’t a New World — it’s a gated community at planetary scale. And the people left on Earth don’t even realize they’re living in a colony. They’re too busy with their pissing contests to notice.</p>

<h2 id="7-what-if-fusion-energy-changes-everything">7. What If Fusion Energy Changes Everything?</h2>

<p>Here’s one more variable. If fusion energy matures and quantum computing gets serious resource allocation, energy costs approach zero.</p>

<p>Free energy changes everything. Every stage of production — extraction, transport, processing, manufacturing — runs on energy at its core. Desalination becomes free. Food production converges on free. Recycling becomes perfect. Material scarcity itself could vanish. AI plus unlimited energy plus quantum computing means “production” as a concept loses meaning.</p>

<p>But the pissing contests don’t stop. They intensify. When material scarcity disappears, only one scarcity remains: recognition, attention, status. You can’t manufacture that with a hundred fusion reactors.</p>

<p>History offers a parallel: aristocratic society. People with no survival concerns spent their days in court — competing over etiquette, jockeying for rank, fighting over who sits closer to the king. Versailles was a purpose-built arena for pissing contests. Louis XIV designed it that way deliberately, channeling the nobility’s rebellious energy into ceremonial one-upmanship.</p>

<p><strong>All of humanity becomes aristocracy. Reels becomes Versailles.</strong></p>

<p>The majority wage status wars in the palace. A minority, freed from material concern, pursue something genuinely deep. Even with infinite energy, human attention remains finite. What you spend that finite attention on becomes the defining choice of your life.</p>

<hr />

<h2 id="epilogue">Epilogue</h2>

<p>The final bottleneck of the AI era is not technology. Not compute. Not energy.</p>

<p>It’s human ego. And no model can optimize that.</p>

<p>Those who can step out of the pissing contests and protect their own attention — physically or mentally — are the ones who will reach the next New World.</p>

<p>The spaceship isn’t a rocket. It’s self-awareness.</p>]]></content><author><name></name></author><category term="essay" /><category term="ai" /><summary type="html"><![CDATA[Five years from now, everyone uses AI. Machines run on APIs. Spreadsheets run on Claude. Kiosks are fully automated. Productivity? Who knows. People scroll Reels for eight hours a day. Everything feels easy, and everyone is arguing about nothing. This essay started with “what happens to the economy?” and ended somewhere near “what happens to us?”]]></summary></entry></feed>