Garden Your Attention
When I was young, my mother told me something I hated: “If you stop reacting, they’ll stop trying.”
I wasn’t bullied—I had friends, laughed at lunch tables, loved school. But probably like every kid in public school, I faced low-stakes friction: the classmate who mocked your shoes to entertain others, the friend who ‘playfully’ needled your insecurities to test social boundaries.
Harmless? Mostly. Annoying? Relentlessly.
I hated my mother's advice—yet it has held true for the rest of my life: reactions amplify distractions, whether in middle school hallways or algorithm-driven feeds.
Today, this principle transcends public school dynamics. The battleground is your screen.
History Repeats Itself
Observe the rhythm:
- 1844: Skeptics branded the telegraph a frivolity—it collapsed continents into moments.
- 1876: Critics dismissed Bell’s telephone as a parlor trick. Within decades, it rewired human connection.
- 1920: Early radio broadcasts were deemed trivial noise. They soon became civilization’s pulse.
- 1980s: Personal computers were dismissed as hobbyist toys. Now they underpin existence.
- 2000s: Once questioned, the internet now dictates global elections. The true disruptor? Not weapons—mass-distributed freedom of speech.
Each revolution followed the same trajectory: dismissal → experimentation → transformation.
It makes sense in hindsight, but why do we resist innovation? I believe there are a few leading reasons:
- “This works fine” syndrome (Google searches feel “good enough”).
- Zero clout (No one cares if you use AI better than your coworkers).
- Fear (I refuse to accept this is better than me at some of the things I do because I don't know my meaning without it).
Contrast this with 2021’s crypto fervor: speculative gains created community validation. You talk about a token with your friends; you all invest and make money; you develop a community of shared interests. You feel productive and derive meaning because you're playing games to win money.
AI lacks this allure. It works through whispers: no Lamborghini memes, just incremental empowerment. Yet in five years, regret will sting less over missed crypto trends than neglected "AI literacy".
Here lies the pivot: Distribution isn’t king anymore—curation is.
All personal content of the future will be curated content. Honestly, it should have always been that way, but clickbait became the dominant force on the internet.
I'll break it down from first principles: the soul of a tech product's algorithm is to serve a user content they will interact with, constantly. The algorithm's success is thus defined by whether you interact with the application. It's a pass/fail grade: if you do interact, even out of pure disagreement, it counts as a success.
When this is the soul of an algorithm, even half a reaction to something you don't like will yield more of that content. This is the essence of ragebait. The catch is that things that enrage you don't feel immediately dismissible, and that's why they win your attention.
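The pass/fail scoring described above can be sketched as a toy feedback loop. Everything here is hypothetical and deliberately simplistic; no real platform's recommender is this crude, but the incentive is the same:

```python
def recommend(scores):
    """Serve whichever topic currently has the highest engagement score."""
    return max(scores, key=scores.get)

def simulate(n_posts, scores, reacts_to):
    """Pass/fail grading: ANY interaction, even an angry one, counts as success."""
    feed = []
    for _ in range(n_posts):
        topic = recommend(scores)
        feed.append(topic)
        if topic in reacts_to:   # a reaction of any kind...
            scores[topic] += 1   # ...buys more of the same content
    return feed

# Hypothetical user: only ever reacts to ragebait, and only out of anger.
scores = {"gardening": 5, "ragebait": 6}
feed = simulate(10, scores, reacts_to={"ragebait"})
# The feed converges to 100% ragebait, despite the user hating it.
```

The algorithm never learns *why* you reacted; the half-reaction and the wholehearted one are indistinguishable in the score.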
Cognitive Gardening Framework
AI is a cognitive gardener, and it needs to be used as such. Curation isn’t passive—it’s an act of defiance against chaos.
- Summarize strategically. Feed books, articles, news headlines to your favorite LLM with: “Distill this for someone obsessed with [X].”
- Ask like a child. “Why do stars twinkle?” → “How do mRNA vaccines mimic nature?” → “Explain quantum superposition using cooking metaphors.”
- Keep asking stupidly until your question is answered. “wait that makes no sense?” → “wait but why it be like that?” → “I don't understand any of what you just said...” → "wtf???" (all literal prompts I've used with an LLM more than once)
Copy. Paste. Ask Like An Idiot. Comprehend.
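The summarize-strategically pattern is just string assembly before the paste. A minimal sketch; the function name and template wording are my own, not any canonical prompt:

```python
def distill_prompt(text, obsession, questions=()):
    """Wrap pasted text in the 'distill for someone obsessed with X' frame."""
    followups = "".join(f"\nAlso answer, simply: {q}" for q in questions)
    return (
        f"Distill this for someone obsessed with {obsession}.{followups}\n\n"
        f"---\n{text}\n---"
    )

# Paste the result into your favorite LLM, then keep asking stupidly.
prompt = distill_prompt(
    "Full text of an article on mRNA vaccines...",
    obsession="cooking",
    questions=["How do mRNA vaccines mimic nature?"],
)
```

The point of the frame is that *you* pick the obsession and the childish follow-ups; the model only fills in the middle.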
Yes, AI can make us lazy. But so did calculators, spellcheck, and GPS. The trap isn’t the tool—it’s outsourcing curiosity. When we let TikTok think for us, we atrophy. When we weaponize LLMs to ask ‘stupid’ questions, we rebuild muscles weakened by algorithmic spoon-feeding. This isn’t a shortcut—it’s cognitive reconditioning: a disciplined recalibration for an overstimulated age. Spend less time swiping through TikToks and more time swiping through your obsessions.
Truth often terrifies before it emancipates. Copernicus’s heliocentric model, by claiming the universe doesn’t revolve around our divine selves, threatened humanity’s cosmic significance. Outrage followed. We cling to comfortable falsehoods (algorithmic echo chambers, viral nihilism) because unlearning feels like loss.
AI disrupts this inertia. You don't have to read every line of Kant’s Critique or a paper on neural network architecture. Just paste the damn context and ask your question, stupidly if necessary.
Generations of Knowledge Gathering
- Pre-internet: 45+ minutes hunting library stacks for fragments.
- Search era: 10+ minutes sifting SEO-optimized noise.
- AI era: 5 seconds in 5 steps:
- Cmd+A (Highlight All Text on Page)
- Cmd+C (Copy)
- Cmd+V (Paste)
- Synthesize or Ask Direct Question
- Move On
I often think about this line from the film Inception:
“An idea is like a virus—resilient, highly contagious. The smallest seed of an idea can grow. It can grow to define or destroy you.”
Manifestation isn’t mystical—it’s the compound return on focused attention.
(I notice it's ironic to quote a movie premised on using hallucinations to plant a false truth in someone's mind. I address counterarguments below, but I'll say here that unlike Inception’s deceptive dreams, AI hallucinations are navigable, if you learn to steer them. You can't learn to steer them if you dismiss them all as bad.)
My Daily Mandate
- Filter, mute, or permanently delete one distraction today. Mute a conversation, mute an account, block that person, hide stories and posts from people you don't care about. Reclaim your minutes.
- Pose one “childish” question to an LLM about anything you're curious about. Find a news article that interests you and ask yourself: “What's one question I directly want answered from this article?”
- Repeat until curiosity becomes reflex.
From Sumerian scribes carving clay tablets to Gutenberg’s press, humans have always fought to distill signal from noise. AI is just the latest chapter in this 5,000-year war for attention. The stakes are higher, but the principle remains: tools don’t master us unless we surrender.
Counterarguments: No Tool is Flawless
I get it. AI gets a lot of hate and still will. I'll probably get hate now for advocating that you use it, and that's okay.
Here are some likely counterarguments and my rebuttals:
1. Overestimating AI’s Reliability and Ignoring Bias
The essay assumes AI tools like ChatGPT can reliably distill complex information without significant errors. However, AI hallucinations, inherent biases in training data, and oversimplification of nuanced topics (e.g., Kant’s philosophy or quantum physics) are glaring issues. The author briefly mentions “steering” hallucinations but doesn’t address how non-experts can discern inaccuracies. Reliance on flawed outputs risks propagating misinformation, undermining the essay’s premise of AI as a trustworthy “filter.”
REBUTTAL: Every tool demands mastery. The printing press birthed pamphlets of truth and propaganda. Did we abandon it? No—we sharpened literacy. AI’s “hallucinations” are not bugs but invitations: learn to interrogate. The essay’s core thesis is intentionality, not blind faith. A chef doesn’t blame the knife for a bad cut; they refine their grip. Your job isn’t to trust the machine—it’s to demand clarity, poke holes, and think. Hallucinations crumble under scrutiny. Distraction thrives on passivity; curation dies without rigor.
Learning to use AI isn’t about appeasing algorithms—it’s about cognitive self-defense. Just as you wouldn’t swallow unlabeled pills from a pharmacy, you shouldn’t ingest unvetted content from feeds. AI gives you the scalpel to dissect the chaos, but you decide what to cut out.
2. Privacy, Copyright, and Ethical Concerns
The framework advises users to copy/paste articles into AI tools, ignoring critical questions about data privacy (e.g., feeding sensitive or proprietary content into third-party platforms) and copyright infringement. Many publishers prohibit automated scraping, and users may unknowingly violate terms of service. The essay’s utilitarian approach glosses over these ethical and legal pitfalls.
REBUTTAL: Knowledge has always been plundered. Medieval monks copied manuscripts without permission. Students highlight textbooks they don’t own. The internet itself is a library of borrowed code. The essay’s framework isn’t a manifesto for piracy—it’s a call to prioritize your curiosity over bureaucratic inertia. Revolutions aren’t forged by lawyers. When the payoff is reclaiming your mind, obsessing over terms of service is like refusing to breathe until air is FDA-approved.
3. Oversimplification of Attention Economics
The essay reduces modern distraction to algorithmic manipulation, ignoring systemic drivers like economic precarity, mental health crises, or workplace demands. Framing the solution as individual “curation” via AI ignores structural forces (e.g., gig economy pressures, social media monopolies) that shape attention. Personal responsibility alone cannot counter systemic exploitation of human cognition.
REBUTTAL: Systems amplify human frailty, but they don’t invent it. The printing press didn’t create greed—it exposed it. Yes, capitalism monetizes attention, but you still choose the currency. The essay’s focus on individual agency isn’t naivety—it’s pragmatism. You can’t dismantle Silicon Valley’s engines today, but you can refuse to fuel them. Every muted notification, every pasted article, every “stupid” question is a micro-rebellion. History’s tides turn when enough individuals redirect their drops.
Every muted distraction isn’t just a personal victory—it’s a strike against attention capitalism. When millions reclaim their focus, platforms lose power. AI curation isn’t selfish; it’s solidarity. The more we demand better tools, the harder corporations must work to earn our time.
4. Risk of Self-Imposed Echo Chambers
While criticizing algorithmic echo chambers, the essay ironically advocates for AI-driven curation based on personal “obsessions.” Over-reliance on self-selected filters could isolate users in intellectual silos, stifling exposure to diverse perspectives. The author’s framework (“Distill this for someone obsessed with [X]”) risks replacing Silicon Valley’s engagement metrics with self-curated myopia.
REBUTTAL: Obsession ≠ ignorance. A physicist obsessed with quantum mechanics still reads poetry. The essay’s “filter” isn’t a wall—it’s a lens. When you demand AI distill articles for your current obsession, you’re not narrowing—you’re deepening. Depth creates bridges: mastery in one field reveals patterns in others. TikTok’s algorithm imposes chaos; intentional curation orchestrates it. The difference? You’re the conductor, not the audience.
For the AI-Wary
- Challenge the perspective: after exploring [X], ask for counterarguments. Prod it to argue impartially, or even with a deliberate bias; you'll learn something either way.
- Cross-check summaries: review the original sources alongside the AI's summaries. Verify for yourself how well it handles complex concepts you already know.
Conclusion
AI won’t save us from the darkness of technology. But neither will Luddism. Salvation lies in the messy middle: tools that amplify our agency, not amputate it.
Choose your quest:
1) cultivate your own meaning in life
2) farm for someone else
Gardening your attention isn’t about full control—it’s learning to thrive in the weeds. The goal isn’t to ‘win’ against distraction, but to choose, daily, what deserves to root in your soil.