AI has always been a topic close to my heart. My first real "wow" moment came years ago when I was working in tech and saw a machine learning algorithm distinguish between cats and dogs—it was a glimpse into the possible.
With the launch of GPT-5, it’s amazing to think how, from such humble beginnings, today’s AI capabilities have soared far beyond anything I could have imagined.
Today, I’d like to share some of my reflections on how AI is altering the work of knowledge workers, especially in investment research, which is the focus of my own practice.
With the boundaries defined, let’s take a step back and start by exploring 1) what current AI excels at and 2) where it still stumbles. As in any team, understanding each other’s strengths and weaknesses is key to effective collaboration.
What AI Excels At
The things AI excels at are probably not the skills we’d want our children to devote their youth to.
To quote Charlie Munger: “If I knew where I was going to die, I’d avoid going there.”
AI’s magic is that it never gets bogged down with human emotions. If you’ve ever managed or worked on a team, you’ll know where I’m coming from—AI never gets tired, never complains, and never deceives (at least, not intentionally—we’ll get to hallucinations later). And best of all, the cost is often a fraction of a human’s—sometimes less than a third, depending on the specific task.
AI is phenomenal at tasks with clear right or wrong answers. Think of Go, for example—a classic use case. The rules provide black-and-white clarity, and AI’s rise here has been nothing short of legendary. AlphaGo Zero started off playing absolutely random moves against itself, but in just three days, it could defeat the version that once bested Lee Se-dol 100-0. Within three weeks, it outperformed AlphaGo Master, which had won 60 consecutive games against top professionals. By day 40, it was at the top of the game—literally.
Programming is another powerful example: the code either works or it doesn’t, and there’s no arguing with the results. Back in April, Google CEO Sundar Pichai shared that more than 30% of Google’s code is now generated by AI—a stat that still makes me stop and think. The company even released best practice guidelines for engineers to incorporate AI into their workflows.
To keep this from getting too long, I’ll quickly sum up:
LLM-based AIs are currently incredible at dealing with structured tasks that have clear, measurable outcomes, efficiently handling large volumes of repetitive tasks without any complaints, and performing language-related work.
Where AI Falls Short
The areas where AI falls short are, not coincidentally, often where we want our kids to focus their energy—at least for now.
Carl Jacobi once famously advised: “Invert, always invert.”
From my experience and reflections, there remain several domains where current AI still struggles:
Interacting with the physical world: Despite impressive progress in AI algorithms and data processing, breakthroughs in robotics—which would allow AI to effectively operate in the physical world—are still at an early stage. Developing robots that can navigate unpredictable physical conditions, perform fine motor tasks, or safely interact with humans requires innovation far beyond current AI software capabilities. Thus, for now, AI’s awe-inspiring achievements are confined mostly to the virtual realm of ones and zeros—bits, rather than atoms.
Tasks where success is subjective: AI struggles considerably with domains rooted in subjective human experience, such as art, music, and other forms of creativity. Unlike problems with clearly defined criteria, creative fields rely heavily on cultural, emotional, and contextual factors that vary widely among people. While AI can generate artworks and music inspired by learned patterns, the deeply personal and often intangible qualities that make creative works meaningful to humans remain beyond its full grasp. Moreover, the value and success of creative endeavors are often judged by individual perception and evolving social trends, making it difficult for AI to consistently meet human standards of artistic expression. As such, these fields remain vibrant spaces where humans lead, innovate, and evoke emotion in ways AI cannot yet replicate.
Deep, human-centric interaction: Although AI has made strides in facilitating conversations and providing automated assistance, it still falls short in roles requiring profound human connection—like therapy, executive coaching, or even sports mentorship. These interactions depend on nuanced emotional intelligence, trust-building, and shared lived experiences that AI cannot authentically replicate. While AI agents can simulate empathy and offer scripted support, they lack true understanding or the dynamic capacity to respond creatively to complex individual needs. The unique psychological and social dimensions of these human-centric roles make them especially resistant to automation or replacement. People naturally gravitate toward fellow humans for empathy, motivation, and nuanced guidance, preserving these interactions as distinctly human domains.
Frontier intellect: Although AI can process and analyze vast datasets, it currently does not possess the autonomous intellectual capacity to discover fundamentally new scientific paradigms or solve the most challenging unsolved mathematical problems. Breakthroughs in physics—such as formulating novel laws or radically new models of the universe—require creative insight, conceptual leaps, and an intuitive grasp of nature that AI, at least so far, does not exhibit. Similarly, the highest forms of mathematical innovation—those that require intuition beyond logical computation—remain out of AI’s reach. While AI can assist researchers by mining data and testing hypotheses, the spark of original discovery and deep theoretical innovation remains a frontier dominated by human intellect, creativity, and curiosity so far.
More on this topic can be found in Lex Fridman’s three-hour interview with Terence Tao conducted last month, where they discuss some of the hardest problems in mathematics, physics, and the future of AI.
And let’s not forget one of the quirkiest behaviors—hallucination.
This one deserves an extra word. Many who’ve worked seriously with AI know this all too well: the models can sometimes confidently make things up out of thin air. If you haven’t yet experienced this, I’d say it’s just a matter of time (so be on the lookout). For knowledge workers in fields where factuality is paramount—like investing—this is reason enough for caution.
Why does this happen? It stems from a mix of limited training data, architectural quirks (like the way Transformers “think”), and the absence of built-in fact-checking. Solving these problems will take time, but as users, we can already change how we use AI to minimize the risk.
To put it simply, the relationship between user and AI is a bit like a duel in a Generative Adversarial Network (GAN): you have a Generator (AI) creating plausible but sometimes false information (the “counterfeiter”), and a Discriminator (us, the vigilant “inspectors”) constantly sharpening our ability to catch mistakes.
For me, building this judgment muscle is a must.
Personally, I turn to AI most in areas where my own expertise lets me recognize red flags instantly. When I tread into new territory, I spend time learning and building my baseline knowledge, so I can spot if something seems 'off' before relying heavily on AI, or I double-check with trusted experts.
It takes longer, sure, but if quality matters, it’s worth the effort.
On the other hand, if you’ve got a solid knowledge base and want AI to only answer using that, Retrieval-Augmented Generation (RAG) is my go-to. RAG combines information retrieval and generative capability so that AI can hook into a knowledge base (like your personal encyclopedia) and provide answers only from trusted data—drastically reducing hallucinations.
Google’s NotebookLM is a simple way to do this: feed it your documents (e.g., company filings), and let the AI respond strictly within those boundaries.
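Under the hood, RAG boils down to two steps: retrieve the passages most relevant to the question, then instruct the model to answer only from them. Here’s a minimal Python sketch of that loop—word-overlap scoring stands in for real embeddings, and all the names (`retrieve`, `build_prompt`, the sample filings) are purely illustrative, not any particular library’s API:

```python
def score(query: str, doc: str) -> int:
    """Count how many words the query and document share (toy relevance score)."""
    return len(set(query.lower().split()) & set(doc.lower().split()))

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most relevant to the query."""
    return sorted(docs, key=lambda d: score(query, d), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Constrain the model to the retrieved context only."""
    context = "\n".join(retrieve(query, docs))
    return (
        "Answer ONLY from the context below. "
        "If the answer is not there, say you don't know.\n\n"
        f"Context:\n{context}\n\nQuestion: {query}"
    )

# Toy "knowledge base" of filing snippets (made up for illustration).
docs = [
    "Revenue in FY2024 was $12.4B, up 8% year over year.",
    "The board approved a $2B share buyback in March.",
]
prompt = build_prompt("What was FY2024 revenue?", docs)
# `prompt` now goes to whatever LLM you use; the instruction line is what
# nudges the model to refuse rather than hallucinate.
```

A production setup would swap the overlap score for embedding similarity over a vector store, but the shape of the pipeline—retrieve, then constrain—is the same.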
How to Work with AI
Now that we’ve explored the wins and weak spots, let’s talk about how I actually “team up” with AI.
In my mind, I imagine a diverse “AI coworker team,” where each member brings distinct talents—one’s a genius at logic, another is a coding wizard, and yet another is an illustrator (check out my illustrator’s Ghibli-style work below).
The trick here is weaving their strengths into my workflow at exactly the right times.
Designing your workflow with AI assistance is an ongoing adventure—one full of experimentation and highly personalized discovery. There’s no perfect, one-size-fits-all formula.
As you experiment, keep the following in mind to help make the partnership smoother:
AI loves clear instructions. The more precise you are about what you want, the better the results will be. (Of course, sometimes you can use AI as a creative “thinking partner,” which can be wonderfully productive depending on the field and the task.)
AI brings the most value when results are measurable and clear. If you’re uncertain about task quality, it pays to sharpen your own judgment first.
A Final Thought
A year from now, much of what I share today will likely feel old-fashioned—and that’s something I genuinely look forward to.
For now, I hope this post sparks some ideas, and I’d love to hear how AI is reshaping your work too.
As always, thanks for reading, and I hope you enjoyed it.