It's like "fish"
Here's the thing: "Artificial Intelligence" is about as specific as "fish."
Think about it. When someone says "fish," they could mean a goldfish, a shark, a salmon, or a manta ray. They're all "fish," but they're wildly different creatures with completely different behaviors, habitats, and—let's be honest—threat levels.
Here's the kicker: you are more closely related to a salmon than a salmon is to a shark. That's not a typo. Bony fish (like salmon) and cartilaginous fish (like sharks) split apart so long ago that we mammals—who evolved from the bony fish branch—are actually closer cousins to your dinner than your dinner is to a great white.
"Fish" isn't even a real scientific category. It's a folk taxonomy—a word we made up because "things that swim and have fins" seemed like a useful grouping.
"AI" is exactly like this. The term has become so overloaded that it's basically meaningless without context. When someone says they "use AI" or "build with AI," you have to ask: which kind?
What most people actually mean
When your neighbor, your CEO, or that guy at the coffee shop talks about "AI," they almost always mean one specific thing:
Large Language Models (LLMs)
ChatGPT. Claude. Gemini. The chatbots you type questions into and get surprisingly coherent answers back.
These are "generative" models—they generate text (or images, or code) based on patterns they learned from massive amounts of training data. They're impressive, genuinely useful, and also just one species in a much larger ecosystem.
It's like if everyone suddenly started calling all ocean creatures "dolphins." Sure, dolphins are cool. But you're missing out on the coral, the jellyfish, the entire kelp forest situation.
When AI takes action
Here's where things get interesting. LLMs by themselves just generate text. But what if they could do things?
Tool calling (or "function calling") is when you give an AI model access to actual tools—APIs, databases, calculators, web browsers, code executors. Instead of just telling you how to check the weather, it can actually check the weather.
Without tools:
"To check the weather, you could visit weather.com or use the weather app on your phone..."
With tools:
"It's currently 72°F and sunny in Boise. Rain expected Thursday."
This is where "AI assistants" become genuinely useful—not just answering questions, but actually helping you accomplish things. Booking appointments. Analyzing spreadsheets. Writing and running code.
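If you've never seen what this looks like in practice, the mechanics are less mysterious than they sound. Below is a rough sketch in Python using an OpenAI-style chat API. The model name, the `get_weather` function, and the hardcoded forecast are all placeholders; this shows the shape of the loop, not a production integration.

```python
# A minimal sketch of tool calling with an OpenAI-style chat API.
# The model name, tool, and weather data are placeholders.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def get_weather(city: str) -> str:
    # Stand-in for a real weather API call.
    return json.dumps({"city": city, "temp_f": 72, "conditions": "sunny"})

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

messages = [{"role": "user", "content": "What's the weather in Boise?"}]
response = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)

# If the model decided to call the tool, run it and send the result back.
call = response.choices[0].message.tool_calls[0]
args = json.loads(call.function.arguments)
messages.append(response.choices[0].message)
messages.append({"role": "tool", "tool_call_id": call.id, "content": get_weather(**args)})

final = client.chat.completions.create(model="gpt-4o", messages=messages, tools=tools)
print(final.choices[0].message.content)  # e.g. "It's currently 72°F and sunny in Boise."
```

The pattern is always the same: the model proposes a call, your code actually runs it, and the result goes back into the conversation so the model can answer with real data.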
The memory problem (and how we're solving it)
LLMs have a fundamental limitation: they only know what they were trained on. They don't know about your company's internal docs, your product catalog, or that email thread from last Tuesday.
Enter a bunch of techniques with alphabet-soup names:
RAG — Retrieval-Augmented Generation
Instead of relying only on what the model was trained on, RAG fetches relevant documents from your own data and includes them in the prompt. It's like giving the model a cheat sheet before it answers.
Embeddings — Semantic fingerprints
A way to convert text into numbers that capture meaning. "Happy dog" and "joyful puppy" would have similar embeddings, even though the words are different. This is how RAG knows which documents are relevant.
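To make that concrete, here's a tiny sketch using the open-source sentence-transformers library. The model name is just one common choice, and the example phrases are arbitrary; any embedding model works the same way.

```python
# Embeddings in a few lines: text in, vectors of floats out.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

sentences = ["happy dog", "joyful puppy", "quarterly tax filing"]
embeddings = model.encode(sentences)  # each sentence becomes a vector

# Cosine similarity: closer to 1.0 means closer in meaning.
print(util.cos_sim(embeddings[0], embeddings[1]))  # "happy dog" vs "joyful puppy": high
print(util.cos_sim(embeddings[0], embeddings[2]))  # "happy dog" vs "quarterly tax filing": low
```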
Vector databases — Finding similar things, fast
Special databases optimized for storing embeddings and finding "similar" items quickly. Ask a question, find the documents most semantically related to it, feed those to the LLM.
This is how you build AI systems that know about your stuff—without retraining the entire model. It's not magic. It's clever plumbing.
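Here's roughly what that plumbing looks like, stripped to the bone: a plain Python list standing in for the vector database, and the assembled prompt printed instead of sent to a model. Everything here is illustrative, not a production pattern.

```python
# Deliberately simplified RAG: in-memory "vector store" plus prompt assembly.
# In production you'd use a real vector database and a real LLM client.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")

docs = [
    "Refunds are processed within 5 business days.",
    "Our office is closed on federal holidays.",
    "Premium support is available 24/7 for enterprise plans.",
]
doc_vectors = model.encode(docs, normalize_embeddings=True)

def retrieve(question: str, k: int = 2) -> list[str]:
    """Find the k documents most semantically similar to the question."""
    q = model.encode([question], normalize_embeddings=True)[0]
    scores = doc_vectors @ q          # cosine similarity (vectors are normalized)
    best = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in best]

def build_prompt(question: str) -> str:
    """The 'cheat sheet': retrieved context pasted above the question."""
    context = "\n".join(retrieve(question))
    return f"Answer using only this context:\n{context}\n\nQuestion: {question}"

print(build_prompt("How long do refunds take?"))
# The resulting prompt goes to whatever LLM you're using.
```

Swap the list for a vector database and the print for an API call, and you have the skeleton of most "chat with your docs" products on the market.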
So why does this matter?
Because when someone says "we need AI," they might mean:
- A chatbot that answers customer questions
- A system that searches their documentation semantically
- An assistant that can actually book appointments and send emails
- A recommendation engine that suggests products
- Something that detects fraud in real time
- All of the above, somehow working together
These are completely different technical challenges with different architectures, different costs, and different trade-offs. Lumping them all under "AI" is like saying "we need a vehicle" and expecting a skateboard and a cargo ship to be interchangeable.
The real question isn't "do you do AI?"
It's: what problem are you actually trying to solve?
Once we know that, we can figure out which tools—AI or otherwise—actually fit.
Same game, next level
Here's the thing: AI isn't new. It's code. That's it.
Code that, if you did it right and you're lucky, will probably give you most of the answer you want. There's just a lot more of it now.
People love saying AI is just "fancy autocomplete." And they're not wrong. But you could say any computer is just a watch and a calculator—transistors, simple logic gates. With that foundation, we built everything you see around you.
What we're experiencing now isn't a new game. It's an achievement unlocked in the same game we've been playing since the eighties. Same rules. Same principles. The token count went up, the processing power expanded, but the fundamentals haven't changed.
Most people see AI as something completely different. I see it as progression. And that difference in perspective? That's everything.
Discernment vs. judgment
I've always been happy working with computers because I'm the human with the discernment. I can tell whether what's coming out is good or bad. I can see how to make it better.
Notice I said discernment, not judgment.
Judgment
Quick. A verdict. You pass judgment, render judgment. It's got finality to it, and it can be rash.
Discernment
Earned. Decades of tinkering, failing, solving problems. Pattern recognition at scale. You can actually see what matters—not just react to it.
Someone without decades in technology can still have strong opinions about AI. They can make snap judgments about what it will or won't do. But that's not discernment. That's just noise.
The real shift isn't in what the technology can do—it's awareness and accessibility. Suddenly millions of people who've never thought about how to work with code have a powerful tool in their hands. No barrier to entry anymore. And that's both exciting and kind of the problem: awareness without discernment.
AI generates. Humans discern.
That's the relationship. The tool isn't going to tell you if its output is good. It's not going to tell you where it fits into a real workflow. It's not going to tell you what actually matters.
That's your job. That's always been your job.