Skills: The Best of Both Worlds

There's a fundamental tension in how we build things with AI.

On one side, you've got traditional code. If this, then that. Deterministic. Predictable. It does exactly what you tell it, every single time. Set up an automation that says "when a new lead comes in, send this email" and it will dutifully send that exact email until the heat death of the universe.

On the other side, you've got large language models. They reason. They adapt. They understand context. Ask them to "write a response to this customer complaint" and they'll craft something appropriate - but it might be wildly different each time, depending on mood, context, and which way the probabilistic wind is blowing.

Both approaches have obvious limitations.

Pure code can't handle ambiguity. It breaks when reality doesn't match its predetermined paths. It can't exercise judgment.

Pure AI can be inconsistent. It forgets your preferences between sessions. It might decide to emoji-bomb your LinkedIn post today even though you've told it a hundred times you hate emojis.

Skills solve this.

What Skills Actually Are

A skill is a markdown file with instructions that your AI agent loads when needed. That's it. Dead simple.

But here's why that simple idea is powerful: skills give you the reliability of code with the adaptability of AI.

Inside the brain, you might have a skill for how you write newsletter content - your tone, the structure you prefer, what to avoid. When you ask it to write something, it doesn't produce generic copy. It writes copy that sounds like you, that follows your principles, that avoids what you hate.

That skill fires the same way every time (deterministic). But the output adapts to whatever you're actually asking for (reasoning).

You could have research skills with multiple levels - ask for "deep research" and it spawns agents across different providers. Each run follows the same process (deterministic), but the agents reason about the actual content and synthesize findings intelligently (adaptive).

The Reusable Workflow Frame

Think of skills as "reusable workflows you use over and over again."

You've probably got these patterns already, even if you haven't formalized them:

  • How you like emails drafted
  • The questions you ask when analyzing a competitor
  • The format you want for meeting notes
  • The structure of your weekly reports

Every time you sit down to do one of these tasks, you're essentially running an algorithm in your head. You follow a similar process. You apply similar judgment. You produce a similar output format.

A skill just writes that down so your AI can do it the same way.

Why This Matters More Than You Think

Here's what most people miss: the scaffolding is more important than the model.

Claude Code exploded not because Opus 4.5 is dramatically better than Gemini or OpenAI's latest - they're all pretty close. It exploded because the scaffolding is better. The way it wraps around the model. The way it manages context. The way it allows you to customize behavior.

Skills are scaffolding.

When you invest time in building a skill, you're not just automating a task. You're teaching your AI how you think about a domain. You're encoding your judgment into a format it can apply consistently.

The brain is built on this foundation: markdown files, skills, and context files. It's portable - you could move it to any model provider tomorrow. The value isn't in the model. It's in the accumulated context and workflows you've built around it.

The Progressive Disclosure Trick

Skills use something called "frontmatter" - metadata at the top of the file that tells the agent when to use it:

```markdown
---
name: newsletter-writer
description: Use this skill when writing newsletter content
---
```

The agent reads these descriptions and loads skills proactively when they're relevant. You don't have to remember to invoke them. Say "write my newsletter intro" and it knows to pull in your newsletter-writing skill automatically.

This is progressive disclosure. The agent starts with its base knowledge, sees it has access to ten skills, and knows which one to grab for which task. Your preferences and processes are always there, but they're not cluttering up every interaction.
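The progressive-disclosure loop above can be sketched in code. As a minimal sketch, assuming skills live as markdown files in one directory (the function names here are hypothetical, not any particular agent's API), the agent reads only the cheap frontmatter metadata up front and defers loading full skill bodies until one is actually needed:

```python
import re
from pathlib import Path

def parse_frontmatter(text):
    """Extract the key: value pairs from a frontmatter block at the top of a skill file."""
    match = re.match(r"^---\n(.*?)\n---\n?", text, re.DOTALL)
    if not match:
        return {}
    meta = {}
    for line in match.group(1).splitlines():
        if ":" in line:
            key, _, value = line.partition(":")
            meta[key.strip()] = value.strip()
    return meta

def load_skill_index(skills_dir):
    """Read only name + description from each skill file.

    This small index is what stays in the agent's context; the full
    instructions load later, only when a description matches the task.
    """
    index = {}
    for path in Path(skills_dir).glob("*.md"):
        meta = parse_frontmatter(path.read_text())
        if "name" in meta and "description" in meta:
            index[meta["name"]] = {"description": meta["description"], "path": path}
    return index
```

The design point is the split: the index is small enough to always be present, so the agent can notice "this task matches the newsletter-writer description" without the bodies of ten skills cluttering every interaction.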

Getting Started

You don't need a complex system to start.

Pick one thing you do repeatedly. Write down:

  1. What inputs you need
  2. What process you follow
  3. What output format you want
  4. What preferences and constraints apply

That's a skill.

Put it in a markdown file. Tell your agent where to find it. Now that workflow is encoded - the deterministic structure you follow, with AI reasoning filling in the gaps.
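As a sketch, a skill file capturing those four pieces might look like this (the skill name, sections, and details are hypothetical, just one way to lay it out):

```markdown
---
name: meeting-notes
description: Use this skill when summarizing or formatting meeting notes
---

## Inputs
- Raw transcript or bullet notes
- Attendee list

## Process
1. Pull out decisions first, then action items, then open questions
2. Attribute each action item to an owner and a due date

## Output format
- Sections in this order: Decisions, Actions, Open questions
- Plain prose, no emojis

## Preferences and constraints
- Keep it under 300 words
- Flag anything uncertain for my review rather than guessing
```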

The pizza shop automation that emails every customer the same birthday message is code. The customer service rep who adapts their response to each situation is human judgment. A skill that follows your customer communication framework but reasons about each individual case is something new.

It's why this approach feels like having a team that actually understands how you work. Not because the AI is magical - but because you've taught it your patterns, and now it applies them consistently while still thinking.


The agents that replace human knowledge workers won't win on raw intelligence alone. They'll win on scaffolding - on how well they encode the patterns, preferences, and workflows of the people they're helping.

Skills are how you build that scaffolding.

Start with one. See how it feels to have your process run the same way every time, but with genuine reasoning applied to each case.

That's the best of both worlds.
