The Moment Everything Changed

I’ll never forget the first time an enemy in a video game genuinely outsmarted me.

It was 2005. I was playing F.E.A.R., crouched behind a concrete barrier, thinking I was being clever. I’d just taken down two soldiers, and I was waiting for the third to walk into my trap like every other game enemy I’d ever fought.

He didn’t.

Instead, he went silent. Thirty seconds passed. Then I heard boots—behind me. He’d flanked around, used my own tactic against me, and I was dead before I could react.

That wasn’t scripted. That was AI making a decision.

That’s when I realized: game characters were getting smart. And as someone who’d go on to spend the next fifteen years building AI systems for indie and mid-size game studios, I became obsessed with one question: How do we make pixels think?

In this article, I’m going to show you exactly how we got from Pac-Man ghosts following simple patterns to NPCs that learn, adapt, and create those unforgettable “Did that just happen?” moments. No computer science degree needed—just curiosity.

Chapter 1: The Light Switch with Personality (Where It All Began)


Let’s start with the simplest AI you’ve ever fought: a video game guard.

Picture this: You’re sneaking through a castle. There’s a guard walking back and forth. He hasn’t seen you yet. What’s happening in his pixelated brain?

He’s running something called a Finite State Machine (FSM).

Think of an FSM like a flowchart your brain follows when you’re driving:

  • Green light? → Go
  • Yellow light? → Decide (speed up or slow down)
  • Red light? → Stop

Game AI works the same way. Our castle guard has just a few “states”:

  1. PATROL → Walking his route, humming to himself
     ↓ (Hears noise)
  2. INVESTIGATE → “What was that?” Moves toward sound
     ↓ (Sees player)
  3. ATTACK → “Intruder!” Charges at you
     ↓ (Player hides, guard can’t find you)
  4. SEARCH → Looking around suspiciously
     ↓ (Gives up after 30 seconds)
  5. PATROL → Back to the boring route.

That’s it. Four states and a handful of transition rules. But here’s the magic: it works.
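If you want to see just how small that really is, here’s a toy version in Python. The sensor inputs (hears_noise, sees_player, search_timer) are made up for illustration; in a real engine they’d come from your perception and timer systems:

```python
from enum import Enum, auto

class State(Enum):
    PATROL = auto()
    INVESTIGATE = auto()
    ATTACK = auto()
    SEARCH = auto()

class Guard:
    """Minimal FSM: each update, the current state checks its
    transition conditions and may hand control to the next state."""
    def __init__(self):
        self.state = State.PATROL

    def update(self, hears_noise, sees_player, search_timer):
        if self.state == State.PATROL and hears_noise:
            self.state = State.INVESTIGATE
        elif self.state == State.INVESTIGATE and sees_player:
            self.state = State.ATTACK
        elif self.state == State.ATTACK and not sees_player:
            self.state = State.SEARCH        # lost sight of the player
        elif self.state == State.SEARCH:
            if sees_player:
                self.state = State.ATTACK    # found them again
            elif search_timer <= 0:
                self.state = State.PATROL    # gave up
        return self.state

guard = Guard()
guard.update(hears_noise=True, sees_player=False, search_timer=30)   # -> INVESTIGATE
guard.update(hears_noise=False, sees_player=True, search_timer=30)   # -> ATTACK
```

The whole guard is one enum and one method. That tiny footprint is exactly why FSMs still power mobile games.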

The Ghosts That Started It All

The first brilliant example? Pac-Man’s ghosts from 1980.

Those four colorful ghosts feel like they have personalities, right? Blinky seems aggressive, Pinky tries to cut you off, Inky is unpredictable, and Clyde is… well, Clyde does his own thing.

Each ghost is just running an FSM with different rules:

  • Blinky (red): Always chases Pac-Man directly
  • Pinky (pink): Aims for where Pac-Man is going
  • Inky (cyan): Uses Blinky’s position + Pac-Man’s position in his calculation
  • Clyde (orange): Chases Pac-Man until close, then retreats

Four simple state machines. Four distinct personalities. Players still talk about these ghosts 45 years later.
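Those targeting rules are compact enough to sketch in a few lines of Python. The grid coordinates and corner position below are simplified stand-ins, but the Pinky and Clyde logic follows the commonly documented arcade behavior:

```python
import math

def pinky_target(pacman_pos, facing, tiles_ahead=4):
    """Pinky aims a few tiles ahead of where Pac-Man is facing."""
    x, y = pacman_pos
    dx, dy = facing
    return (x + dx * tiles_ahead, y + dy * tiles_ahead)

def clyde_target(ghost_pos, pacman_pos, home_corner=(0, 34)):
    """Clyde chases Pac-Man directly when more than 8 tiles away,
    then breaks off toward his home corner once he gets close."""
    if math.dist(ghost_pos, pacman_pos) > 8:
        return pacman_pos
    return home_corner

pinky_target((10, 10), facing=(0, -1))   # -> (10, 6): cuts you off
clyde_target((5, 5), (6, 6))             # -> (0, 34): too close, retreats
```

Two tiny functions, two personalities players still argue about. That ratio of code to perceived intelligence is the whole lesson of early game AI.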

Why FSMs Are Like Training Wheels (That We Never Fully Remove)

When I built my first tower defense game, I used FSMs for everything. Enemy spots a tower? Attack state. Takes too much damage? Flee state. Path blocked? Find new route state.

It worked… until I had fifteen enemy types, each with eight states, and every time I wanted to add a new behavior, I’d break three other things.

FSMs are amazing for:

  • Simple, predictable behaviors
  • Teaching yourself AI programming
  • Mobile games (they’re super efficient)

FSMs are terrible for:

  • Complex, reactive behaviors
  • Enemies that need to make decisions based on multiple factors
  • Keeping your sanity when you hit 50+ enemy types

But here’s the thing: even today’s massive AAA games use FSMs as building blocks. They’re the foundation everything else is built on. Master FSMs, and you understand game AI at its core.

Chapter 2: When Game AI Grew Up (Behavior Trees Take Over)


Around 2004-2005, game developers hit a wall.

Games were getting bigger. Open worlds were becoming standard. Players expected enemies to do more than just patrol and attack. We needed smarter NPCs, but our FSM spaghetti code was becoming unmaintainable.

Enter Behavior Trees.

Think of a behavior tree like a decision-making flowchart, but way more flexible. Instead of being stuck in one state at a time, enemies can now think through multiple options and choose the best one every single frame.

How Your Brain Already Uses Behavior Trees

Let’s say you’re hungry. Your brain doesn’t lock into “EATING MODE” and refuse to do anything else. Instead, it runs through a tree of decisions:

Am I hungry?

  • No → Continue what I’m doing
  • Yes → Next question

Do I have food at home?

  • Yes → Make food
  • No → Next question

Can I afford takeout?

  • Yes → Order food
  • No → Go grocery shopping

Is the grocery store open?

  • Yes → Go shopping
  • No → Raid the pantry for crackers

See how it flows? Each question leads to another question or an action. That’s a behavior tree.

The Magic Sauce: Composite Nodes

Behavior trees have two types of decision-makers that make them incredibly powerful:

SEQUENCES (Do all these things in order):

  • Hear noise → Turn toward noise → Draw weapon → Investigate

If ANY step fails, stop the whole sequence. This is why game enemies don’t keep investigating after you’ve killed them.

SELECTORS (Try these options until one works):

  • Can I see the player? → Shoot them
  • Do I know where they were? → Move there
  • Am I taking damage? → Find cover
  • Nothing else? → Go back to patrol

The first option that succeeds wins. This is why enemies feel reactive—they’re constantly trying different strategies.
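Here’s a stripped-down Python sketch of both node types. Real engines add running states, decorators, and event-driven re-evaluation; this toy version just shows the control flow, and the leaf behaviors and blackboard keys are invented:

```python
SUCCESS, FAILURE = "success", "failure"

def sequence(*children):
    """Run children in order; the first failure aborts the whole thing."""
    def tick(bb):
        for child in children:
            if child(bb) == FAILURE:
                return FAILURE
        return SUCCESS
    return tick

def selector(*children):
    """Try children in order; the first success wins."""
    def tick(bb):
        for child in children:
            if child(bb) == SUCCESS:
                return SUCCESS
        return FAILURE
    return tick

# Leaves read and write a shared "blackboard" dict.
def can_see_player(bb):
    return SUCCESS if bb.get("player_visible") else FAILURE

def shoot(bb):
    bb.setdefault("log", []).append("shoot")
    return SUCCESS

def patrol(bb):
    bb.setdefault("log", []).append("patrol")
    return SUCCESS

root = selector(
    sequence(can_see_player, shoot),   # attack if we can see them...
    patrol,                            # ...otherwise fall back to patrol
)

bb = {"player_visible": False}
root(bb)                     # player hidden: patrols
bb["player_visible"] = True
root(bb)                     # player spotted: shoots
```

Notice that the tree re-decides from the root on every tick. That constant re-evaluation is where the “reactive” feel comes from.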

Real Games, Real Intelligence

Halo’s Covenant Enemies

When you fight Elites in Halo, they don’t just charge at you like zombies. They:

  • Communicate with each other (you can hear them)
  • Cover each other while flanking
  • Retreat when their shields break
  • Coordinate grenade throws

All of this is behavior trees. The Elite’s AI is constantly asking:

  • Is my shield down? → Yes → Find cover
  • Is the player exposed? → Yes → Signal allies and attack
  • Are allies attacking? → Yes → Flank from the other side

Bungie’s AI programmers created trees so sophisticated that Elites feel like they’re having a bad day when you’re winning. They panic. They run. They make desperate last stands.

The Last of Us: Clickers

Clickers can’t see. They navigate by sound. This seems simple, but the behavior tree has to juggle:

  • Is there noise? → Move toward it
  • Did I bump into something? → Is it alive? → Attack
  • Haven’t sensed anything? → Wander randomly (but stay near walls)

The genius? The behavior tree makes them unpredictable. You never know exactly what a Clicker will do, even though it’s following simple rules. That’s terrifying game design.

When I Learned Behavior Trees Changed Everything

My second game project was an open-world zombie survival game. We had 50+ enemy types—fast zombies, tank zombies, spitter zombies, zombies with armor, zombies that exploded when killed.

With FSMs, each enemy needed its own spaghetti code. With behavior trees, I built a library of behaviors:

  • “Take cover”
  • “Flank player”
  • “Protect allies”
  • “Flee when hurt”

Then I mixed and matched them. Fast zombies got “aggressive pursuit” + “no self-preservation.” Tank zombies got “slow movement” + “guard other zombies.” Spitters got “ranged attack” + “maintain distance.”

One system, infinite combinations. Behavior trees didn’t just make enemies smarter—they saved me hundreds of hours of programming.

The Industry Standard

Today, if you open Unity or Unreal Engine, behavior trees are built right in. Every game studio I’ve worked with or consulted for uses them. They’re not the newest tech anymore—they’re the foundation of modern game AI.

But they’re not perfect. And that’s where things get really interesting.

Chapter 3: When Good Enough Isn’t Good Enough (Utility AI & GOAP)


Here’s a problem I ran into that behavior trees couldn’t solve:

I was making a stealth game. The enemy guard had perfect behavior trees—he’d patrol, investigate sounds, and attack when he saw the player. But players noticed something:

He was too predictable.

Throw a rock left, he’d go left. Every time. Hide in the same closet, he’d check it. Every time. After a few deaths, players had memorized his entire decision tree.

The AI wasn’t dumb. It was robotic. And that’s the problem with all-or-nothing decision logic: the AI picks the single “best” option and ignores everything else, even if the second-best option is almost as good.

Enter Utility AI, the system that makes NPCs feel like they have opinions.

How Utility AI Works (The Sims Approach)

Instead of making one decision, Utility AI scores every possible action and picks the highest score.

Let’s say you’re an NPC guard in a game:

Current situation:

  • Hunger: 70/100 (getting hungry)
  • Alert: 30/100 (heard something earlier)
  • Energy: 50/100 (a bit tired)

Action scores:

  • Investigate noise: 30 points (alert level)
  • Eat lunch: 70 points (hunger level)
  • Take a nap: 50 points (energy level)
  • Continue patrol: 20 points (default behavior)

The guard eats lunch. Makes sense, right?

But here’s where it gets interesting. Five minutes later:

New situation:

  • Hunger: 30/100 (just ate)
  • Alert: 80/100 (just saw the player!)
  • Energy: 45/100 (still tired)

Action scores:

  • Investigate noise: 80 points
  • Eat lunch: 30 points
  • Take a nap: 45 points
  • Continue patrol: 20 points

Now the guard investigates. Same AI, different priorities, and it feels natural.
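The scoring loop itself is almost embarrassingly small. Here’s a Python sketch of the guard above; the need names and the patrol baseline of 20 are invented for the example:

```python
def pick_action(needs):
    """Score every candidate action against the guard's current
    needs and take the highest; the numbers ARE the decision."""
    scores = {
        "investigate": needs["alert"],
        "eat":         needs["hunger"],
        "nap":         needs["tiredness"],
        "patrol":      20,   # constant baseline so he never stands idle
    }
    return max(scores, key=scores.get)

pick_action({"alert": 30, "hunger": 70, "tiredness": 50})   # -> "eat"
pick_action({"alert": 80, "hunger": 30, "tiredness": 45})   # -> "investigate"
```

No branches to maintain, no states to wire together. Change the inputs and the behavior changes with them.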

The Magic of Weighted Decisions

The real power? You can add personality through weights.

Lazy Guard:

  • Sleep value × 2 (really values rest)
  • Investigation value × 0.5 (not very motivated)

Paranoid Guard:

  • Investigation value × 3 (hyper-alert)
  • Sleep value × 0.5 (doesn’t trust being vulnerable)

Same utility AI system. Completely different behaviors. One guard naps at his post and barely responds to sounds. The other one investigates every tiny noise and never rests.

Players can’t memorize this. They can only learn tendencies.
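In code, personality is one extra dictionary. This Python sketch multiplies the same kind of base scores by per-guard weights; all the numbers are illustrative:

```python
def pick_weighted(needs, weights):
    """Utility scoring with personality: multiply each action's
    base score by this guard's weight before comparing."""
    base = {
        "investigate": needs["alert"],
        "nap":         needs["tiredness"],
        "patrol":      20,
    }
    scored = {a: s * weights.get(a, 1.0) for a, s in base.items()}
    return max(scored, key=scored.get)

needs = {"alert": 40, "tiredness": 40}
pick_weighted(needs, {"nap": 2.0, "investigate": 0.5})   # lazy guard -> "nap"
pick_weighted(needs, {"investigate": 3.0, "nap": 0.5})   # paranoid -> "investigate"
```

Same needs, same scoring function, opposite guards. Shipping ten personalities means shipping ten small dictionaries, not ten AI systems.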

Real Games That Nailed It

The Sims: Why Your Sim Won’t Stop Playing Video Games

Every Sim is running utility AI constantly. They’re scoring:

  • Bladder need
  • Hunger need
  • Social need
  • Fun need
  • Hygiene need
  • Energy need

That Sim playing video games at 3 AM while desperately needing to pee? The fun score is outweighing the bladder score. It’s not a bug—it’s utility AI working exactly as designed. (Okay, maybe the weights need adjusting.)

Far Cry’s Wildlife: The Food Chain

Predators hunt prey. Prey flee predators. But sometimes a deer fights back. Sometimes a tiger ignores you because it’s already eaten.

All utility AI. The tiger is scoring:

  • Hunger level
  • Threat level of target (you with a shotgun vs. a deer)
  • Distance to target
  • Current health

A hungry tiger with full health sees you as food. A wounded tiger that just ate sees you as a threat to avoid. Same AI, different circumstances, organic behavior.

GOAP: When NPCs Think Backwards

Goal-Oriented Action Planning is utility AI’s nerdy cousin. Instead of scoring actions, the AI defines a goal and works backward to figure out how to achieve it.

Goal: Kill the player

Working backwards:

  • To kill player, I need: Line of sight + weapon loaded
  • To get line of sight, I need: Clear path or high ground
  • To get high ground, I need: Climb that ladder
  • To climb ladder, I need: Get to the ladder’s base

The AI chains actions together dynamically. If you block the ladder, it finds a different route. If you destroy its cover, it finds new cover. It’s not following a script—it’s solving problems.
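A production GOAP planner searches the action space with something like A*, but the backward-chaining idea fits in a toy Python function. The action names, preconditions, and effects below are invented to mirror the ladder example:

```python
def plan(goal, state, actions):
    """Tiny backward chainer: find an action whose effect satisfies
    the goal, then recursively plan to meet its preconditions."""
    if goal in state:
        return []                      # already true, nothing to do
    for name, (preconds, effect) in actions.items():
        if effect != goal:
            continue
        steps = []
        for pre in preconds:
            sub = plan(pre, state, actions)
            if sub is None:
                break                  # this action's preconds unreachable
            steps += sub
        else:
            return steps + [name]
    return None                        # no action chain reaches the goal

# Each action: ([preconditions], effect)
actions = {
    "shoot":          (["line_of_sight", "weapon_loaded"], "player_dead"),
    "climb_ladder":   (["at_ladder"], "line_of_sight"),
    "walk_to_ladder": ([], "at_ladder"),
    "reload":         ([], "weapon_loaded"),
}

plan("player_dead", set(), actions)
# -> ['walk_to_ladder', 'climb_ladder', 'reload', 'shoot']
```

Delete climb_ladder and the plan comes back as None; add a second route to high ground and the planner chains that instead. Nobody scripted the ladder sequence; it fell out of the goal.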

F.E.A.R.’s Soldiers: The AI That Traumatized Me

Remember my story from the intro? That flanking soldier was using GOAP.

His goal: Kill the player (me)
His assessment: Player is behind cover, waiting
His plan: Suppress player with covering fire while I relocate behind them

Nobody scripted that specific strategy. The AI assembled it on the fly by combining basic actions:

  • Move to position
  • Provide covering fire
  • Communicate with allies
  • Use available cover

F.E.A.R. came out in 2005, and its AI is still better than most shooters today. That’s the power of GOAP.

The Trade-Off Nobody Talks About

Utility AI and GOAP create amazing, unpredictable behaviors. They also:

  • Require way more processing power than behavior trees
  • Are harder to debug (“Why did the guard do that?!”)
  • Can create emergent bugs you never anticipated

I once had a utility AI guard who kept jumping off cliffs. Turns out, his “get to high ground” action was scoring so high that he’d pathfind to any elevated position—including ones he couldn’t survive reaching.

Most studios use a hybrid: behavior trees for structure, utility AI for interesting decisions within that structure.

Chapter 4: The Future Is Here (Sort Of): Machine Learning & Neural Networks


Let’s talk about the elephant in the room: Can we use actual AI—like ChatGPT-level AI—to control game enemies?

Short answer: Yes, but not the way you think.

Long answer: Let me show you what’s actually happening in game studios right now.

What Hollywood Gets Wrong

In movies, evil AI takes over robots and becomes unstoppable. In real game development, we train an AI to play our platformer game and it discovers it can get infinite points by pausing at the edge of a cliff because the pause menu gives 1 point per frame.

Machine learning in games is less “Skynet” and more “toddler with a laser pointer.”

What We’re Actually Using ML For

1. Learning From Players (Forza’s Drivatars)

Forza Motorsport doesn’t just have generic AI racers. It trains “Drivatars” by watching how you drive:

  • Do you brake early or late into corners?
  • Do you take aggressive racing lines?
  • Do you bump other cars or drive clean?

Then it creates an AI version of you that races like you, even when you’re offline. Your friends race against your Drivatar. It’s not perfect, but it’s eerily close to your actual driving style.

This is supervised learning—the AI watches thousands of examples (your races) and learns to mimic patterns.

2. Difficulty That Adapts to You (Left 4 Dead’s AI Director)

Left 4 Dead has an “AI Director” that watches how you play:

  • Team doing too well? → Spawn a Tank
  • Team struggling? → Reduce zombie spawns, leave better weapons
  • Tension getting stale? → Create a crescendo moment

It’s not exactly neural networks (it’s closer to fancy utility AI), but modern versions are using ML to predict when players are getting bored or frustrated, then adjusting on the fly.

The goal: keep you in the “flow state” where the game feels challenging but fair.
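Even without ML, the core of a director is easy to sketch. Here’s a toy Python version in that spirit; the thresholds, numbers, and event names are completely invented:

```python
class Director:
    """Track a running 'intensity' estimate of player stress and
    pick pacing events to keep it in a target band."""
    def __init__(self):
        self.intensity = 0.0           # 0 = bored, 1 = overwhelmed

    def record_damage(self, amount):
        self.intensity = min(1.0, self.intensity + amount / 100)

    def decay(self, seconds):
        # Tension fades when nothing is happening.
        self.intensity = max(0.0, self.intensity - 0.05 * seconds)

    def next_event(self):
        if self.intensity > 0.8:
            return "relax"             # back off, let players breathe
        if self.intensity < 0.3:
            return "spawn_horde"       # players are coasting: add pressure
        return "trickle_spawns"        # steady background pressure

d = Director()
d.next_event()          # fresh start, low intensity -> "spawn_horde"
d.record_damage(90)
d.next_event()          # players just got mauled -> "relax"
```

The sophistication lives in how you estimate intensity, not in the decision rule. That estimation step is where ML is starting to earn its keep.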

3. Smarter Pathfinding and Animation

Here’s where ML really shines in games right now:

Traditional pathfinding: “Calculate shortest path from A to B”
ML pathfinding: “Calculate path from A to B that looks natural for this character’s personality, avoids being too predictable, and uses cover when appropriate”

Traditional animation: “Blend between these 50 animation clips”
ML animation: “Generate natural-looking movement for this exact situation, even if we never animated it”

Ubisoft and EA are both using neural networks to generate more realistic animations and movement. That’s why characters in recent games don’t have that “snapping between animations” feeling anymore.

The Big Experiment: Reinforcement Learning

This is where it gets wild.

Reinforcement Learning (RL) means you create a simulated environment, put an AI in it, and let it play millions of times, rewarding it when it does well.
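To make that concrete, here’s about the smallest RL experiment you can run: tabular Q-learning on a five-tile corridor where the agent is rewarded only for reaching the right end. Every constant is illustrative, and a real game’s state space is astronomically bigger:

```python
import random

random.seed(0)
# States 0..4 on a line; actions move left (-1) or right (+1).
# Reward 1.0 only for stepping onto the goal tile, state 4.
q = {(s, a): 0.0 for s in range(5) for a in (-1, 1)}
alpha, gamma, epsilon = 0.5, 0.9, 0.2

for _ in range(300):                       # 300 practice episodes
    s = 0
    while s != 4:
        if random.random() < epsilon:      # explore sometimes...
            a = random.choice((-1, 1))
        else:                              # ...otherwise exploit
            a = max((-1, 1), key=lambda x: q[(s, x)])
        s2 = min(4, max(0, s + a))         # move, clamped to the corridor
        r = 1.0 if s2 == 4 else 0.0
        best_next = 0.0 if s2 == 4 else max(q[(s2, -1)], q[(s2, 1)])
        q[(s, a)] += alpha * (r + gamma * best_next - q[(s, a)])
        s = s2

# After training, the greedy policy marches right toward the goal.
greedy = {s: max((-1, 1), key=lambda x: q[(s, x)]) for s in range(4)}
```

The agent flails for thousands of steps before its first accidental success, and only then does the value of that success propagate backward, one tile at a time. Scale that flailing up to a 3D shooter and you start to see why training takes days or weeks.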

DeepMind’s AlphaStar learned to play StarCraft II at grandmaster level by playing against itself for the equivalent of 200 years of game time. It developed strategies human players had never thought of.

I tried this with a simple 2D action game. I created an RL agent and let it play for 72 hours straight. Here’s what happened:

  1. Hour 1: The AI runs into walls
  2. Hour 10: The AI learns to avoid obstacles
  3. Hour 30: The AI learns to collect power-ups
  4. Hour 50: The AI discovers it can jump on enemy heads to kill them
  5. Hour 72: The AI discovers a bug in my collision code and exploits it to clip through walls.

Reinforcement learning finds optimal strategies, not fun strategies. My AI became unbeatable but in the most boring way possible—by exploiting glitches.

Why We’re Not Seeing ML Enemies Everywhere

Here’s the honest truth from someone who’s tried:

1. Training Takes Forever

Training an RL agent can take days or weeks. Every time you change your game (new weapon, new level layout, new mechanic), you have to retrain. In active game development, you’re changing things constantly.

2. Black Box Problem

Traditional AI: “The guard attacks because his behavior tree says ‘if player visible, attack.’”
ML AI: “The guard attacks because… uh… neuron 4,782 activated? Look, math happened.”

When your AI does something broken, you need to fix it. ML makes that incredibly hard.

3. Unpredictability Isn’t Always Good

I trained an AI to fight players in a dungeon crawler. Sometimes it would:

  • Play perfectly and be impossible to beat
  • Stand in a corner doing nothing
  • Run in circles
  • Develop a “strategy” of just jumping constantly

Players don’t want chaotic randomness. They want learnable challenge.

4. Performance Cost

Running neural networks in real-time while rendering graphics, playing audio, and simulating physics? That’s a lot to ask of a console or PC.

What’s Actually Coming

The realistic near-future of ML in games:

Smarter NPCs that remember you: Imagine an RPG where NPCs actually remember your past actions across playthroughs. “Didn’t you steal from me in your last save file?”

Dynamic dialogue generation: NPCs that can have natural conversations about anything, not just pre-written lines. This is already starting with some experimental indie games.

Procedural personalities: Instead of hand-crafting 50 different NPC behaviors, train an ML system to generate unique personality profiles automatically.

Better testing: ML agents that play your game millions of times before release, finding bugs and balance issues human testers would take months to discover.

The revolution won’t be “ML replaces all game AI.” It’ll be “ML handles the tedious parts so human designers can focus on the fun parts.”

Chapter 5: So… Which AI Should You Actually Use?


After fifteen years building game AI, here’s the advice I wish someone had given me on day one:

The best AI is the one that serves your game design, not the one that sounds coolest.

Let me break it down by what you’re actually making:

If You’re Making a Mobile Puzzle Game

Use: Finite State Machines

Why: They’re fast, battery-efficient, and your enemies probably don’t need to be that smart anyway. Angry Birds doesn’t need neural networks.

Example: Match-3 puzzle game AI just needs states like “calculate next move,” “execute move,” “check for combos,” “wait for player turn.” Simple. Effective.

If You’re Making a 2D Platformer or Action Game

Use: FSMs + Basic Behavior Trees

Why: You want enemy patterns that are learnable but varied enough to stay interesting.

Example: Hollow Knight enemies. Each bug has predictable patterns (FSM), but they react to your position and actions (simple behavior tree). Players learn the patterns, master them, feel skilled.

Pro tip: Make your jump timing inconsistent by ±0.2 seconds. Makes enemies feel organic instead of robotic.
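That pro tip is literally two lines of code. The base delay and jitter range here are invented; tune both to your game’s feel:

```python
import random

BASE_DELAY = 1.0   # seconds between enemy jump attempts (illustrative)

def next_jump_delay():
    """Jitter the timing so the enemy's rhythm never turns metronomic."""
    return BASE_DELAY + random.uniform(-0.2, 0.2)
```

Players can still learn the pattern, but they have to read the animation instead of counting beats. That’s the difference between “organic” and “robotic.”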

If You’re Making an Open-World or RPG

Use: Behavior Trees + Utility AI

Why: You have many AI characters with different roles, and they need to make contextual decisions based on their environment, needs, and the player’s actions.

Example: Skyrim NPCs have schedules (behavior trees) but also react dynamically to theft, aggression, reputation (utility AI scoring different responses).

Reality check: Even Skyrim’s “advanced” NPC AI is mostly smoke and mirrors. Guards have about 10 decision points total. It feels complex because the world is big and reactive.

If You’re Making a Competitive Multiplayer Game

Use: Whatever Lets You Balance It Easily

Why: Multiplayer AI (bots) needs to be tunable from “baby mode” to “decent practice partner,” and you’ll be tweaking constantly based on player feedback.

Example: Fortnite bots use scaled behavior trees. Lower difficulty = remove decision nodes, slow reaction time. Higher difficulty = add more nodes, faster reactions. Simple, tunable, balanced.

Critical lesson: Your bots should lose in ways that feel fair, not stupid. A bot that stands still is boring to fight. A bot that misses shots but moves well is good practice.

If You’re Making a Stealth Game

Use: Behavior Trees + GOAP

Why: Stealth relies on enemies feeling smart and reactive. They need to investigate disturbances, search methodically, and coordinate with other guards.

Example: Metal Gear Solid V guards use GOAP. They radio for backup, search last-known positions, and establish perimeters. It creates tension because you can’t predict every outcome.

Design wisdom: Give players tools to manipulate the AI. Distraction mechanics only work if the AI responds believably.

If You’re Making a Strategy or Simulation Game

Use: Utility AI + GOAP

Why: You need AI that makes long-term plans and weighs multiple factors (economy, military strength, diplomacy).

Example: Civilization AI is constantly scoring hundreds of possible actions: build this unit, research this tech, attack this neighbor, propose this trade. The highest score wins, but scores change every turn.

Warning: Strategy game AI is brutally hard to get right. Players will find and exploit any pattern. That’s why even after 30 years, Civilization AI still gets criticized.

If You’re Doing Something Experimental

Use: Whatever excites you, but start simple

Why: Innovation is great, but you need a working game first.

My approach: Build a basic version with FSMs. Get it playable. Then experiment with fancier AI. If the fancy AI fails, you still have a working game.

Real talk: I’ve seen indie devs spend six months building ML systems for enemies when an FSM would’ve worked fine. Ship your game first. Iterate second.

Chapter 6: The Secret Nobody Tells You About Game AI


Here’s the uncomfortable truth I learned after years of building “smart” AI:

Players don’t actually want perfect AI.

Let me explain with a story:

I once built an enemy for a roguelike that used perfect geometric calculations. It would:

  • Predict exactly where you’d be based on your movement speed
  • Lead its shots with mathematical precision
  • Never miss if you moved predictably

Unless you constantly moved erratically, it was unbeatable. Players hated it.

So I added randomness. The enemy now:

  • Predicts where you’ll be
  • Then aims 15 degrees off target (randomly left or right)
  • Shoots

Suddenly, players loved fighting it. They’d dodge, feel skilled, win sometimes, lose sometimes. The AI got dumber, and the game got better.
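The entire fix was a few lines. In Python it looks something like this; the 15-degree figure is from my game, and yours will need tuning:

```python
import random

def noisy_aim(perfect_angle_deg, error_deg=15.0):
    """Compute the mathematically perfect firing angle, then
    deliberately push it off target, randomly left or right."""
    return perfect_angle_deg + random.choice((-error_deg, error_deg))

noisy_aim(90.0)   # fires at 75 or 105 degrees, never dead-on 90
```

The prediction code stayed untouched. All the “fun” came from deliberately throwing away its answer.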

Why “Dumb” AI Creates Better Games

Dark Souls: The Choreographed Dance

Souls enemies don’t adapt to your strategy. They have fixed attack patterns. You learn the patterns, master the timing, beat the boss.

If Souls bosses used ML to adapt to your strategy, you’d never learn. Every attempt would be different. You’d never feel that amazing moment of mastery.

The AI is predictable on purpose. That’s not bad design—that’s brilliant design.

XCOM: The 85% That Feels Like 50%

Developers confirmed XCOM secretly boosts player hit chances because an actual 85% hit rate feels like you miss too often. Human brains are bad at probability.

The AI doesn’t just calculate odds—it fakes them to create the experience players expect.

Resident Evil: Why Zombies Shamble

Zombies in Resident Evil are slow and dumb. If they used GOAP and flanked you intelligently, the game would be a frustrating tactical shooter.

Instead, they shamble. They’re predictable. But there are many of them in tight spaces. The horror comes from resource management and panic, not AI intelligence.

The Real Goal: The Illusion of Intelligence

Players need to believe the AI is smart, even if it isn’t.

Tricks I use constantly:

1. Make AI “notice” the player before attacking
Bad: Enemy shoots the instant you’re in range
Good: Enemy sees you → pauses 0.5 seconds → yells → attacks

Players feel like they had a fair warning. The AI feels more “human.”

2. Bark callouts even when alone
“Reloading!” “Throwing grenade!” “Flanking left!”

Players hear this and think enemies are coordinating. Usually, it’s just randomized voice lines on timers.

3. Intentional mistakes
Perfect AI never misses. Realistic AI misses the first shot, then adjusts.

I literally add a “miss the first shot” modifier to enemies. Players think they got lucky. They feel skilled.

4. Visible decision-making
Bad: Enemy instantly picks best cover
Good: Enemy looks at two cover spots → “makes decision” → moves

Adding a 0.3-second delay and a head-turn animation makes AI feel thoughtful instead of robotic.

What Players Actually Want

Through years of player testing, I’ve learned people want AI that:

1. Telegraphs its intentions
Let players see the wind-up before the punch. It’s not dumbing down—it’s giving them information to make decisions.

2. Makes readable mistakes
An enemy that charges recklessly feels aggressive. An enemy that times every attack frame-perfectly feels like a robot.

3. Appears to learn (even if it doesn’t)
After dying to the same player tactic twice, have the AI use a counter-tactic. It doesn’t need ML—just trigger a different behavior tree branch after certain conditions.

4. Can be mastered
The joy of games is getting better. If the AI is random or constantly adapting, players can never master the game. They’ll quit.
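The “appears to learn” trick from point 3 needs no ML at all; a death counter is enough. This Python sketch flips a flag that a behavior tree branch can check, and the cause labels are invented:

```python
class TacticMemory:
    """Fake 'learning': after the player beats us the same way twice,
    unlock a counter branch in the behavior tree."""
    def __init__(self, threshold=2):
        self.threshold = threshold
        self.deaths_by = {}

    def record_death(self, cause):
        self.deaths_by[cause] = self.deaths_by.get(cause, 0) + 1

    def should_counter(self, cause):
        return self.deaths_by.get(cause, 0) >= self.threshold

memory = TacticMemory()
memory.record_death("grenade_spam")
memory.should_counter("grenade_spam")   # still False: play it straight
memory.record_death("grenade_spam")
memory.should_counter("grenade_spam")   # True: switch to the counter branch
```

Ten lines of bookkeeping, and players will swear the enemy studied them. Perception beats implementation every time.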

The Designer’s Dilemma

Here’s my biggest lesson:

Your job isn’t to make smart AI. It’s to make fun AI.

Sometimes that means smart AI (F.E.A.R.’s flanking soldiers). Sometimes that means dumb AI (Minecraft zombies). Usually it means AI that’s just smart enough to create the experience you want.

I’ve deleted hundreds of hours of “impressive” AI code because it made the game less fun. That hurt. But shipping a fun game mattered more than showing off my programming skills.

Chapter 7: Your Journey Starts Here

We’ve covered a lot of ground:

From Pac-Man’s ghosts following simple rules to neural networks training for weeks to master a game. From FSMs to behavior trees to utility AI to machine learning.

But here’s what matters most:

Game AI isn’t about making computer-controlled characters smart. It’s about making players feel something.

Fear. Tension. Triumph. Challenge. Flow.

The Tools Are Yours Now

When I started making games, behavior trees were proprietary tech only big studios could access. Now they’re built into free engines.

When I tried machine learning, I needed expensive hardware and weeks of experimentation. Now you can use Unity ML-Agents on a laptop.

The barrier to entry has never been lower. And that excites me.

What Happens Next

The next generation of game developers won’t just use existing AI systems—they’ll invent new ones we haven’t imagined yet.

Maybe someone figures out how to make ML enemies that are fun to fight, not just optimal.

Maybe someone creates a hybrid system that combines all these approaches seamlessly.

Maybe someone reads this article and builds something that makes me think, “Why didn’t I think of that?”

Parting Wisdom

Start simple. Master FSMs before touching neural networks.

Playtest constantly. Your AI might be technically impressive but frustrating to fight.

Embrace imperfection. Glitchy AI that creates memorable moments is better than perfect AI that’s boring.

Study the games you love. Every time an enemy surprises you, ask: “How did they program that?”

The Question I Want You to Answer

What’s the smartest enemy you’ve ever faced in a game?

Not “most difficult.” Not “highest stats.” But the enemy that made you think, “Wow, that was clever.”

Was it a Halo Elite coordinating with its squad? A F.E.A.R. soldier outmaneuvering you? A Dark Souls boss you finally learned to read? Or maybe something surprising from an indie game nobody else played?

Leave a comment. Tell me the story. I read every one, and I love learning what made different AI memorable to different people.

Resources to Go Deeper

Free Tools:

  • Unity ML-Agents Toolkit (train your own RL agents)
  • Behavior Designer for Unity (visual behavior tree editor)
  • Unreal Engine’s built-in Behavior Trees

GDC Talks Worth Watching:

  • “The AI of The Last of Us” (perfect for understanding utility AI)
  • “Building the AI of F.E.A.R. with Goal-Oriented Action Planning”
  • “Killing the Pigs: Angry Birds AI” (FSMs in practice)

Books That Changed How I Think:

  • Programming Game AI by Example by Mat Buckland
  • Behavioral Mathematics for Game AI by Dave Mark

About the Author

I’ve spent fifteen years building AI systems for indie, mid-size, and occasionally AAA game studios. I’ve shipped stealth games, roguelikes, strategy games, and a couple of projects I’m legally required not to talk about.

I’ve trained neural networks that discovered game-breaking bugs, built behavior trees that made players feel genuinely outsmarted, and written FSM code so spaghetti-tangled that I gave up and rewrote it from scratch.

I learned game AI the hard way—through hundreds of hours of broken code, player complaints, and NPCs doing hilariously wrong things. Now I write about it so you don’t have to make all the same mistakes.

When I’m not coding AI, I’m dying repeatedly to Souls bosses and taking notes on their attack patterns. For research. Obviously.


This article was last updated November 2025. Game AI evolves fast—if you’re reading this in the distant future of 2027, some of this might be charmingly outdated. Or we’re all serving our AI overlords. Either way.