
In lesson 3, the rule base was mostly fixed and explicit. In this lesson, we remove that comfort and ask a harder question: how does an algorithm find a path through many possible states when it does not know the answer in advance?

That is the core search problem in AI.

Before the formulas, two words matter. A state is one complete description of the situation the algorithm currently cares about, for example “vitals collected but diagnosis not yet confirmed.” The state space is the full set of such possible situations the algorithm might move through. A transition is one legal move from one state to another.

  • A search problem can be written precisely as states, actions, a start state, and a goal test.
  • Uninformed algorithms differ mainly by exploration order, not by domain knowledge.
  • DFS is memory-light but can miss short solutions; BFS is complete and shallow-optimal for uniform costs but memory-heavy.
  • Iterative deepening (IDS) combines BFS-style guarantees with DFS-style memory use.

Search in one mathematical frame

We write a search problem as

P = (S, s_0, A, Goal)

where:

  • S is the state space
  • s_0 is the initial state
  • A is the set of actions
  • Goal(s) is a test that returns true when state s is a solution

Read this tuple as a checklist before coding:

  • What states are allowed (S)?
  • Where do we start (s_0)?
  • What moves are legal (A)?
  • How do we detect success (Goal)?

If this notation is unfamiliar, review Function notation for AI and then continue.
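As a sketch, the tuple can be rendered directly in code. The clinical state names below are invented placeholders (not taken from the simulator), and `result` is a hypothetical transition model added for illustration.

```python
# A minimal rendering of P = (S, s0, A, Goal).
# All state and action names here are illustrative placeholders.
S = {"vitals_pending", "vitals_collected", "diagnosis_confirmed"}  # state space
s0 = "vitals_pending"                                              # initial state
A = {                                                              # legal actions per state
    "vitals_pending": ["collect_vitals"],
    "vitals_collected": ["confirm_diagnosis"],
}

def result(state, action):
    """Transition model: the state reached by applying action in state."""
    transitions = {
        ("vitals_pending", "collect_vitals"): "vitals_collected",
        ("vitals_collected", "confirm_diagnosis"): "diagnosis_confirmed",
    }
    return transitions[(state, action)]

def goal(state):
    """Goal test: true when the state is a solution."""
    return state == "diagnosis_confirmed"

# The checklist in action: start at s0, apply legal actions, test for success.
state = s0
plan = []
while not goal(state):
    action = A[state][0]
    plan.append(action)
    state = result(state, action)
print(plan)  # → ['collect_vitals', 'confirm_diagnosis']
```

Writing the four pieces out like this makes gaps obvious: if you cannot fill in `result` or `goal`, the problem is not yet well posed.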

A solution is a sequence of actions

\pi = (a_1, a_2, \dots, a_k)

that transforms s_0 into some goal state. In this expression, the index 1..k is temporal order in the plan: a_1 happens first, and a_k is the final action before reaching the goal.

Why search grows quickly

If each state has about b successors and we explore to depth d, the total number of generated nodes is roughly

N = 1 + b + b^2 + \dots + b^d

summarized as

N \in O(b^d)

In this equation, b is the average branching factor (choices per state), d is the depth explored, and N is the total number of generated states. The exponential dependence on d is the central reason uninformed search becomes expensive quickly.

If Big-O is still unclear, pause here for Big-O growth intuition for search.

This is why strategy matters: two algorithms can solve the same task but consume very different memory and time.
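The geometric sum above is easy to check numerically. This sketch assumes a uniform tree; the branching factor of 3 is an illustrative value, not a property of any particular domain.

```python
# Numeric check of N = 1 + b + b^2 + ... + b^d for a uniform tree,
# where b is the branching factor and d the depth explored.
def nodes_generated(b, d):
    return sum(b ** k for k in range(d + 1))

for d in (2, 4, 8):
    print(f"b=3, d={d}: {nodes_generated(3, d)} nodes")
```

Even at b = 3, doubling the depth from 4 to 8 multiplies the node count by roughly b^4, which is the exponential blow-up the O(b^d) summary captures.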

DFS, BFS, and IDS in plain language

Depth-first search (DFS)

DFS goes deep along one branch before backtracking.

  • time: O(b^m), where m is the maximum explored depth
  • space: O(bm)
  • complete: not guaranteed in infinite/cyclic spaces
  • optimal: no

Breadth-first search (BFS)

BFS explores level by level.

  • time: O(b^d), where d is the shallowest solution depth
  • space: O(b^d)
  • complete: yes (finite branching, finite goal depth)
  • optimal: yes for uniform action costs
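The same hypothetical toy graph with a BFS sketch: a FIFO queue of paths guarantees that shallower states are expanded before deeper ones.

```python
from collections import deque

# Toy state graph with illustrative state names.
successors = {
    "start": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["goal"],
    "A1": [], "A2": [], "goal": [],
}

def bfs(start, goal):
    """Return a shallowest path from start to goal via level-order expansion."""
    frontier = deque([[start]])  # queue of partial paths
    visited = {start}
    while frontier:
        path = frontier.popleft()
        state = path[-1]
        if state == goal:
            return path
        for nxt in successors.get(state, []):
            if nxt not in visited:
                visited.add(nxt)
                frontier.append(path + [nxt])
    return None

print(bfs("start", "goal"))  # → ['start', 'B', 'goal']
```

The frontier holds every path at the current level simultaneously, which is exactly where the O(b^d) memory cost comes from.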

Iterative deepening search (IDS)

IDS runs depth-limited DFS repeatedly with limits 1, 2, 3, … until a goal is found.

  • time: O(b^d)
  • space: O(bd)
  • complete: yes
  • optimal: yes for uniform costs

In practice, IDS is often the best uninformed baseline when memory is limited.
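An IDS sketch on the same hypothetical toy graph. It assumes an acyclic (tree-like) space, so the depth-limited pass needs no visited set; the depth cutoff itself bounds the work.

```python
# Toy state graph with illustrative state names (assumed acyclic).
successors = {
    "start": ["A", "B"],
    "A": ["A1", "A2"],
    "B": ["goal"],
    "A1": [], "A2": [], "goal": [],
}

def depth_limited(state, goal, limit):
    """DFS that refuses to expand below the given depth limit."""
    if state == goal:
        return [state]
    if limit == 0:
        return None
    for nxt in successors.get(state, []):
        path = depth_limited(nxt, goal, limit - 1)
        if path is not None:
            return [state] + path
    return None

def ids(start, goal, max_depth=20):
    """Run depth-limited DFS with increasing limits until a goal is found."""
    for limit in range(max_depth + 1):
        path = depth_limited(start, goal, limit)
        if path is not None:
            return path
    return None

print(ids("start", "goal"))  # → ['start', 'B', 'goal']
```

Re-expanding shallow nodes on every restart looks wasteful, but because the deepest level dominates the geometric sum, the total work remains O(b^d) while memory stays DFS-sized.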

Use the simulator to compare traversal behavior directly.

  • select a scenario (goal)
  • choose BFS, DFS, or IDS
  • click Step to inspect every expansion
  • click Run to watch full traversal

Walkthrough: Find Bacterial Meningitis Workup

Try this one full run first:

  1. Select Find: Bacterial Meningitis Workup.
  2. Choose BFS and click Run.
  3. Observe that BFS expands all shallow alternatives before committing deeper.
  4. Repeat with DFS and compare how quickly it dives into one branch.
  5. Repeat with IDS and watch depth limits restart while memory remains controlled.

What this shows: algorithm choice changes search behavior even when the tree and goal are identical.

Diagnostic State Space: Uninformed Search

Select a goal scenario. Compare BFS, DFS, and IDS with Step or Run.

Relation to earlier lessons

  • Lesson 2 already used structured search through AND/OR goals.
  • Lesson 3 used rule chains that can also be seen as search over derivations.
  • This lesson removes most domain structure and studies general traversal order (BFS/DFS/IDS) directly.

So the continuity is: same search theme, but progressively less built-in structure and more explicit algorithmic tradeoffs.

Notation quick reference

Symbol     Meaning                                Where explained
S          state space                            Search in one mathematical frame
s_0        initial state                          Search in one mathematical frame
A          action set                             Search in one mathematical frame
Goal(s)    goal test on state s                   Search in one mathematical frame
π          action sequence (candidate solution)   Search in one mathematical frame
b          branching factor                       Why search grows quickly
d          shallowest solution depth              DFS, BFS, and IDS in plain language
m          maximum explored depth                 DFS, BFS, and IDS in plain language
O(·)       growth-rate notation                   Why search grows quickly


What comes next

In lesson 5, we add heuristic information so search no longer treats every branch as equally promising. That is where A* enters and dramatically reduces expansions in many practical tasks.


References and Further Reading

  • Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed. Pearson, 2020. Chapter 3.
  • Winston, Patrick H. Artificial Intelligence, 3rd ed. Addison-Wesley, 1992. Chapter 4.
  • Korf, R.E. “Depth-First Iterative-Deepening: An Optimal Admissible Tree Search.” Artificial Intelligence 27, 1985.
  • Nilsson, Nils J. Principles of Artificial Intelligence. Tioga Publishing, 1980.

This is Lesson 4 of 18 in the AI Starter Course.