In lesson 3, the rule base was mostly fixed and explicit. In this lesson, we remove that comfort and ask a harder question: how does an algorithm find a path through many possible states when it does not know the answer in advance?
That is the core search problem in AI.
Before the formulas, two words matter. A state is one complete description of the situation the algorithm currently cares about, for example “vitals collected but diagnosis not yet confirmed.” The state space is the full set of such possible situations the algorithm might move through. A transition is one legal move from one state to another.
Core learnings about uninformed search
- A search problem can be written precisely as states, actions, a start state, and a goal test.
- Uninformed algorithms differ mainly by exploration order, not by domain knowledge.
- DFS is memory-light but can miss short solutions; BFS is complete and shallow-optimal for uniform costs but memory-heavy.
- Iterative deepening (IDS) combines BFS-style guarantees with DFS-style memory use.
Search in one mathematical frame
We write a search problem as a tuple

(S, s0, A, G)

where:
- S is the state space
- s0 is the initial state
- A is the set of actions
- G(s) is a test that returns true when state s is a solution
Read this tuple as a checklist before coding:
- What states are allowed (S)?
- Where do we start (s0)?
- What moves are legal (A)?
- How do we detect success (G)?
If this notation is unfamiliar, pause for Function notation for AI, then continue.
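The tuple above can be sketched directly in code. This is a minimal illustration, not the simulator's internal representation; the state names below are a made-up two-step clinical example:

```python
from dataclasses import dataclass
from typing import Callable, Dict, List

@dataclass
class SearchProblem:
    """A search problem as the tuple (S, s0, A, G)."""
    states: List[str]                # S: the state space
    start: str                       # s0: the initial state
    actions: Dict[str, List[str]]    # A: legal moves, state -> successor states
    is_goal: Callable[[str], bool]   # G: goal test on a state

# Illustrative toy instance: collect vitals, then confirm a diagnosis.
problem = SearchProblem(
    states=["start", "vitals collected", "diagnosis confirmed"],
    start="start",
    actions={
        "start": ["vitals collected"],
        "vitals collected": ["diagnosis confirmed"],
        "diagnosis confirmed": [],
    },
    is_goal=lambda s: s == "diagnosis confirmed",
)
```

Writing the four parts out like this is the checklist in executable form: if you cannot fill in one of the fields, the problem is not yet fully specified.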
A solution is a sequence of actions

a1, a2, ..., an

that transforms s0 into some goal state. The index i is temporal order in the plan: a1 happens first, and an is the final action before reaching the goal.
Why search grows quickly
If each state has about b successors and we explore to depth d, total generated nodes are roughly

N ≈ b + b^2 + ... + b^d

and summarized as

N = O(b^d)

In these equations, b is the average branching factor (choices per state), d is the depth explored, and N is the total number of generated states. The exponential dependence on d is the central reason uninformed search becomes expensive quickly.
If Big-O is still unclear, pause here for Big-O growth intuition for search.
This is why strategy matters: two algorithms can solve the same task but consume very different memory and time.
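The growth is easy to feel with a few lines of code. The branching factor b = 3 below is an arbitrary choice for illustration:

```python
def generated_nodes(b: int, d: int) -> int:
    """Total nodes generated exploring to depth d: b + b^2 + ... + b^d."""
    return sum(b ** k for k in range(1, d + 1))

for d in (2, 4, 6, 8):
    print(d, generated_nodes(3, d))
# prints 12, 120, 1092, 9840 for d = 2, 4, 6, 8
```

Doubling the depth from 4 to 8 multiplies the work by roughly 80x here, which is the O(b^d) blow-up in miniature.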
DFS, BFS, and IDS in plain language
Depth-first search (DFS)
DFS goes deep along one branch before backtracking.
- time: O(b^m), where m is the maximum explored depth
- space: O(bm), since only the current branch is stored
- complete: not guaranteed in infinite/cyclic spaces
- optimal: no
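A minimal iterative DFS over an adjacency-list graph makes the behavior concrete. The graph literal is a made-up example, not from the simulator:

```python
def dfs(graph, start, goal):
    """Depth-first search; returns one path to goal, or None.

    Dives deep along one branch before backtracking, so the path
    it finds is not guaranteed to be the shortest.
    """
    stack = [(start, [start])]       # LIFO frontier: last pushed, first expanded
    visited = set()
    while stack:
        state, path = stack.pop()
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        # reversed() so the first-listed successor is expanded first
        for succ in reversed(graph.get(state, [])):
            stack.append((succ, path + [succ]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
# dfs(graph, "A", "D") → ["A", "B", "D"]
```

The only real design choice is the LIFO stack; swapping it for a FIFO queue turns the same code into BFS.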
Breadth-first search (BFS)
BFS explores level by level.
- time: O(b^d), where d is the shallowest solution depth
- space: O(b^d)
- complete: yes (finite branching, finite goal depth)
- optimal: yes for uniform action costs
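The level-by-level behavior comes from a FIFO frontier. A sketch over the same kind of made-up adjacency-list graph:

```python
from collections import deque

def bfs(graph, start, goal):
    """Breadth-first search; returns a shallowest path to goal, or None."""
    frontier = deque([(start, [start])])
    visited = {start}
    while frontier:
        state, path = frontier.popleft()   # FIFO: expand level by level
        if state == goal:
            return path
        for succ in graph.get(state, []):
            if succ not in visited:
                visited.add(succ)
                frontier.append((succ, path + [succ]))
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["E"], "D": [], "E": ["D"]}
# bfs(graph, "A", "D") → ["A", "B", "D"], the shallowest path to D
```

The memory cost is visible in the code: the frontier can hold an entire level of the tree at once, which is where the O(b^d) space bound comes from.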
Iterative deepening search (IDS)
IDS runs depth-limited DFS repeatedly with limits 1, 2, 3, … until a goal is found.
- time: O(b^d)
- space: O(bd)
- complete: yes
- optimal: yes for uniform costs
In practice, IDS is often the best uninformed baseline when memory is limited.
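IDS is a short wrapper around a depth-limited DFS. A sketch, again on a made-up graph (the max_depth cap of 50 is an arbitrary safety limit):

```python
def depth_limited_dfs(graph, state, goal, limit, path):
    """DFS that refuses to go deeper than `limit` edges."""
    if state == goal:
        return path
    if limit == 0:
        return None
    for succ in graph.get(state, []):
        if succ not in path:   # avoid cycles along the current path
            found = depth_limited_dfs(graph, succ, goal, limit - 1, path + [succ])
            if found is not None:
                return found
    return None

def ids(graph, start, goal, max_depth=50):
    """Iterative deepening: depth-limited DFS with limits 1, 2, 3, ..."""
    for limit in range(1, max_depth + 1):
        found = depth_limited_dfs(graph, start, goal, limit, [start])
        if found is not None:
            return found
    return None

graph = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": []}
# ids(graph, "A", "D") → ["A", "B", "D"]
```

Re-running the shallow levels looks wasteful, but because the deepest level dominates an exponential sum, the repeated work only adds a constant factor, which is Korf's key observation.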
Explore uninformed search
Use the simulator to compare traversal behavior directly.
- select a scenario (goal)
- choose BFS, DFS, or IDS
- click Step to inspect every expansion
- click Run to watch full traversal
Walkthrough: Find Bacterial Meningitis Workup
Try this one full run first:
- Select Find: Bacterial Meningitis Workup.
- Choose BFS and click Run.
- Observe that BFS expands all shallow alternatives before committing deeper.
- Repeat with DFS and compare how quickly it dives into one branch.
- Repeat with IDS and watch depth limits restart while memory remains controlled.
What this shows: algorithm choice changes search behavior even when the tree and goal are identical.
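The same contrast can be reproduced outside the simulator by recording expansion order on one small tree (a made-up tree, not the Meningitis scenario), varying only the frontier discipline:

```python
from collections import deque

tree = {"root": ["a", "b"], "a": ["a1", "a2"], "b": ["b1"],
        "a1": [], "a2": [], "b1": []}

def expansion_order(tree, start, use_fifo):
    """Return the order in which nodes are expanded.

    A FIFO frontier gives BFS (level by level); a LIFO frontier
    gives DFS (dives down one branch before the other).
    """
    frontier = deque([start])
    order = []
    while frontier:
        node = frontier.popleft() if use_fifo else frontier.pop()
        order.append(node)
        for child in tree[node]:
            frontier.append(child)
    return order

# BFS order: root, a, b, a1, a2, b1   (all of level 1 before level 2)
# DFS order: root, b, b1, a, a2, a1   (finishes one subtree before the next)
```

One data structure choice, two different traversals of an identical tree: that is the whole lesson of the walkthrough in six lines.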
Relation to earlier lessons
- Lesson 2 already used structured search through AND/OR goals.
- Lesson 3 used rule chains that can also be seen as search over derivations.
- This lesson removes most domain structure and studies general traversal order (BFS/DFS/IDS) directly.
So the continuity is: same search theme, but progressively less built-in structure and more explicit algorithmic tradeoffs.
Notation quick reference
| Symbol | Meaning | Where introduced |
|---|---|---|
| S | state space | Search in one mathematical frame |
| s0 | initial state | Search in one mathematical frame |
| A | action set | Search in one mathematical frame |
| G(s) | goal test on state s | Search in one mathematical frame |
| a1, ..., an | action sequence (candidate solution) | Search in one mathematical frame |
| b | branching factor | Why search grows quickly |
| d | shallowest solution depth | DFS, BFS, and IDS in plain language |
| m | maximum explored depth | DFS, BFS, and IDS in plain language |
| O(·) | growth-rate notation | Why search grows quickly |
What comes next
In lesson 5, we add heuristic information so search no longer treats every branch as equally promising. That is where A* enters and dramatically reduces expansions in many practical tasks.
References and Further Reading
- Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed. Pearson, 2020. Chapter 3.
- Winston, Patrick H. Artificial Intelligence, 3rd ed. Addison-Wesley, 1992. Chapter 4.
- Korf, R.E. “Depth-First Iterative-Deepening: An Optimal Admissible Tree Search.” Artificial Intelligence 27, 1985.
- Nilsson, Nils J. Principles of Artificial Intelligence. Tioga Publishing, 1980.
This is Lesson 4 of 18 in the AI Starter Course.