
Lesson 4 showed the limit of uninformed search: even simple branching structures explode quickly. In this lesson, we add guidance so the algorithm explores promising paths first instead of treating every branch as equally likely.

That is the central idea of heuristic search.

Two quick terms before we use them everywhere: the frontier is the set of nodes the algorithm has already discovered but not expanded yet, and an admissible heuristic is a heuristic that never overestimates the true remaining cost to the goal.

  • A heuristic h(n) is an estimate of the remaining cost from node n to the goal.
  • Greedy best-first uses only h(n) and can be fast but non-optimal.
  • Dijkstra (uniform-cost search) uses only g(n) and is optimal but often explores too much.
  • A* combines both signals through f(n) = g(n) + h(n) and is optimal with an admissible heuristic.

The A* frame in one line

The evaluation function for A* is

f(n) = g(n) + h(n)

where:

  • g(n) is the path cost from start to node n
  • h(n) is the estimated remaining cost from n to the goal
  • f(n) is the estimated total path cost through n

Read this expression operationally: A* computes g(n) and h(n) for each frontier node, sums them, and expands the node with the smallest f(n) next.

In the diagnostic graph below, n is one concrete clinical state such as CBC Panel, Blood Culture, or Lumbar Puncture. The value g(n) is the work already spent to get there from Intake Vitals, which you can think of as the tests, time, and cost already committed. The value h(n) is the heuristic number stored on the node itself: a compact estimate of how much diagnostic work still seems to remain before the system reaches Diagnosis Confirmed. So f(n) is the algorithm’s combined judgment of “cost already paid + cost still expected.” In the visualization, that means A* does not merely chase the lowest heuristic node; it prefers the frontier state with the best total balance.
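The selection rule above can be sketched as a short Python function. This is a minimal illustration, not the simulator's actual code: the node names echo the lesson's diagnostic graph, but the edge costs and heuristic values below are invented for the example.

```python
import heapq

def a_star(graph, h, start, goal):
    """Expand the frontier node with the smallest f(n) = g(n) + h(n)."""
    # Frontier entries: (f, g, node, path so far), ordered by f.
    frontier = [(h[start], 0, start, [start])]
    best_g = {start: 0}  # cheapest known g(n) for each discovered node
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node == goal:
            return path, g
        if g > best_g.get(node, float("inf")):
            continue  # stale entry: a cheaper route to this node was found later
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h[nbr], g2, nbr, path + [nbr]))
    return None, float("inf")

# Hypothetical costs and heuristic values; only the node names come from the lesson.
graph = {
    "Intake Vitals": [("CBC Panel", 2), ("Blood Culture", 4)],
    "CBC Panel": [("Diagnosis Confirmed", 5)],
    "Blood Culture": [("Diagnosis Confirmed", 1)],
}
h = {"Intake Vitals": 4, "CBC Panel": 4, "Blood Culture": 1, "Diagnosis Confirmed": 0}

path, cost = a_star(graph, h, "Intake Vitals", "Diagnosis Confirmed")
# path = ['Intake Vitals', 'Blood Culture', 'Diagnosis Confirmed'], cost = 5
```

Note how the route through Blood Culture wins even though CBC Panel is discovered first with a smaller g: its f = g + h total is worse, so A* never commits to it.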

If this notation style is new, use Function notation for AI first and then continue.

Why admissibility matters

To preserve optimality, we require

h(n) ≤ h*(n)

for every node n, where h*(n) is the true optimal remaining cost.

This condition is called admissibility. Intuition: the heuristic may be optimistic (underestimate), but must not overestimate.

In this inequality, h*(n) is not estimated by the algorithm; it is the true optimal remaining cost, used as a theoretical reference to define safe heuristics.
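On small graphs you can check admissibility directly: compute h*(n) exactly by running Dijkstra backwards from the goal, then verify h(n) ≤ h*(n) node by node. A sketch, using an invented toy graph:

```python
import heapq

def true_remaining_costs(graph, goal):
    """Compute h*(n) for every node: Dijkstra over the reversed graph from the goal."""
    reverse = {}
    for u, edges in graph.items():
        for v, c in edges:
            reverse.setdefault(v, []).append((u, c))
    dist = {goal: 0}
    pq = [(0, goal)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, c in reverse.get(u, []):
            if d + c < dist.get(v, float("inf")):
                dist[v] = d + c
                heapq.heappush(pq, (d + c, v))
    return dist

def is_admissible(h, graph, goal):
    """True iff h(n) <= h*(n) for every node that has a heuristic value."""
    h_star = true_remaining_costs(graph, goal)
    return all(h[n] <= h_star.get(n, float("inf")) for n in h)

# Hypothetical graph: h*(A) = 5, h*(B) = 5, h*(C) = 1, h*(G) = 0.
graph = {"A": [("B", 2), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)]}
good_h = {"A": 4, "B": 4, "C": 1, "G": 0}  # never overestimates
bad_h = {"A": 4, "B": 4, "C": 3, "G": 0}   # overestimates at C: 3 > h*(C) = 1

print(is_admissible(good_h, graph, "G"))  # True
print(is_admissible(bad_h, graph, "G"))   # False
```

In practice this check is only feasible on test instances; the point of a heuristic is precisely that h*(n) is too expensive to compute everywhere.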

If growth notation feels fuzzy, refresh with Big-O growth intuition for search, then return to this section.

Greedy vs Dijkstra vs A*

Greedy best-first

  • priority: smallest h(n)
  • strength: often reaches a goal quickly
  • weakness: not guaranteed optimal

Dijkstra (uniform-cost)

  • priority: smallest g(n)
  • strength: optimal for nonnegative costs
  • weakness: can explore many irrelevant nodes

A*

  • priority: smallest f(n) = g(n) + h(n)
  • strength: balances known cost and estimated remaining cost
  • guarantee: optimal with admissible heuristic
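The three algorithms share one loop and differ only in the priority key, which a generic best-first sketch makes concrete. The graph and heuristic values below are invented for illustration:

```python
import heapq

def best_first(graph, start, goal, priority):
    """Generic best-first search; the priority function alone defines the algorithm."""
    frontier = [(priority(0, start), 0, start)]
    best_g = {start: 0}
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if g > best_g.get(node, float("inf")):
            continue  # stale entry superseded by a cheaper route
        if node == goal:
            return g
        for nbr, cost in graph.get(node, []):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (priority(g2, nbr), g2, nbr))
    return float("inf")

# Hypothetical toy instance.
graph = {"A": [("B", 2), ("C", 4)], "B": [("G", 5)], "C": [("G", 1)]}
h = {"A": 4, "B": 4, "C": 1, "G": 0}

greedy   = lambda g, n: h[n]      # smallest h(n)
dijkstra = lambda g, n: g         # smallest g(n)
a_star   = lambda g, n: g + h[n]  # smallest f(n) = g(n) + h(n)

cost_astar = best_first(graph, "A", "G", a_star)
cost_dijk = best_first(graph, "A", "G", dijkstra)
# The two optimal policies agree on the cost; greedy carries no such guarantee.
```

Seeing the three as one loop with different keys is the cleanest way to remember the table above.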

Explore the diagnostic graph

Use the simulator to compare algorithm behavior on the same clinical pathway.

  • Keep start at Intake Vitals and goal at Diagnosis Confirmed.
  • Run A* once to see balanced expansion.
  • Run Greedy to see faster but less cost-aware behavior.
  • Run Dijkstra to see fully cost-driven exploration.

Walkthrough: A* on the triage graph

Try this sequence once:

  1. Select A* and click Run.
  2. Watch open-set priorities update using f = g + h.
  3. Observe how nodes with small h are not chosen blindly if their g is already high.
  4. Compare with Greedy and note where it commits early based only on h.
  5. Compare with Dijkstra and note broader exploration before convergence.

What this means in human terms: A* behaves like a clinician balancing current effort already spent and estimated effort remaining, not just chasing the apparently closest state.

Diagnostic Pathway Graph: A* Search

[Interactive simulator] Compare A*, Greedy, and Dijkstra by stepping through open-set decisions and path costs. The panel shows the open set (sorted by f = g + h), the closed set, and per-node details when you click a node.

Why heuristic quality dominates performance

Algorithm choice matters, but heuristic quality often matters more. A weak admissible heuristic behaves close to Dijkstra, while a tight admissible heuristic sharply reduces explored nodes.

So practical A* work is usually: design a heuristic that stays admissible while becoming as informative as possible.
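You can see this effect on a toy grid. The sketch below runs the same A* loop twice: once with the Manhattan-distance heuristic (tight and admissible on a unit-cost grid) and once with h = 0, which makes A* behave exactly like Dijkstra. The 10x10 grid and the goal position are invented for the experiment:

```python
import heapq

def a_star_expansions(start, goal, neighbors, h):
    """Run A* and count node expansions; h = (lambda n: 0) reduces it to Dijkstra."""
    frontier = [(h(start), 0, start)]
    best_g = {start: 0}
    closed = set()
    expanded = 0
    while frontier:
        _, g, node = heapq.heappop(frontier)
        if node in closed:
            continue  # already expanded via a cheaper route
        closed.add(node)
        expanded += 1
        if node == goal:
            return g, expanded
        for nbr, cost in neighbors(node):
            g2 = g + cost
            if g2 < best_g.get(nbr, float("inf")):
                best_g[nbr] = g2
                heapq.heappush(frontier, (g2 + h(nbr), g2, nbr))
    return float("inf"), expanded

# A 10x10 open grid with unit step costs.
N = 10
def neighbors(p):
    x, y = p
    for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        if 0 <= x + dx < N and 0 <= y + dy < N:
            yield (x + dx, y + dy), 1

goal = (N - 1, 0)
manhattan = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])

cost_tight, exp_tight = a_star_expansions((0, 0), goal, neighbors, manhattan)
cost_zero, exp_zero = a_star_expansions((0, 0), goal, neighbors, lambda p: 0)
# Both runs return the same optimal cost, but the tight heuristic
# expands far fewer nodes than the zero (Dijkstra-like) heuristic.
```

Same algorithm, same optimal answer; the only variable is how informative the admissible heuristic is, and that alone drives the node count.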

Relation to earlier lessons

  • Lesson 2: search structure and traversal order
  • Lesson 3: explicit rule-based reasoning
  • Lesson 4: uninformed exploration limits
  • Lesson 5 (this lesson): informed exploration with cost estimates

Concrete bridge: lesson 4 asked “how do we search?”; this lesson asks “how do we search with guidance?”

This keeps the same course pattern: define representation, pick traversal policy, evaluate computational tradeoffs.

Notation quick reference

Symbol   Meaning                                Introduced in
g(n)     cost from start to node n              The A* frame in one line
h(n)     estimated remaining cost to goal       The A* frame in one line
f(n)     estimated total path cost through n    The A* frame in one line
h*(n)    true optimal remaining cost            Why admissibility matters
≤        less than or equal to                  Why admissibility matters

What comes next

In lesson 6, we switch from pathfinding to local optimization methods where the quality of the current state matters more than the exact path, introducing hill climbing and simulated annealing.

Next: Lesson 6, Local Search and Optimization


References and Further Reading

  • Hart, P.E., Nilsson, N.J., and Raphael, B. “A Formal Basis for the Heuristic Determination of Minimum Cost Paths.” IEEE Transactions on Systems Science and Cybernetics 4(2), 1968.
  • Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed. Pearson, 2020. Chapter 3.
  • Pearl, J. Heuristics: Intelligent Search Strategies for Computer Problem Solving. Addison-Wesley, 1984.
  • Korf, R.E. “Linear-Space Best-First Search.” Artificial Intelligence 62(1), 1993.

This is Lesson 5 of 18 in the AI Starter Course.