
In lesson 1, we described AI as a mapping from inputs to outputs. That view is still useful here, but now a more practical question appears: how much work does it take to reach the output at all? Before a machine can learn from data, it often has to reason over structure first. Goal trees are one of the clearest ways to make that structure visible.

If lesson 1 gave you the broad frame, this lesson opens up the inside of one reasoning process. You will see how one difficult decision can be split into smaller checks, why some branches require everything to be true, and why others only need one successful route.

Core learnings about goal trees and machine reasoning

  • How can one hard reasoning problem be split into smaller sub-decisions?
  • What is the practical difference between an AND node and an OR node?
  • Why do backward chaining and forward chaining feel different even on the same tree?
  • How does a tree help you estimate reasoning cost instead of only correctness?

Why decomposition matters

Lesson 1 described AI as a mapping from inputs to outputs. In this lesson, we keep that same view but add a new question: how much work does it take to compute that mapping?

If each reasoning step has b alternatives and the search goes down to depth d, then the number of explored nodes is roughly

N = 1 + b + b^2 + \dots + b^d

Here:

  • N is the total number of nodes we examine
  • b is the branching factor, meaning the average number of children per node
  • d is the depth, meaning how many reasoning layers we explore
  • \dots means the same pattern continues

For large depth, the last term dominates, so we summarize the growth as

N \in O(b^d)

If Big-O notation is still fuzzy, pause here and open Big-O growth intuition for search.

The symbols in that line also deserve explanation:

  • \in means “is in” or “belongs to”
  • O(\cdot) is Big-O notation, a way to describe growth rate
  • b^d means b multiplied by itself d times

This is the practical reason decomposition matters. Without structure, the amount of work can grow extremely fast.
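To make that growth concrete, here is a minimal sketch of the sum above (the function name is our own, not part of the lesson):

```python
def nodes_explored(b: int, d: int) -> int:
    """Total nodes examined: N = 1 + b + b^2 + ... + b^d."""
    return sum(b ** level for level in range(d + 1))

# Even a small branching factor explodes with depth:
print(nodes_explored(3, 2))   # 1 + 3 + 9 = 13
print(nodes_explored(3, 4))   # 121
print(nodes_explored(3, 8))   # 9841
```

Doubling the depth from 4 to 8 multiplied the work by roughly eighty, which is exactly the O(b^d) behavior the formula predicts.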

The goal tree as a mathematical object

To make the idea precise, we describe a goal tree as

T = (V, E, r, type)

You can read this as “a tree is made of these four parts”:

  • V: the set of nodes
  • E: the set of edges from parent to child
  • r: the root node, meaning the top-level goal
  • type: a function that tells us what kind of node each node is

That node-type function is written as

type(v) \in \{AND, OR, LEAF\}

The extra symbols here mean:

  • v is one specific node
  • type(v) means “the type of node v”
  • \{ and \} mark a set of allowed values
  • AND means all required child goals must hold
  • OR means at least one child goal must hold
  • LEAF means we check the fact directly
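One way to internalize these definitions is to encode them directly. The sketch below is a hypothetical encoding (the class and field names are ours, not the lesson's): the tuple T = (V, E, r, type) collapses into a single Node class, where the edges E are implicit in each node's children list and the type function becomes a field.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Node:
    name: str                 # the node's label in V
    kind: str                 # "AND", "OR", or "LEAF" — the type(v) function
    children: List["Node"] = field(default_factory=list)  # edges in E

# A tiny tree: the root r is an OR node over an AND branch and a leaf.
root = Node("Plan", "OR", [
    Node("A", "AND", [
        Node("InfectionConfirmed", "LEAF"),
        Node("NoAllergy", "LEAF"),
    ]),
    Node("SymptomRelief", "LEAF"),
])
```

Notice that nothing here evaluates anything yet; the structure and the evaluation rule are deliberately separate, which is exactly why the next section can define evaluation on its own.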

How the tree is evaluated

Now we need a rule for deciding whether a node is true or false.

Let S be the set of known facts, and let value(v, S) mean the truth value of node v under the known facts S.

For a leaf node, we write:

value(v, S) = true,\ \text{if } v \in S

value(v, S) = false,\ \text{otherwise}

If set notation symbols like \in and function-style notation are still unfamiliar, use Function notation primer.

For an AND node with children c_1, c_2, \dots, c_k, we write:

value(v, S) = value(c_1, S) \land value(c_2, S) \land \dots \land value(c_k, S)

For an OR node, we write:

value(v, S) = value(c_1, S) \lor value(c_2, S) \lor \dots \lor value(c_k, S)

If complexity growth and tree size are the confusing part here, cross-check with Big-O growth intuition for search.

The new symbols here are important too:

  • \land means logical “and”
  • \lor means logical “or”
  • the subscript in c_1 or c_k labels the first child, second child, and so on
  • k is the total number of children for the current node

This is the exact logic the interactive visualization uses.
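The three cases translate almost line for line into a recursive evaluator. The sketch below uses our own hypothetical Node class, not the visualization's actual code, but it implements the same AND/OR/LEAF semantics:

```python
from dataclasses import dataclass, field
from typing import List, Set

@dataclass
class Node:
    name: str
    kind: str                      # "AND", "OR", or "LEAF"
    children: List["Node"] = field(default_factory=list)

def value(v: Node, S: Set[str]) -> bool:
    """Truth value of node v under the set of known facts S."""
    if v.kind == "LEAF":
        return v.name in S                 # true iff v is a known fact
    results = (value(c, S) for c in v.children)
    return all(results) if v.kind == "AND" else any(results)

# Example: an AND goal over two leaf facts.
goal = Node("M", "AND", [Node("HighFever", "LEAF"),
                         Node("StiffNeck", "LEAF")])
print(value(goal, {"HighFever", "StiffNeck"}))   # True
print(value(goal, {"HighFever"}))                # False
```

Because all and any short-circuit, the evaluator stops as soon as an AND child fails or an OR child succeeds, which is also why branch order can change how much work a real system does.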

Backward and forward chaining

Once we have a tree, we still need a strategy for walking through it.

Backward chaining starts from the top goal and asks: what would I need to prove for this to be true?

Forward chaining starts from known facts and asks: what follows from what I already know?

If the tree has n = |V| nodes, then one full pass is O(n).

That vertical-bar notation also needs a quick explanation:

  • |V| means the size of the set V, so here it means the number of nodes in the tree

In practice, the better strategy depends on workload:

  • many queries and few updates: forward chaining is often better, because the consequences of the facts can be derived once and then reused across queries
  • few queries and many updates: backward chaining is often better, because it works lazily and only does the proof work a specific query actually needs
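Both strategies can be sketched over the same rule set. The flat encoding below is our own illustration (the Rules shape and function names are assumptions, not the lesson's code): backward chaining recurses from the goal down to facts, while forward chaining saturates the fact set from the bottom up.

```python
from typing import Dict, List, Set, Tuple

# Each rule maps a goal to its node kind and its required children.
Rules = Dict[str, Tuple[str, List[str]]]

def backward(goal: str, rules: Rules, facts: Set[str]) -> bool:
    """Goal-driven: ask what must be proven for this goal to hold."""
    if goal not in rules:                   # leaf: check the fact directly
        return goal in facts
    kind, children = rules[goal]
    combine = all if kind == "AND" else any
    return combine(backward(c, rules, facts) for c in children)

def forward(rules: Rules, facts: Set[str]) -> Set[str]:
    """Data-driven: derive every goal that follows from known facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for goal, (kind, children) in rules.items():
            combine = all if kind == "AND" else any
            if goal not in derived and combine(c in derived for c in children):
                derived.add(goal)
                changed = True
    return derived

rules: Rules = {
    "A": ("AND", ["InfectionConfirmed", "NoAllergy"]),
    "Plan": ("OR", ["A", "SymptomRelief"]),
}
facts = {"InfectionConfirmed", "NoAllergy"}
print(backward("Plan", rules, facts))    # True
print("Plan" in forward(rules, facts))   # True
```

The workload tradeoff is visible in the code: forward computes the whole derived set once, so repeated queries become set lookups, while backward touches only the branch a single query needs.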

The triage example in action

Take our triage example. The tree makes the abstract rules concrete.

  • the root goal is to choose a care plan
  • one branch asks whether antibiotics are justified
  • the other branch asks whether simple symptom relief is appropriate

What you are looking at is therefore not just a diagram. It is a compact decision process.

  • If the antibiotics branch is satisfied, the system can recommend antibiotics.
  • If that branch fails but the symptom-relief branch succeeds, the system can still return a valid supportive-care plan.
  • If both branches fail, the tree cannot prove a treatment plan from the known facts.

We can write part of that logic as:

M = HighFever \land StiffNeck

A = InfectionConfirmed \land NoAllergy

Plan = A \lor SymptomRelief

Even comparison symbols inside the medical facts should be read carefully. For example, the label > 38.5^\circ C means “greater than 38.5 degrees Celsius.” That simple symbol > is already a decision rule.
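Read together, the formulas and the threshold rule fit in one small function. This is only a sketch of the lesson's triage logic (the function and parameter names are ours); it returns all three truth values so each formula stays visible:

```python
def triage(temp_c: float, stiff_neck: bool, infection_confirmed: bool,
           no_allergy: bool, symptom_relief: bool):
    high_fever = temp_c > 38.5              # the ">" comparison as a rule
    M = high_fever and stiff_neck           # M = HighFever ∧ StiffNeck
    A = infection_confirmed and no_allergy  # A = InfectionConfirmed ∧ NoAllergy
    plan = A or symptom_relief              # Plan = A ∨ SymptomRelief
    return M, A, plan

print(triage(39.2, True, True, True, False))    # (True, True, True)
print(triage(37.0, False, False, False, True))  # (False, False, True)
```

The second call is the supportive-care case from the list above: the antibiotics branch fails, but the plan still succeeds through the symptom-relief leaf.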

Explore the goal tree

The visualization below is the same logic written in a more intuitive form. Instead of only reading formulas, you can inspect the nodes, switch between chaining strategies, and watch truth values move through the tree.

The three preset scenarios are designed to answer three different questions:

  • Clinical signs route: can symptoms alone justify the antibiotics branch?
  • Test-confirmed route: can a single strong test result justify the same branch more directly?
  • Supportive care route: what happens when the antibiotics branch fails, but symptom relief is still valid?

So the result of the visualization is not just “true” or “false.” The more useful result is understanding why one branch succeeds and another fails.

Walkthrough: Clinical signs route

Use the Clinical signs route preset once from start to finish:

  1. Keep mode on Backward and click Run.
  2. Watch the root node (Choose Care Plan) ask whether one major branch can be proven.
  3. The tree then checks Start Antibiotics, which requires both infection evidence and no allergy.
  4. Under infection evidence, the Classic Signs path is satisfied because high fever and stiff neck are both present.
  5. Because No Allergy is also present, the antibiotics branch becomes true, and the root plan is proven.

What this means: the algorithm did not guess. It proved the recommendation through AND/OR semantics step by step.


Once you use the tree for a minute or two, the equations above usually become much easier to understand. That is exactly the goal: the math and the interaction should support each other.

This is the first time the course really opens up the inside of the mapping from lesson 1. Instead of only saying that an AI system turns input into output, we now look at one possible internal structure that can perform that transformation step by step.

What comes next

In lesson 3, the tree structure stays in the background, but we add explicit rules and uncertainty on top of it. That is where symbolic reasoning starts to feel closer to real expert systems.

Notation quick reference

| Symbol | Meaning | Where it is explained |
| --- | --- | --- |
| b | branching factor | Why decomposition matters |
| d | search depth | Why decomposition matters |
| O(b^d) | exponential growth summary | Why decomposition matters |
| V (node set) | all nodes in the tree | The goal tree as a mathematical object |
| E (edge set) | all parent-child links | The goal tree as a mathematical object |
| r | root node | The goal tree as a mathematical object |
| type(v) | type of node v | The goal tree as a mathematical object |
| \in | belongs to | Why decomposition matters |
| \land | logical and | How the tree is evaluated |
| \lor | logical or | How the tree is evaluated |
| \|V\| | size of the set V | Backward and forward chaining |
| > | greater than | The triage example in action |

For standalone math deep dives, see the pages referenced above:

  • Big-O growth intuition for search
  • Function notation primer

Why this still matters in modern AI

Even modern AI agents still rely on some version of this decomposition idea. A model may generate candidate steps, but those steps still have to be checked, combined, and satisfied in a structured way.

That is why goal trees are not just historical background. They are part of the logic that still sits underneath modern reasoning systems.


References and Further Reading

  • Winston, Patrick H. Artificial Intelligence, 3rd ed. Addison-Wesley, 1992. Chapter 4.
  • Nilsson, Nils J. Principles of Artificial Intelligence. Tioga Publishing, 1980.
  • Shortliffe, E.H. Computer-Based Medical Consultations: MYCIN. Elsevier, 1976.
  • Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed. Pearson, 2020. Chapter 3.

This is Lesson 2 of 18 in the AI Starter Course.