In lesson 1, we described AI as a mapping from inputs to outputs. That view is still useful here, but now a more practical question appears: how much work does it take to reach the output at all? Before a machine can learn from data, it often has to reason over structure first. Goal trees are one of the clearest ways to make that structure visible.
If lesson 1 gave you the broad frame, this lesson opens up the inside of one reasoning process. You will see how one difficult decision can be split into smaller checks, why some branches require everything to be true, and why others only need one successful route.
## Core learnings about goal trees and machine reasoning
- How can one hard reasoning problem be split into smaller sub-decisions?
- What is the practical difference between an AND node and an OR node?
- Why do backward chaining and forward chaining feel different even on the same tree?
- How does a tree help you estimate reasoning cost instead of only correctness?
## Why decomposition matters
In lesson 1, we described AI as a mapping from inputs to outputs. In this lesson, we keep that same view, but add a new question: how much work does it take to compute that mapping?
If each reasoning step has $b$ alternatives and the search goes down to depth $d$, then the number of explored nodes is roughly

$$N \approx 1 + b + b^2 + \dots + b^d$$

Here:

- $N$ is the total number of nodes we examine
- $b$ is the branching factor, meaning the average number of children per node
- $d$ is the depth, meaning how many reasoning layers we explore
- $\dots$ means the same pattern continues

For large depth, the last term dominates, so we summarize the growth as

$$N \in O(b^d)$$
If Big-O notation is still fuzzy, pause here and open Big-O growth intuition for search.
The symbols in that line also deserve explanation:

- $\in$ means "is in" or "belongs to"
- $O(\cdot)$ is Big-O notation, a way to describe growth rate
- $b^d$ means $b$ multiplied by itself $d$ times
This is the practical reason decomposition matters. Without structure, the amount of work can grow extremely fast.
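To make that growth concrete, here is a minimal Python sketch (the function name is my own) that sums the series $1 + b + b^2 + \dots + b^d$:

```python
# Count the nodes explored in a uniform tree with branching factor b
# and depth d: N = 1 + b + b^2 + ... + b^d.
def nodes_explored(b: int, d: int) -> int:
    return sum(b ** i for i in range(d + 1))

# Even a modest branching factor explodes with depth.
for depth in (2, 5, 10):
    print(depth, nodes_explored(3, depth))
```

With $b = 3$, depth 2 explores only 13 nodes, but depth 10 already explores 88,573.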
## The goal tree as a mathematical object
To make the idea precise, we describe a goal tree as

$$T = (V, E, r, \tau)$$

You can read this as "a tree is made of these four parts":

- $V$: the set of nodes
- $E$: the set of edges from parent to child
- $r$: the root node, meaning the top-level goal
- $\tau$: a function that tells us what kind of node each node is
That node-type function is written as

$$\tau(v) \in \{\mathrm{AND}, \mathrm{OR}, \mathrm{LEAF}\}$$

The extra symbols here mean:

- $v$ is one specific node
- $\tau(v)$ means "the type of node $v$"
- $\{$ and $\}$ mark a set of allowed values
- $\mathrm{AND}$ means all required child goals must hold
- $\mathrm{OR}$ means at least one child goal must hold
- $\mathrm{LEAF}$ means we check the fact directly
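As one possible sketch of the tuple $(V, E, r, \tau)$ in code (the class and field names are assumptions, not part of the lesson), each node can carry its own type and children, which makes the edge set implicit:

```python
from dataclasses import dataclass, field

# Each Node stores tau(v) directly in `kind`; the parent->child edges E
# are the `children` lists, and the root r is the node you start from.
@dataclass
class Node:
    name: str
    kind: str                      # "AND", "OR", or "LEAF"
    children: list = field(default_factory=list)

root = Node("Choose Care Plan", "OR", [
    Node("Start Antibiotics", "AND", [
        Node("Infection Evidence", "LEAF"),
        Node("No Allergy", "LEAF"),
    ]),
    Node("Symptom Relief", "LEAF"),
])
print(root.kind, len(root.children))
```

Storing the type on the node itself is a design convenience; a separate lookup table for $\tau$ would match the mathematical definition more literally.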
## How the tree is evaluated
Now we need a rule for deciding whether a node is true or false.
Let $F$ be the set of known facts, and let $\mathrm{eval}(v, F)$ mean the truth value of node $v$ under the known facts $F$.

For a leaf node, we write:

$$\mathrm{eval}(v, F) = \mathrm{true} \iff \mathrm{fact}(v) \in F$$

If set notation symbols like $\in$ and function-style notation like $\mathrm{eval}(v, F)$ are still unfamiliar, use Function notation primer.

For an AND node with children $c_1, c_2, \dots, c_k$, we write:

$$\mathrm{eval}(v, F) = \mathrm{eval}(c_1, F) \land \mathrm{eval}(c_2, F) \land \dots \land \mathrm{eval}(c_k, F)$$

For an OR node, we write:

$$\mathrm{eval}(v, F) = \mathrm{eval}(c_1, F) \lor \mathrm{eval}(c_2, F) \lor \dots \lor \mathrm{eval}(c_k, F)$$
If complexity growth and tree size are the confusing part here, cross-check with Big-O growth intuition for search.
The new symbols here are important too:

- $\land$ means logical "and"
- $\lor$ means logical "or"
- the subscript in $c_1$ or $c_2$ labels the first child, second child, and so on
- $k$ is the total number of children for the current node
This is the exact logic the interactive visualization uses.
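The AND/OR/LEAF rules translate almost line for line into a recursive function. In this sketch a node is encoded as a `("AND" | "OR", children)` pair or a `("LEAF", fact_name)` pair; the encoding and the fact names are illustrative assumptions, not the visualization's internals:

```python
# eval(v, F): a LEAF is true iff its fact is in F; an AND node needs
# all children true; an OR node needs at least one child true.
def eval_node(node, facts):
    kind, payload = node
    if kind == "LEAF":
        return payload in facts          # fact(v) in F
    child_values = [eval_node(c, facts) for c in payload]
    return all(child_values) if kind == "AND" else any(child_values)

tree = ("OR", [
    ("AND", [("LEAF", "high_fever"), ("LEAF", "stiff_neck")]),
    ("LEAF", "positive_test"),
])
print(eval_node(tree, {"high_fever", "stiff_neck"}))  # True
print(eval_node(tree, {"high_fever"}))                # False
```

The `all(...)` call is the chain of $\land$ operators, and `any(...)` is the chain of $\lor$ operators.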
## Backward and forward chaining
Once we have a tree, we still need a strategy for walking through it.
Backward chaining starts from the top goal and asks: what would I need to prove for this to be true?
Forward chaining starts from known facts and asks: what follows from what I already know?
If the tree has $|V|$ nodes, then one full pass is $O(|V|)$.
That vertical-bar notation also needs a quick explanation:

- $|V|$ means the size of the set $V$, so here it means the number of nodes in the tree
In practice, the better strategy depends on workload:
- many queries and few updates: backward chaining is often better
- few queries and many updates: forward chaining is often better
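As one hedged sketch of the forward direction, rules can be stored as `(premises, conclusion)` pairs and applied repeatedly until no new fact appears; the rule set below is a made-up fragment of the triage example, not the lesson's actual rule base:

```python
# Forward chaining: fire every rule whose premises are already known,
# and repeat until the fact set stops growing (a fixpoint).
def forward_chain(facts, rules):
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in known and premises <= known:
                known.add(conclusion)
                changed = True
    return known

rules = [
    ({"high_fever", "stiff_neck"}, "infection_evidence"),
    ({"infection_evidence", "no_allergy"}, "start_antibiotics"),
]
print(forward_chain({"high_fever", "stiff_neck", "no_allergy"}, rules))
```

Backward chaining would instead start at `start_antibiotics` and recurse into its premises on demand, which is why it wins when queries are rare relative to fact updates.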
## The triage example in action
Take our triage example. The tree makes the abstract rules concrete.
- the root goal is to choose a care plan
- one branch asks whether antibiotics are justified
- the other branch asks whether simple symptom relief is appropriate
What you are looking at is therefore not just a diagram. It is a compact decision process.
- If the antibiotics branch is satisfied, the system can recommend antibiotics.
- If that branch fails but the symptom-relief branch succeeds, the system can still return a valid supportive-care plan.
- If both branches fail, the tree cannot prove a treatment plan from the known facts.
We can write part of that logic as:

$$\mathrm{ChooseCarePlan} = \mathrm{StartAntibiotics} \lor \mathrm{SymptomRelief}$$

$$\mathrm{StartAntibiotics} = \mathrm{InfectionEvidence} \land \mathrm{NoAllergy}$$

Even comparison symbols inside the medical facts should be read carefully. For example, the label $> 38.5\,^{\circ}\mathrm{C}$ means "greater than 38.5 degrees Celsius." That simple symbol is already a decision rule.
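That branch logic can be written out as plain booleans. This is a simplified fragment: the 38.5 °C threshold comes from the tree's label, while the argument names are my own:

```python
# Root OR node: try the antibiotics branch first, then supportive care.
def choose_care_plan(temp_c, stiff_neck, no_allergy, relief_appropriate):
    high_fever = temp_c > 38.5                      # the "> 38.5 °C" rule
    infection_evidence = high_fever and stiff_neck  # AND node
    start_antibiotics = infection_evidence and no_allergy
    if start_antibiotics:
        return "antibiotics"
    if relief_appropriate:
        return "supportive care"
    return "no plan proven"

print(choose_care_plan(39.2, True, True, True))    # antibiotics
print(choose_care_plan(37.0, False, True, True))   # supportive care
```

Note how the third return value mirrors the tree's behavior when both branches fail: the system does not guess, it simply cannot prove a plan.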
## Explore the goal tree
The visualization below is the same logic written in a more intuitive form. Instead of only reading formulas, you can inspect the nodes, switch between chaining strategies, and watch truth values move through the tree.
The three preset scenarios are designed to answer three different questions:
- Clinical signs route: can symptoms alone justify the antibiotics branch?
- Test-confirmed route: can a single strong test result justify the same branch more directly?
- Supportive care route: what happens when the antibiotics branch fails, but symptom relief is still valid?
So the result of the visualization is not just “true” or “false.” The more useful result is understanding why one branch succeeds and another fails.
## Walkthrough: Clinical signs route
Use the Clinical signs route preset once from start to finish:
- Keep mode on Backward and click Run.
- Watch the root node (Choose Care Plan) ask whether one major branch can be proven.
- The tree then checks Start Antibiotics, which requires both infection evidence and no allergy.
- Under infection evidence, the Classic Signs path is satisfied because high fever and stiff neck are both present.
- Because No Allergy is also present, the antibiotics branch becomes true, and the root plan is proven.
What this means: the algorithm did not guess. It proved the recommendation through AND/OR semantics step by step.
Once you use the tree for a minute or two, the equations above usually become much easier to understand. That is exactly the goal: the math and the interaction should support each other.
This is the first time the course really opens up the inside of the mapping from lesson 1. Instead of only saying that an AI system turns input into output, we now look at one possible internal structure that can perform that transformation step by step.
## What comes next
In lesson 3, the tree structure stays in the background, but we add explicit rules and uncertainty on top of it. That is where symbolic reasoning starts to feel closer to real expert systems.
## Notation quick reference

| Symbol | Meaning | Where introduced |
|---|---|---|
| $b$ | branching factor | Why decomposition matters |
| $d$ | search depth | Why decomposition matters |
| $O(b^d)$ | exponential growth summary | Why decomposition matters |
| $\in$ | belongs to | Why decomposition matters |
| $V$ | node set: all nodes in the tree | The goal tree as a mathematical object |
| $E$ | edge set: all parent-child links | The goal tree as a mathematical object |
| $r$ | root node | The goal tree as a mathematical object |
| $\tau(v)$ | type of node $v$ | The goal tree as a mathematical object |
| $\land$ | logical and | How the tree is evaluated |
| $\lor$ | logical or | How the tree is evaluated |
| $\lvert V \rvert$ | number of nodes in the tree | Backward and forward chaining |
| $>$ | greater than | The triage example in action |
For standalone math deep dives:

- Big-O growth intuition for search
- Function notation primer
## Why this still matters in modern AI
Even modern AI agents still rely on some version of this decomposition idea. A model may generate candidate steps, but those steps still have to be checked, combined, and satisfied in a structured way.
That is why goal trees are not just historical background. They are part of the logic that still sits underneath modern reasoning systems.
## References and Further Reading
- Winston, Patrick H. Artificial Intelligence, 3rd ed. Addison-Wesley, 1992. Chapter 4.
- Nilsson, Nils J. Principles of Artificial Intelligence. Tioga Publishing, 1980.
- Shortliffe, E.H. Computer-Based Medical Consultations: MYCIN. Elsevier, 1976.
- Russell, S. and Norvig, P. Artificial Intelligence: A Modern Approach, 4th ed. Pearson, 2020. Chapter 3.
This is Lesson 2 of 18 in the AI Starter Course.