In lesson 13, we saw how deep networks learn powerful representations from data. But those representations are mostly latent: they are useful for prediction yet hard to inspect directly. In clinical settings, we often need knowledge that is explicit, auditable, and logically constrained.
This lesson focuses on knowledge representation: how AI systems encode concepts, relations, and constraints so they can support reasoning, interoperability, and traceability.
Core learnings about knowledge representation
- Semantic networks, frames, and ontologies encode knowledge at increasing levels of formal precision.
- Description logic adds machine-checkable semantics for consistency and inference.
- Open-world and closed-world assumptions produce different behavior on missing data.
- Modern clinical systems increasingly combine neural perception with symbolic knowledge layers.
From concept graphs to formal semantics
A semantic network is a labeled graph of concepts and relations. It is intuitive and easy to traverse, which is why early expert systems and modern knowledge graphs both use graph structures.
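The graph view above can be sketched in a few lines. This is a minimal illustration, not a production store: knowledge is held as (subject, relation, object) triples, and a query is a simple scan. All concept and relation names below are illustrative assumptions.

```python
# A semantic network as labeled triples: nodes are concepts,
# edges are typed relations between them.
triples = [
    ("Meningitis", "is_a", "InfectiousDisease"),
    ("Meningitis", "has_symptom", "NeckStiffness"),
    ("Meningitis", "has_symptom", "Fever"),
    ("InfectiousDisease", "requires", "IsolationAssessment"),
]

def objects(subject, relation):
    """Return every object linked to `subject` by `relation`."""
    return [o for s, r, o in triples if s == subject and r == relation]

print(objects("Meningitis", "has_symptom"))  # ['NeckStiffness', 'Fever']
```

Real knowledge graphs index triples for fast lookup, but the traversal idea is the same.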
A frame extends this by adding slots and default values to a concept template. In triage, a BacterialMeningitis frame can include slots for symptoms, recommended tests, and treatment constraints. This provides structured, editable knowledge that clinicians can inspect.
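A frame like the one described can be sketched as a concept template with named slots and editable defaults. The slot names and default values here are illustrative assumptions, not clinical guidance.

```python
from dataclasses import dataclass, field

@dataclass
class BacterialMeningitisFrame:
    """Frame: a concept template whose slots carry default values
    that can be inspected and overridden per instance."""
    symptoms: list = field(
        default_factory=lambda: ["fever", "neck stiffness", "headache"])
    recommended_tests: list = field(
        default_factory=lambda: ["lumbar puncture", "blood culture"])
    treatment_constraints: list = field(
        default_factory=lambda: ["start empiric antibiotics promptly"])

case = BacterialMeningitisFrame()
case.symptoms.append("photophobia")  # instance-level addition to a default slot
print(case.recommended_tests)
```

Because slots are plain named attributes, a clinician or knowledge editor can review and adjust them without touching inference code.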
An ontology goes further by formalizing relations in a logic-backed language (for example, OWL over description logic). That enables automated reasoning: contradiction detection, subclass inference, and query answering.
Description logic in one practical expression
A simple ontology-style axiom can be written as:

Meningitis ⊑ InfectiousDisease

Interpretation details:
- Left side: a specific concept class.
- Right side: a broader concept class.
- ⊑ means every instance of the left class is also an instance of the right class.
In this expression, Meningitis is a concept and InfectiousDisease is a more general concept. The symbol ⊑ denotes concept inclusion (subclass). In the triage example, if infection-control rules are attached to InfectiousDisease, a reasoner can infer that they also apply to Meningitis cases without re-entering duplicate rules.
So what does this tell us in practice? Formal concept inclusion lets us scale clinical knowledge safely: new specific diagnoses inherit validated policy constraints automatically.
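The inheritance behavior described above can be sketched with a toy subclass hierarchy. A production system would hand this to an OWL reasoner; here the hierarchy, policy text, and concept names are illustrative assumptions.

```python
# Subclass-based policy inheritance: a rule attached to a general
# concept applies to every (transitive) subclass.
subclass_of = {
    "Meningitis": "InfectiousDisease",
    "BacterialMeningitis": "Meningitis",
}
policies = {"InfectiousDisease": ["apply infection-control precautions"]}

def ancestors(concept):
    """Yield the concept itself and every superclass up the chain."""
    while concept is not None:
        yield concept
        concept = subclass_of.get(concept)

def applicable_policies(concept):
    """Collect policies attached anywhere along the subclass chain."""
    return [p for c in ancestors(concept) for p in policies.get(c, [])]

print(applicable_policies("BacterialMeningitis"))
```

Adding a new specific diagnosis only requires one `subclass_of` entry; every policy on its ancestors applies automatically.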
Open world vs closed world decisions
Closed-world systems treat absent facts as false. Open-world systems treat absent facts as unknown. This distinction strongly affects medical inference.
If a chart does not mention hypertension, closed-world logic may conclude “no hypertension.” Open-world logic concludes “status unknown.” For care pathways, these lead to different actions: immediate rule firing versus data-completion prompts.
When teams mix ML outputs with symbolic systems, making this assumption explicit is essential for safe behavior.
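The contrast can be made explicit in a few lines. This sketch assumes a chart represented as a simple fact dictionary; the three-valued "unknown" answer is the distinguishing feature of the open-world reading.

```python
# A chart that records diabetes but never mentions hypertension.
chart_facts = {"diabetes": True}

def cwa_has(condition):
    """Closed world: anything not recorded is treated as false."""
    return chart_facts.get(condition, False)

def owa_has(condition):
    """Open world: absence of a record means the status is unknown."""
    return chart_facts.get(condition, "unknown")

print(cwa_has("hypertension"))  # False -> a rule keyed on "no hypertension" fires
print(owa_has("hypertension"))  # 'unknown' -> triggers a data-completion prompt
```

Making the assumption a deliberate, documented choice per data source is safer than letting each component default differently.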
Practical walkthrough: building a clinical knowledge layer
Use this sequence for a triage knowledge representation pipeline:
- Define core concepts and relations (symptoms, diagnoses, tests, interventions).
- Add frame-style slots for required evidence, contraindications, and recommended next actions.
- Encode high-value ontology axioms (subclass, disjointness, required relations).
- Run a reasoner to detect contradictions and implied classifications.
- Connect the symbolic layer to ML outputs through normalized clinical codes.
What this means in practice: symbolic structure transforms model outputs from isolated scores into traceable clinical reasoning artifacts.
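The final bridging step can be sketched as follows: a raw model label and score are normalized to a clinical code and packaged with the policies the symbolic layer attaches to that code. The code map, threshold, and code string are placeholder assumptions (a real system would map to SNOMED CT or similar terminologies).

```python
# Bridge between ML output and the symbolic layer: normalize the
# label to a code, gate on confidence, attach applicable policies.
label_to_code = {"meningitis": "CODE-0001"}  # placeholder code, not a real SNOMED CT id
code_policies = {"CODE-0001": ["infection-control precautions"]}

def to_artifact(model_label, score, threshold=0.8):
    """Turn a raw model score into a traceable symbolic assertion."""
    code = label_to_code.get(model_label)
    if code is None or score < threshold:
        return {"status": "no assertion", "label": model_label, "score": score}
    return {
        "status": "asserted",
        "code": code,
        "score": score,
        "policies": code_policies.get(code, []),
    }

print(to_artifact("meningitis", 0.93))
print(to_artifact("meningitis", 0.42))  # below threshold: no assertion made
```

The returned dictionary is the "traceable artifact": it records the score, the normalized code, and exactly which policies were triggered, so the decision can be audited later.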
Relation to earlier lessons
- Lessons 3 and 8 used explicit symbolic rules and planning constraints.
- Lessons 10-13 emphasized learning patterns from data.
- Lesson 14 reconnects these tracks by formalizing inspectable knowledge structures that can validate or contextualize learned outputs.
Concrete bridge: neural models answer “what pattern is likely?” Knowledge representations answer “is this conclusion semantically consistent and interoperable?”
Notation quick reference
| Symbol/Term | Meaning | Detailed link |
|---|---|---|
| Node | concept/entity in a semantic graph | From concept graphs to formal semantics |
| Edge | typed relation between nodes | From concept graphs to formal semantics |
| Frame | slot-based concept template | From concept graphs to formal semantics |
| Slot | named attribute in a frame | From concept graphs to formal semantics |
| Ontology | formal shared conceptual schema | From concept graphs to formal semantics |
| ⊑ | subclass inclusion in description logic | Description logic in one practical expression |
| CWA | closed-world assumption | Open world vs closed world decisions |
| OWA | open-world assumption | Open world vs closed world decisions |
What comes next
In lesson 15, we move to Transformers and foundation models, where attention-based architectures scale sequence reasoning far beyond earlier recurrent designs.
References and Further Reading
- Brachman, R. and Levesque, H. Knowledge Representation and Reasoning. Morgan Kaufmann, 2004.
- Baader, F. et al. The Description Logic Handbook, 2nd ed. Cambridge University Press, 2010.
- SNOMED International. SNOMED CT Technical Documentation.
This is Lesson 14 of 18 in the AI Starter Course.