
In lesson 13, we saw how deep networks learn powerful representations from data. But those representations are mostly latent: they are useful for prediction yet hard to inspect directly. In clinical settings, we often need knowledge that is explicit, auditable, and logically constrained.

This lesson focuses on knowledge representation: how AI systems encode concepts, relations, and constraints so they can support reasoning, interoperability, and traceability.

Core learnings about knowledge representation

  • Semantic networks, frames, and ontologies encode knowledge at increasing levels of formal precision.
  • Description logic adds machine-checkable semantics for consistency and inference.
  • Open-world and closed-world assumptions produce different behavior on missing data.
  • Modern clinical systems increasingly combine neural perception with symbolic knowledge layers.

From concept graphs to formal semantics

A semantic network is a labeled graph of concepts and relations. It is intuitive and easy to traverse, which is why early expert systems and modern knowledge graphs both use graph structures.
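A semantic network can be sketched as a list of subject-relation-object triples indexed for traversal. This is a minimal illustration; the concept and relation names are made up for the triage example, not drawn from a real terminology.

```python
from collections import defaultdict

# Illustrative triage relations; names are placeholders, not a real terminology.
triples = [
    ("Meningitis", "is_a", "InfectiousDisease"),
    ("Meningitis", "has_symptom", "NeckStiffness"),
    ("Meningitis", "has_symptom", "Fever"),
    ("InfectiousDisease", "is_a", "Disease"),
]

# Index the graph by subject so traversal is a dictionary lookup.
graph = defaultdict(list)
for subject, relation, obj in triples:
    graph[subject].append((relation, obj))

def neighbors(concept, relation):
    """Return all objects linked to `concept` by `relation`."""
    return [obj for rel, obj in graph[concept] if rel == relation]

print(neighbors("Meningitis", "has_symptom"))  # ['NeckStiffness', 'Fever']
```

Because edges are typed, the same structure supports both taxonomic links (`is_a`) and attributive links (`has_symptom`) without separate machinery.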

A frame extends this by adding slots and default values to a concept template. In triage, a BacterialMeningitis frame can include slots for symptoms, recommended tests, and treatment constraints. This provides structured, editable knowledge that clinicians can inspect.
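A frame can be approximated as a dictionary of named slots, with defaults inherited from a parent frame. The slot names and values below are illustrative placeholders, not clinical guidance.

```python
# Frames as slot dictionaries; `parent` links give default inheritance.
# BacterialMeningitis and its slot values are illustrative only.
FRAMES = {
    "InfectiousDisease": {
        "isolation_required": True,  # default inherited by child frames
    },
    "BacterialMeningitis": {
        "parent": "InfectiousDisease",
        "symptoms": ["fever", "neck stiffness", "photophobia"],
        "recommended_tests": ["lumbar puncture", "blood culture"],
    },
}

def get_slot(frame_name, slot):
    """Look up a slot, falling back to the parent frame's default."""
    frame = FRAMES[frame_name]
    if slot in frame:
        return frame[slot]
    parent = frame.get("parent")
    return get_slot(parent, slot) if parent else None

print(get_slot("BacterialMeningitis", "isolation_required"))  # True
```

The fallback lookup is what makes frames editable: a clinician can override a default in one place without touching the parent template.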

An ontology goes further by formalizing relations in a logic-backed language (for example, OWL over description logic). That enables automated reasoning: contradiction detection, subclass inference, and query answering.

Description logic in one practical expression

A simple ontology-style axiom can be written as:

\mathrm{Meningitis} \sqsubseteq \mathrm{InfectiousDisease}

Interpretation details:

  • Left side: a specific concept class.
  • Right side: a broader concept class.
  • \sqsubseteq means every instance on the left is also an instance on the right.

In this expression, Meningitis is a concept and InfectiousDisease is a more general concept. The symbol \sqsubseteq means concept inclusion (subclass). In the triage example, if infection-control rules are attached to InfectiousDisease, a reasoner can infer they also apply to Meningitis cases without re-entering duplicate rules.

So what does this tell us in practice? Formal concept inclusion lets us scale clinical knowledge safely: new specific diagnoses inherit validated policy constraints automatically.
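The inheritance of policy constraints through concept inclusion can be sketched as a transitive walk over subclass links. The concept names and the policy string are illustrative assumptions.

```python
# Minimal subclass reasoning; concepts and policies are illustrative.
SUBCLASS = {  # child -> direct parents
    "Meningitis": {"InfectiousDisease"},
    "InfectiousDisease": {"Disease"},
}
POLICIES = {"InfectiousDisease": ["notify infection control"]}

def ancestors(concept):
    """All superclasses reachable via the subclass relation (transitive)."""
    seen, stack = set(), list(SUBCLASS.get(concept, ()))
    while stack:
        parent = stack.pop()
        if parent not in seen:
            seen.add(parent)
            stack.extend(SUBCLASS.get(parent, ()))
    return seen

def applicable_policies(concept):
    """Policies attached to the concept or any inferred superclass."""
    classes = {concept} | ancestors(concept)
    return [p for c in classes for p in POLICIES.get(c, [])]

print(applicable_policies("Meningitis"))  # ['notify infection control']
```

Note that the rule is stated once, on InfectiousDisease, yet applies to Meningitis through the inferred superclass set, which is exactly the duplication-avoidance benefit described above.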

Open world vs closed world decisions

Closed-world systems treat absent facts as false. Open-world systems treat absent facts as unknown. This distinction strongly affects medical inference.

If a chart does not mention hypertension, closed-world logic may conclude “no hypertension.” Open-world logic concludes “status unknown.” For care pathways, these lead to different actions: immediate rule firing versus data-completion prompts.

When teams mix ML outputs with symbolic systems, making this assumption explicit is essential for safe behavior.
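The two readings of an absent fact can be contrasted with a three-valued lookup. The chart contents are illustrative; `None` stands in for "unknown".

```python
# Closed- vs open-world readings of a fact absent from the chart.
# The chart contents are illustrative.
chart = {"diabetes": True, "asthma": False}  # hypertension never recorded

def cwa_has(condition):
    """Closed world: an absent fact is treated as False."""
    return chart.get(condition, False)

def owa_has(condition):
    """Open world: an absent fact is unknown (None)."""
    return chart.get(condition)

print(cwa_has("hypertension"))  # False -> a rule may fire immediately
print(owa_has("hypertension"))  # None  -> prompt for the missing data
```

Making the assumption explicit in code like this is one way to keep the downstream action (rule firing versus data-completion prompt) auditable.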

Practical walkthrough: building a clinical knowledge layer

Use this sequence for a triage knowledge representation pipeline:

  1. Define core concepts and relations (symptoms, diagnoses, tests, interventions).
  2. Add frame-style slots for required evidence, contraindications, and recommended next actions.
  3. Encode high-value ontology axioms (subclass, disjointness, required relations).
  4. Run a reasoner to detect contradictions and implied classifications.
  5. Connect the symbolic layer to ML outputs through normalized clinical codes.
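The five steps above can be sketched end to end. The codes, thresholds, and constraint strings are illustrative stand-ins for normalized clinical codes and validated policies.

```python
# Illustrative end-to-end sketch of the pipeline; codes are placeholders
# standing in for normalized clinical codes (e.g. SNOMED CT identifiers).
ONTOLOGY = {
    "C001": {"parent": "C000", "label": "Meningitis"},
    "C000": {"parent": None, "label": "InfectiousDisease"},
}
CONSTRAINTS = {"C000": ["isolation precautions"]}

def classify(ml_scores, threshold=0.8):
    """Step 5: map ML concept scores to asserted codes."""
    return [code for code, p in ml_scores.items() if p >= threshold]

def inherited_constraints(code):
    """Steps 3-4: walk subclass links to collect validated constraints."""
    out, current = [], code
    while current is not None:
        out.extend(CONSTRAINTS.get(current, []))
        current = ONTOLOGY[current]["parent"]
    return out

for code in classify({"C001": 0.93}):
    print(ONTOLOGY[code]["label"], "->", inherited_constraints(code))
```

The model output enters as a bare score and leaves as a coded concept carrying its inherited constraints, which is the traceability gain described below.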

What this means in practice: symbolic structure transforms model outputs from isolated scores into traceable clinical reasoning artifacts.

Relation to earlier lessons

  1. Lessons 3 and 8 used explicit symbolic rules and planning constraints.
  2. Lessons 10-13 emphasized learning patterns from data.
  3. Lesson 14 reconnects these tracks by formalizing inspectable knowledge structures that can validate or contextualize learned outputs.

Concrete bridge: neural models answer “what pattern is likely?” Knowledge representations answer “is this conclusion semantically consistent and interoperable?”

Notation quick reference

  • Node: concept/entity in a semantic graph (see "From concept graphs to formal semantics")
  • Edge: typed relation between nodes (see "From concept graphs to formal semantics")
  • Frame: slot-based concept template (see "From concept graphs to formal semantics")
  • Slot: named attribute in a frame (see "From concept graphs to formal semantics")
  • Ontology: formal shared conceptual schema (see "From concept graphs to formal semantics")
  • \sqsubseteq: subclass inclusion in description logic (see "Description logic in one practical expression")
  • CWA: closed-world assumption (see "Open world vs closed world decisions")
  • OWA: open-world assumption (see "Open world vs closed world decisions")

What comes next

In lesson 15, we move to Transformers and foundation models, where attention-based architectures scale sequence reasoning far beyond earlier recurrent designs.


References and Further Reading

  • Brachman, R. and Levesque, H. Knowledge Representation and Reasoning. Morgan Kaufmann, 2004.
  • Baader, F. et al. The Description Logic Handbook, 2nd ed. Cambridge University Press, 2010.
  • SNOMED International. SNOMED CT Technical Documentation.

This is Lesson 14 of 18 in the AI Starter Course.