2nd IAOA Summer School – Renata Wassermann – Knowledge Representation and Description Logics

Renata Wassermann started her course “Knowledge Representation and Description Logics” on Monday morning. The first session covered the basics of AI, Knowledge Representation and logic.

Part 1

Initially, Renata discussed some basic concepts of AI: what does it mean for machines to think? “Strong AI” is concerned with the process of thinking itself, whereas “Weak AI” focuses on having machines produce intelligent results, regardless of whether the process resembles anything like (biological) thinking. She is interested in the latter, which involves two parts: having the knowledge and making inferences over it.

Before going into Knowledge Representation (KR), Renata talked about the role of ontologies in Knowledge-Based Systems, sharing her experience from a project conducted in a hospital setting, in which doctors had a lot of data but did not make good use of it because of difficulties in interpreting it. Ontologies helped in this situation.

Back to AI, she also mentioned the recent growing interest in Machine Learning in the AI community and how good it is at finding answers in many settings; however, only Symbolic AI (which is her focus) can explain how the answer was found. In some cases this is as important as finding the answer itself.

Renata illustrated a very important point in KR: the task is concerned with symbol manipulation, with no regard for semantics. The actual semantics are in the mind of the person reading the symbols. To illustrate this, she mentioned Searle’s Chinese Room: suppose you are in a room with a book that contains answers to questions, all written in Chinese. If you receive written questions in Chinese, look up the symbols among the book’s questions and copy the symbols of the corresponding answer, then from the point of view of an outsider it looks like you know Chinese (even if you don’t). So Man(x) -> Human(x) for the computer is the same as Xyz(x) -> Abc(x). We use natural language for our own interpretation.

She then moved on to illustrating the Knowledge-Based Systems (KBS) hypothesis: separating the knowledge from the procedural aspects of the system (what it does with the knowledge). She illustrated it with a Prolog program that prints (the procedure) the color of some things (the knowledge).
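
To make the separation concrete, here is a minimal sketch in the spirit of that example (the facts and predicate names are my own guesses, not her actual program):

% Knowledge: facts stating the colors of some things.
color(sky, blue).
color(grass, green).
color(snow, white).

% Procedure: look up and print the color of a given thing.
% Usage: ?- print_color(grass).   prints "grass is green"
print_color(Thing) :-
    color(Thing, Color),
    write(Thing), write(' is '), write(Color), nl.

The facts can be changed or extended without touching print_color/1, which is exactly the separation the KBS hypothesis advocates.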

Then, she proceeded to give a history of Knowledge Representation in AI, starting with McCarthy and the foundations of AI, and then Expert Systems (which is how Knowledge-Based Systems were known at first), beginning with DENDRAL (~1965) and MYCIN (~1975). The latter performed better than many experts at diagnosis, yet it was never used in practice, for ethical and performance reasons; the performance issues have since been solved, but the ethical ones are and will continue to be an issue.

She also mentioned the Naïve Physics Manifesto (1978, 1983), which tried to represent day-to-day knowledge about physics (e.g., if I drop an object, it will fall). This started a trend of representing everything needed for common sense reasoning. In the same vein, CYC (started in 1984) has been going on for over thirty years. She ended the history part with the egg cracking challenge: four formalizations, two of which became journal papers, one of them with 66 axioms and lots of theorem proofs over this very simple piece of common sense knowledge.

Renata then moved on to an introduction to the formal part of the course, presenting the basics of First-Order Logic: syntax (symbols such as functions, predicates, connectives, quantifiers and variables) and semantics (when a formula is true, interpretations, entailment).
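
To give a flavour of how syntax and semantics meet (my own toy example, not one from her slides): from the two formulas Man(socrates) and ∀x (Man(x) -> Human(x)), every interpretation that satisfies both must put the object denoted by socrates into the set assigned to Human, so the knowledge base entails Human(socrates).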

The main message of the first part of her course was: the goal is to avoid unintended interpretations, and this is done by adding formulas (syntactic objects) that rule out unwanted models, so that the purely syntactic system is tied to the intended semantic relation.
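
As a toy illustration (mine, not hers): a knowledge base containing only fatherOf(john, mary) still admits interpretations in which john is not a man; adding the formula ∀x ∀y (fatherOf(x, y) -> Man(x)) rules those unintended interpretations out and brings the formal models closer to the intended meaning.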

Slides for the first part were made available at the summer school’s website.

Part 2

Renata resumed her course on Tuesday with a couple of exercises on interpretations, continuing where she had left off the day before. She also defined what a sound Knowledge Base (KB) and a complete KB are (noting that we are usually interested in KBs that are both sound and complete), as well as the Satisfiability problem.

She then moved on to Description Logics (DLs), which are less expressive than First-Order Logic (FOL). DLs are a family of formalisms well suited for representing and reasoning about terminological knowledge. According to Renata, DLs are useful for some things, and the fact that people sometimes use them in a sloppy way should not be taken to mean the language is bad.

DL knowledge can be divided into a terminological part (concepts and their properties) and an assertional part (concrete situations). Concepts represent classes (sets of objects); roles represent relations (properties/attributes). The terminological knowledge is said to be in a TBox; the assertional knowledge (about individuals) is said to be in the ABox. At the end, there was a discussion on what should/must be in each box. I guess the distinction is not that straightforward.
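
A small example of how the two boxes split the knowledge (my own, not from her slides): the TBox could contain terminological axioms such as Father ⊑ Man and Man ⊑ Human (every father is a man, every man is a human), while the ABox holds assertions about individuals such as Man(john) and fatherOf(john, mary). Since the same information can often be pushed to either side, the boundary is indeed a matter of modeling choice.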

Renata continued by presenting ALC (Attributive Concept Language with Complements), which serves as the basis for many other languages in the DL family. She showed how ALC is very similar to FOL, but simpler: no functions (except constants, which are considered functions of arity 0), only unary and binary predicates, etc.
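
For reference, the ALC constructors are roughly the following (standard notation, not necessarily her exact presentation): atomic concepts and roles, ⊤ and ⊥, negation ¬C, conjunction C ⊓ D, disjunction C ⊔ D, existential restriction ∃r.C and value restriction ∀r.C. For instance, Man ⊓ ∃hasChild.Human describes the men that have at least one human child.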

With some examples, Renata showed how the TBox and the ABox can be modeled, the inferences that can be drawn from them and the reasoning services that such a Knowledge Base makes possible. She then showed how all reasoning services in ALC can be translated into a Satisfiability problem (and since ALC is a more restricted logic than FOL, these reasoning services are decidable). This means that a single mechanism (implementation) can solve all of the reasoning services.
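
The reductions are the standard ones (my summary, not her exact slides): a concept C is subsumed by D with respect to a TBox iff C ⊓ ¬D is unsatisfiable with respect to that TBox; two concepts are equivalent iff each subsumes the other; and an individual a is an instance of C iff adding ¬C(a) to the Knowledge Base makes it unsatisfiable. One satisfiability checker therefore covers them all.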

Renata finished part 2 of her course by connecting it with the previous day’s lecture, i.e., providing a translation of ALC (DL) into FOL, which is very straightforward. However, two points are important here: first, DLs are not only a simplification to make things tractable, but can also be seen as syntactic sugar over FOL; second, even if you constrain your knowledge in FOL to things that are admissible in DLs, by reasoning in FOL you may reach conclusions that cannot be represented in DLs and that may make your model intractable.
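
To give an idea of the translation (the standard textbook version, not necessarily her exact slides): concepts become unary predicates, roles become binary ones, and the constructors are unfolded; for example, the axiom Man ⊑ ∃hasChild.Human becomes ∀x (Man(x) -> ∃y (hasChild(x, y) ∧ Human(y))).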

Slides for the second part were made available at the summer school’s website.

Part 3

Renata started the presentation by recalling ALC, the DL language presented in the previous part, and by presenting some other languages in the DL family, such as EL (less expressive than ALC, gaining popularity because of reasoning performance: satisfiability in PTime); DL-Lite (also below ALC, much used for reasoning over databases, cf. Ontology-Based Data Access, OBDA); and SHIF, SHOIN, SROIQ and other above-ALC combinations of letters, each of which has a meaning in terms of what you can do (S = ALC + transitive roles, H = role hierarchies, etc.). These three are the most used in the context of the Semantic Web.

She then started talking about OWL. In version 1.0, OWL Full was undecidable, OWL DL was decidable and corresponded to SHOIN, and OWL Lite was also decidable and corresponded to SHIF. Although some of them are decidable, all OWL 1.0 flavors are intractable (very high complexity, making reasoning very hard). With examples, she presented some OWL constructs and how they can represent knowledge in DL.
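
To give a rough idea of the correspondence (standard mappings from memory, not her exact examples): rdfs:subClassOf expresses a concept inclusion C ⊑ D, owl:intersectionOf a conjunction C ⊓ D, owl:someValuesFrom an existential restriction ∃r.C, and owl:complementOf a negation ¬C.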

She then briefly introduced OWL 2.0: OWL 2 DL is based on SROIQ (i.e., extended with respect to OWL 1.0), and instead of a single OWL Lite, three new profiles were proposed: OWL-EL (based on the EL logic, not very expressive but very good for reasoning with large TBoxes), OWL-QL (fast, LogSpace, query answering using RDBMSs via SQL, useful for smaller TBoxes with a lot of data) and OWL-RL (fast, polynomial, good for rule-based reasoning over large triple stores). Unlike version 1.0, the lightweight profiles of OWL 2.0 were designed with real-world problems in mind.

She then moved on to a topic that interests her more: belief revision in DL. Belief revision studies the dynamics of Knowledge Bases (KBs): adding (expansion) or removing (contraction) knowledge, and how new knowledge impacts the current KB (revision). She has worked on adapting the AGM Belief Revision paradigm (the most classical/popular one) to DLs in order to apply it to the Semantic Web.
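
In symbols (standard AGM notation, not taken from her slides): expansion simply adds a sentence and closes under consequence, K + φ = Cn(K ∪ {φ}); contraction K − φ removes φ while giving up as little of K as possible; and revision is usually obtained from the other two via the Levi identity, K * φ = (K − ¬φ) + φ, i.e., first make room for the new information and then add it.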

Renata presented the basics of AGM (logic too heavy for me to follow properly). Since AGM is not directly applicable to many of the DL languages, some of its postulates had to be adapted. On top of this adaptation, Renata’s research group started implementing tools using Protégé, the OWL API and Pellet/HermiT.

Slides for the third part were made available at the summer school’s website.
