Summary of the 9th International Conference on Formal Ontology in Information Systems (FOIS 2016)

As stated earlier, from July 6th until the 9th I participated in the 9th International Conference on Formal Ontology in Information Systems (FOIS 2016) in Annecy, France.

The main conference spanned from July 7th through 9th. Here are quick summaries of the presentations that I attended during the event. The full proceedings are available with open access.


Day 1: Thursday, July 7th

After the official FOIS opening, Gilberto Câmara, from INPE, Brazil, gave the first keynote, entitled “Geographical ontologies for land use and land cover change: distinguishing continuants from occurrents”, subtitled “What can Big Data learn from Ontology?”.

The context was the task of monitoring deforestation in the Amazon Forest. INPE has the satellite images, but before answering that question there is a more fundamental ontological question: what is a forest? There are many different definitions. One particularly interesting definition considers the temporal dimension and looks at the trajectory of the different kinds of forests. This calls for a differentiation between continuants and occurrents over geospatial Big Data.

The continuants perspective cannot differentiate between, for example, the deforestation of an unmanaged forest and the harvesting of timber in a planted forest. The occurrents approach can look at the trajectories of forests and was the choice of the INPE team for this task.

Given space partitions, time partitions, discrete properties (e.g., land use classification) and Allen’s interval logic, Gilberto extended the logic with two new predicates: holds(s,p,t), which holds if property p holds in space s at time t, and occur(s,p,Te), which establishes whether an event regarding property p occurs in space s during time interval Te.

With that, questions such as “which forest areas have been replaced by soybeans?” or “which forest areas have been replaced by pasture and then turned into soybeans?” (both related to the Soy Moratorium of 2006, for example) could be answered.
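To make the idea concrete, the two predicates can be sketched over a toy land-use history. Everything below (the cell names, yearly time steps and classifications) is invented for illustration; the real work operates over satellite-derived geospatial data, not a Python dictionary.

```python
from typing import Dict, List, Tuple

# Hypothetical discrete land-use history: for each spatial cell,
# the classification observed at each yearly time step.
history: Dict[str, List[str]] = {
    "cell_A": ["forest", "forest", "soybeans", "soybeans"],
    "cell_B": ["forest", "pasture", "pasture", "soybeans"],
    "cell_C": ["forest", "forest", "forest", "forest"],
}

def holds(s: str, p: str, t: int) -> bool:
    """holds(s, p, t): property p holds in space s at time t."""
    return history[s][t] == p

def occur(s: str, p: str, te: Tuple[int, int]) -> bool:
    """occur(s, p, Te): p holds at some point of s's trajectory
    during the interval Te = (start, end), inclusive."""
    start, end = te
    return any(holds(s, p, t) for t in range(start, end + 1))

# "Which forest areas have been replaced by soybeans?"
# i.e., cells that were forest at some time and soybeans later.
replaced = [
    s for s in history
    if any(holds(s, "forest", t1) and holds(s, "soybeans", t2)
           for t1 in range(len(history[s]))
           for t2 in range(t1 + 1, len(history[s])))
]
print(sorted(replaced))  # → ['cell_A', 'cell_B']
```

The second example question (forest, then pasture, then soybeans) would be a three-way version of the same trajectory check, which here would single out cell_B.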

He then shared with the FOIS community the direction he’s taking regarding spatio-temporal entities, based on work by members of the community (whose names I failed to note down, sorry): considering land regions to be independent continuants; land cover to be dependent continuants; and land use to be occurrents.

He concluded by saying that managing change is a major challenge and that big data creates new ones. Ontological thinking helps us understand big data; we need a mathematically and cognitively sound model of spatiotemporal events.


ICCS Session: Ontologies and LOD

After the coffee break, I decided to skip the overly-foundational FOIS session on “Space, Time and Change” and watch the “Ontologies and LOD” session from ICCS instead. ICCS is co-located with FOIS.

Danai Symeonidou presented “Key Discovery for Numerical Data: Application to Oenological Practices”. The goal of the work is to discover keys in numerical data. The proposal is a three-step approach: (1) Data Pre-Processing, (2) Key Discovery, (3) Key Quality Evaluation. For (1) they use quantiles (a statistical concept: cut points dividing a set of observations into equal-sized groups) to group data values. For (2), they use the SAKey discovery approach. For (3), they calculate Key Support (ratio of the number of instances described by the set of properties of a key to the number of instances described in the data; high values wanted), Key Exceptions (number of instances sharing values allowed in a key; low values wanted), Key Size (number of properties participating in a key; small values wanted), Property Correlation (dependence of properties co-appearing in a key; highly correlated properties should not share a key), and Non-Key Probability (probability that a set of properties contains instances sharing the same values for this set).
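A minimal stdlib sketch of steps (1) and (3) may help. The wine-analysis property names and values below are made up, and SAKey itself (step 2) is not reproduced; this only illustrates quantile binning and the Key Support metric as I understood them.

```python
from statistics import quantiles

# Hypothetical wine-analysis data: instances described by numeric properties.
data = [
    {"acidity": 3.1, "sulphates": 0.5, "alcohol": 9.8},
    {"acidity": 3.4, "sulphates": 0.7, "alcohol": 11.2},
    {"acidity": 3.1, "alcohol": 12.0},          # 'sulphates' missing
    {"acidity": 3.9, "sulphates": 0.9, "alcohol": 12.5},
]

def quantile_bin(values, n=4):
    """Step (1): map raw numeric values to quantile bins, so that
    near-equal measurements compare as equal during key discovery."""
    cuts = quantiles(values, n=n)               # n-1 cut points
    return {v: sum(v > c for c in cuts) for v in values}

def key_support(key, instances):
    """Step (3): ratio of instances described by *all* properties of
    the key to the total number of instances (higher is better)."""
    described = sum(all(p in inst for p in key) for inst in instances)
    return described / len(instances)

print(key_support({"acidity", "alcohol"}, data))    # → 1.0
print(key_support({"acidity", "sulphates"}, data))  # → 0.75
```

The other quality metrics (Key Exceptions, Key Size, Property Correlation, Non-Key Probability) would be computed similarly, as simple statistics over the binned instance descriptions.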

Céline Alec presented “A Model for Linked Open Data Acquisition and SPARQL Query Generation”. The goal is to tackle LOD problems such as incompleteness, redundancy, inconsistencies, etc. The context is Saupodoc, an approach for ontology population and enrichment for the semantic annotation of documents (previous work by the authors). Populating a LOD dataset in this context suffers from the aforementioned problems, plus SPARQL queries are too complicated. The proposal is a model for LOD acquisition composed of a model of correspondences and a model of paths, plus the automatic generation of SPARQL queries from these models.

Finally, Catherine Roussey presented “Dealing with incompatibilities when fusioning knowledge bases”. The goal is to merge different knowledge bases that contain equivalent concepts into a single KB. If I understood correctly, part of the approach was presented the previous day at ICCS, so the presentation focused on the last part of the process: candidate selection. The approach manages incompatibilities during merging by forming a candidate group of similar elements from distinct KBs and generating an optimal extension from this group.


Second Keynote

After lunch, Fabien Gandon gave the second keynote, entitled “On the many graphs of the Web and the interest of adding their missing links”. The talk was an overview of recent research results from the Wimmics team at INRIA, of which he is a member.

According to Fabien, the goal of the research group is to bridge social semantics and formal semantics on the Web using Web graphs. They work on methods and tools for: user & interaction design; communities & social networks; linked data & the semantic web; and reasoning & analysis.

After a crash course on the Semantic Web, he described several Semantic Web projects: the French chapter of DBpedia, including the history of data changes; exploratory search, question answering and recommendation with explanation using semantic spreading activation; linguistic relational pattern extraction, named entity recognition and similarity-based SPARQL querying; ALOOF, in which robots learn by reading on the Web, building triples from text; and the SMILK plugin, for browsing data from named entities present in text.

He followed with projects in the area of user modeling (individual context, social structures): Prissma, which tailors information based on user context; Ocktopus, which calculates the probability that a person is a member of a community based on the topics they discuss; Emoca & Seempad, on debates and emotions, using facial recognition (emotion detection), debate graphs, argument networks and argumentation theory; and e-learning and serious games, e.g., automatically generating quiz questions from LOD, using the LUDO ontology for serious games.

The final set of projects concerned the typed-graph machinery behind some of the others: CORESE, an RDF search engine; RATIO4TA, for prediction and explanation; INDUCTION, inductive reasoning for finding missing knowledge; and LICENTIA, deontic-based reasoning over license compatibility and composition when computing over different sources with different licenses.

Publications from the group are available on the team’s website.

Finally, I have to finish this summary with two quotes from Fabien. One that he put in his slides: “He who controls metadata, controls the web and through the world-wide web many things in our world”. The other was in response to a question someone from the audience asked: “The Web is garbage. The Semantic Web is Semantic Garbage. But semantics can help you sort out the garbage.”


Afternoon Sessions

Unfortunately I had to work on a paper, so I didn’t attend the afternoon sessions on “Space, Time and Change” and “Biomedical Ontologies”.


Day 2: Friday, July 8th

Stephen Mumford gave the third keynote, entitled “Powers of Wholes and of Parts”. He distributed hand-outs with a summary of his talk, and I think that is better than any summary I could possibly make, given that I’m more of an engineer than a philosopher… I asked him for a copy and he was kind enough to allow me to make it available here.


Session: Ontologies for Engineering

Emilio Sanfilippo presented “Features and Components in Product Structure Models”. The context is knowledge representation for product modeling, focusing on representing the physical layout of a product (e.g., the assembly of a car engine). The work adopts the composition operator introduced by Kit Fine for assembly representation. For representing component parts and features, among three different approaches — trope-based, spatially qualified predicate, and spatial part — they chose the last.

Stefano Borgo presented “Formalizing and Adapting a General Function Module for Foundational Ontologies”. Among the different understandings of what a function is, the work focuses on the meaning of functions in Biology and Engineering, which have some commonalities, such as being of an object, in a system and related to a goal. They then propose a module for foundational ontologies that describes functions in the chosen sense (the formalization is in the paper). The module is exemplified in BFO and DOLCE, but formalized only in the latter.

Lastly, I presented the paper “Towards an Ontology of Requirements at Runtime”, the work of my Masters student Bruno Borlini and other colleagues from Nemo. The presentation went well, I think. We also received a distinguished paper award!


Session: Empiricism and Measurement

Miroslav Vacura presented “Event Categories on the Semantic Web and their Relationship/Object Distinction”. They use the PURO (Particular Universal Relationship Object) ontological language, which allows engineers to generate/analyze/compare OWL ontologies from background models. Background models are composed of six language primitives, plus subTypeOf and instanceOf relationships. The approach follows modeling by example, allows for higher-order types, relationships are not limited to binary arity, reification is expressed directly, and there is an online visual tool. They then analyzed popular OWL ontologies about events: The Event Ontology, The Simple Event Model Ontology, Linking Open Descriptions of Events, the Time-Indexed Situation design pattern (ODP Library), and the DBpedia Ontology. They end up with an empirical categorization of events into four classes: actions, happenings, planned “social” events and structural components of temporal entities (“arbitrary” events). Finally, they map events to their ontological language PURO, depending on the case.

Claudio Masolo presented “Observations and Their Explanations”. The work concerns the provenance and traceability of data expressed in knowledge representation (ABox, TBox), and is a preliminary attempt to explicitly represent the link between observations and their explanation. They propose a First-Order Logic framework with a justification operator, which justifies an ABox assertion based on observations, and an explanation operator, which explains an observation using a set of observations. This paper received the conference’s best paper award.

Peter Chapman presented “Antipattern Comprehension: An Empirical Evaluation”. The work uses Concept Diagrams, which graphically encode Description Logic information such as subsumption, disjointness, object properties, etc. These diagrams can be constructed by building a single diagram for each DL assertion and then merging the diagrams. Not all merges, however, are helpful (w.r.t. the result of the merge being more useful graphically than the separate diagrams). The authors conducted a study in which some people worked with merged Concept Diagrams, some with multiple diagrams and some with DL in Protégé. All groups had to answer the same questions about the models. For 4 out of 5 evaluation criteria, merged diagrams did better (and for the last one all groups were equally bad).


Session: Foundations

Megan Katsumi presented “What is Ontology Reuse?” The paper deals with the question of when extending an ontology (using its axioms and adding new ones) ceases to be reuse and becomes something else. Informally, T is reused to create Trequired if it serves as input in the design of Trequired. Intuitively, if you reuse some theory T, one should expect to find some of T in your ontology, i.e., some of the intended models of T should also be intended models of your ontology. The definition of reuse also accounts for the case of reusing a reduced theory T (e.g., just some concepts). The work presents four distinct reuse operations: as-is, extension, extraction and combination.

Brandon Bennett presented “Defining Relations: a general incremental approach with spatial and temporal case studies”. From the abstract (sorry), the goal is to “lay a foundation for a systematic study of mechanisms for construction of definitions within a formal theory, by investigating operators for incremental construction of definitions of new relations from an existing set of primitives and previously defined relations”. The operators for constructing relations are: negation, converse, conjunction and composition. An automatic generator combines the relations using these operators, excluding equivalent relations. They apply the approach to two well-known theories.
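The four operators are easy to picture with binary relations represented as sets of ordered pairs. The finite domain and the less-than relation below are my own toy example, not the paper's case studies:

```python
from itertools import product

# Binary relations over a small finite domain, as sets of ordered pairs.
DOMAIN = {1, 2, 3}
FULL = set(product(DOMAIN, DOMAIN))  # the universal relation

def negation(r):
    """Pairs not in r (complement w.r.t. the universal relation)."""
    return FULL - r

def converse(r):
    """Swap the arguments: (y, x) for every (x, y) in r."""
    return {(y, x) for (x, y) in r}

def conjunction(r, s):
    """Pairs related by both r and s."""
    return r & s

def composition(r, s):
    """(x, z) such that x r y and y s z for some intermediate y."""
    return {(x, z) for (x, y1) in r for (y2, z) in s if y1 == y2}

less_than = {(x, y) for x, y in FULL if x < y}

# The converse of < is >, and composing < with itself relates
# elements that are at least two apart.
assert converse(less_than) == {(x, y) for x, y in FULL if x > y}
assert composition(less_than, less_than) == {(1, 3)}
```

An automatic generator in this spirit would repeatedly apply the four operators to the current set of relations, discarding any result extensionally equal to one already generated.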

Selja Seppälä presented “The Functions of Definitions in Ontologies”. The work analyzes lexical resources from ontologies and other sources of information, e.g., dictionaries. They use Marconi’s theory of lexical competence: referential competence and inferential competence, subdividing the latter into output inferential competence and semantic inferential competence. The authors then classify the items present in definitions (dictionaries, ontologies) into these three types of competence. They show that both resources fulfill cognitive and linguistic functions, although realized in different ways, analyzing such realizations.


Day 3: Saturday, July 9th

Friederike Moltmann gave the fourth and final keynote: “Natural Language Ontology”. As with the previous keynote, she also distributed hand-outs; I asked her for copies and she was kind enough to allow me to make them available here.


Session: Cognition, Language and Semantics

Daniele Porello presented “Understanding Predication in Conceptual Spaces”. The goal of the work is to understand the relationship between a language (e.g., predicate logic) and a conceptual space — more specifically, predicates–concepts and individual constants–objects.

Mattia Fumagalli presented “Concepts as (Recognition) Abilities”. The work builds on the Teleosemantics interpretation of concepts as objects having (biological) functions. After an analysis of this and other related notions, focusing on substance concepts, they provide an Ontology of (Recognition) Abilities and a prototype methodology for discovering which classes of an ontology correspond to recognition abilities.

Aldo Gangemi presented “Adjective Semantics in Open Knowledge Extraction”. The work concerns the interpretation of adjectives in natural language processing, focusing on the problems of sectivity and framality in order to support open knowledge extraction. The proposed solution is based on a simple quality-oriented representation. A tool is available.


Session: Ontology of Social Reality

Nicola Guarino presented “Towards an Ontology of Value Ascription”. The work analyzes the concept of “value”, which has many different conceptualizations. They conduct an ontological analysis of value, asking what, when, who, why, etc. questions regarding the ascription of value.

Pawel Garbacz presented “A Formal Ontology of Texts”. The context is an OWL 2 DL ontology, focusing on the “knowledge resources” class. A language is proposed to formally represent texts, with primitive terms (being a text, dependency, occurrence, precedence, etc.) — 85 axioms/definitions in total. It uses BFO as the top-level ontology.

Zena Wood presented “Considering Collectives: Roles, Members and Goals”. The work extends her PhD thesis from 2011, which considers collectives as concrete particulars and classifies them according to membership, coherence, location, differentiation of role and depth. The work dealt with three limitations of the original proposal: the effect of membership change on the identity of the collective; the importance of roles; and the combination of classifications due to sensitivity to temporal scope. A literature review was conducted, divided into these three questions, and then some ideas for extending the original model were presented and discussed.
