In the morning session of the last day of the conference, Sebastian Uchitel (Imperial College London, UK) presented a paper called Generating Obstacle Conditions for Requirements Completeness, co-authored with Dalal Alrajeh, Jeff Kramer, Axel van Lamsweerde and Alessandra Russo.
The motivation for this work is a key problem in Requirements Engineering: requirements incompleteness, a major cause of software system failure. The problem is closely related to risk analysis: anticipating what could go wrong, when, and why. The question is: how can we support analysts in understanding the domain and tackling this problem?
The proposal is to model obstacles to goal achievement as conditions under which a goal is violated in a given domain. With a complete set of obstacles, negating all of the obstacles entails the satisfaction of the goal. The approach is composed of three steps: (1) identify obstacles; (2) assess their likelihood and severity; and (3) explore resolutions.
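To make the completeness condition concrete, here is a sketch of the standard goal-obstruction conditions from the obstacle-analysis literature (my rendering, not the paper's own notation): with goal G, domain theory Dom, and obstacles O_1, …, O_n,

```latex
% An obstacle O obstructs goal G in domain Dom:
\{O, \mathit{Dom}\} \models \neg G
% and is consistent with the domain:
\{O, \mathit{Dom}\} \not\models \mathit{false}
% A set of obstacles O_1, \dots, O_n is complete for G when
% negating all of them suffices to guarantee the goal:
\{\neg O_1, \dots, \neg O_n, \mathit{Dom}\} \models G
```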
The literature offers many techniques for identifying obstacles, but no tool support. The authors therefore set out to fill this gap, proposing a technique that combines Inductive Logic Programming and Model Checking (ILP and MC). ILP takes as input a partial requirements specification and a set of good and bad examples (traces that the specification should or should not allow) and produces extended specifications that “explain” the examples. The MC part is a model checker that generates counter-examples (bad examples) and witnesses (good examples) from the partial specification, providing the input to ILP.
For obstacles in particular, the model checker is asked to determine whether the negation of the goal can be entailed from the information modeled about the domain. Witnesses and counter-examples are produced, and ILP identifies an obstacle that “explains” the examples; the obstacle is then added to the partial specification. The new specification is fed into the model checker again to find further obstacles. The process continues until no further obstacles are found and we thus have a complete set of obstacles.
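The iterative loop just described can be sketched in a few lines of Python. This is a toy illustration only: `model_check` and `ilp_learn` are stand-ins for the real tools (an LTL model checker and an ILP system), and the obstacle names are invented; the point is the overall generate-learn-extend cycle.

```python
def model_check(spec, all_obstacles):
    """Stand-in model checker: return a (counterexample, witness) pair
    for an obstacle not yet covered by the spec, or None when the
    negated goal is no longer reachable."""
    for obstacle in sorted(all_obstacles):
        if obstacle not in spec:
            # A real checker would emit traces; we encode the cause in a string.
            return ("trace violating goal via " + obstacle,
                    "trace satisfying goal")
    return None

def ilp_learn(counterexample, witness):
    """Stand-in for ILP: recover the obstacle condition that 'explains'
    why the counterexample violates the goal while the witness does not."""
    return counterexample.split("via ")[1]

def generate_obstacles(partial_spec, all_obstacles):
    """Iterate model checking and ILP until no further obstacles are found."""
    spec = set(partial_spec)
    obstacles = set()
    while True:
        result = model_check(spec, all_obstacles)
        if result is None:       # no further counter-examples:
            return obstacles     # the obstacle set is complete
        counterexample, witness = result
        obstacle = ilp_learn(counterexample, witness)
        obstacles.add(obstacle)
        spec.add(obstacle)       # feed the extended spec back to the checker
```

For example, `generate_obstacles({"goal", "domain"}, {"AmbulanceBreaksDown", "RadioFailure"})` iterates twice and returns both hypothetical obstacles.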
The specification given as input to the model checker consists of a state machine, the goal, domain assumptions, and fluents. Obstacles are then produced in Linear Temporal Logic (LTL) and added to the specification. The approach is combined with interaction with domain experts, who, prompted by the discovered obstacles, may introduce new concepts about the domain that had not previously been modeled.
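As a purely hypothetical illustration of the shape of such LTL formulas (the predicate names are invented, not taken from the paper), a goal and a candidate obstacle might look like:

```latex
% Goal: every reported incident is eventually attended
G = \Box(\mathit{IncidentReported} \rightarrow \Diamond\,\mathit{AmbulanceOnScene})
% Coarse obstacle: some incident is reported but never attended
O = \Diamond(\mathit{IncidentReported} \wedge \Box\,\neg\mathit{AmbulanceOnScene})
```

The coarse obstacle above is essentially the goal's negation; the value of the technique lies in learning finer, domain-specific conditions (say, an ambulance breaking down en route) that entail it.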
The approach was validated using Emmanuel Letier’s specification of the London Ambulance System (available in his PhD thesis).