Summary of the NII Shonan Meeting on Engineering Adaptive Software Systems

From September 9th through 12th I participated in the NII Shonan Meeting on Engineering Adaptive Software Systems (EASSy), organized by the Japanese National Institute of Informatics at the Shonan Village Center in Kanagawa, Japan. The Shonan Meetings are Dagstuhl Seminar-like meetings of researchers on a specific theme, and this was the second edition on the subject of adaptive systems. As usual, in order to force myself to pay attention and to share the knowledge with others, I wrote quick summaries of the presentations that were given during the seminar.

The rest of the post summarizes the presentations in chronological order, following the schedule. If you want, you can jump to Day 1, Day 2, Day 3 or Day 4. Videos, slides and abstracts of the presentations (provided by the authors themselves, with references) are also available on the EASSy website.

Day 1: Monday, September 9th

Session 01: Welcome

The first session was an introduction to the Shonan Meeting, practical information and welcoming message, with a quick self-introduction by each of the participants at the end.

Session 02: Victoria/Icesi Universities

Hausi presented “Software Engineering at Runtime — Situation-Aware Smart Application”. His interest is to take design-time engineering techniques/activities and move them to runtime (for instance, changing a statechart or a requirements model at runtime for adaptation requires that verification & validation also be done at runtime). Hausi argued that we are currently in the age of context (he also mentioned GE’s Industrial Internet), with devices becoming more and more interconnected and smart (smartphones, Google Glass, Telepathy One, etc.), but Software Engineering textbooks have not yet included software adaptation in their outlines. So he calls for rethinking Software Engineering towards this goal. He advocates for “control science”, which, as stated in the SEfSAS2 book roadmap paper, “characterize this research realm that combines self-adaptation with run-time V&V techniques to regulate the satisfaction of system requirements”.

After a brief discussion on Hausi’s presentation, Norha followed with “Dynamic Context Management and Reference Models for Dynamic Self-Adaptation”. The focus of the research presented is to “maintain the relevance of situation-awareness, with respect to changing requirements and context situation to improve QoE (Quality of Experience) and self-adaptivity”. She divided this into three challenges: (1) the complete specification of context is impractical at design-time (uncertainty); (2) context monitoring infrastructure must be self-adaptive and user-driven; (3) the need for reference models for self-adaptation that address dynamicity at three different levels. One of their contributions towards these challenges is the SmarterContext Ontology and Context Spheres, which provide modeling support for context entities, context reasoning, context monitoring requirements and privacy policies. The infrastructure itself is adaptable at runtime. Another contribution is DYNAMICO, a reference model for context-aware self-adaptation, which can dynamically deploy new service components that work as sensors and analyzers in the MAPE adaptation loop, making any of the MAPE activities themselves adaptive. These contributions can be found in her PhD thesis (which includes an implementation for adaptive monitoring).

Gabriel followed with “Self-Adaptive Software Systems: Properties and Assessment”. He highlighted that there’s a lack of standard mechanisms to certify adaptive software systems, and he argues that this has become a relevant problem for SEfSAS: there are plenty of approaches for designing and developing self-adaptive systems, but no mechanism to verify that they work properly, which would increase their trustworthiness and eventually their adoption in industry. Gabriel’s recent work produced a Self-Adaptive Systems (SAS) Mechanisms and Properties catalog, extracted from Control Theory (SASO) and seminal SAS papers. He then mapped these properties to quality attributes, which allows one to assess them at runtime. He proposes a run-time V&V process for it based on the MAPE-K loop. Gabriel also presented some open challenges, such as deciding between structural and behavioral SAS mechanisms; how to verify whether a SAS mechanism preserves a given set of SAS properties at runtime; whether it’s possible to find a set of principles to design such mechanisms; and trade-offs between SAS properties.
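To make the MAPE-K structure mentioned above concrete, here is a minimal, illustrative sketch of one loop iteration in Python. The requirement (latency ≤ 100 ms), the action name and the crude capacity model are my own assumptions for illustration, not Gabriel’s implementation.

```python
# Minimal sketch of a MAPE-K adaptation loop. "Knowledge" is a shared
# dict that every phase reads from and writes to.

def monitor(system, knowledge):
    """Collect raw measurements from the managed system."""
    knowledge["latency_ms"] = system["latency_ms"]

def analyze(knowledge):
    """Detect a violation of the (assumed) requirement: latency <= 100 ms."""
    knowledge["violation"] = knowledge["latency_ms"] > 100

def plan(knowledge):
    """Choose an adaptation action; here, a single hard-coded strategy."""
    knowledge["action"] = "add_replica" if knowledge["violation"] else None

def execute(system, knowledge):
    """Apply the planned action to the managed system."""
    if knowledge["action"] == "add_replica":
        system["replicas"] += 1
        system["latency_ms"] //= 2  # crude model: doubling capacity halves latency

def mape_iteration(system, knowledge):
    monitor(system, knowledge)
    analyze(knowledge)
    plan(knowledge)
    execute(system, knowledge)

system = {"latency_ms": 240, "replicas": 1}
knowledge = {}
mape_iteration(system, knowledge)
print(system)  # after one iteration: a replica was added and latency dropped
```

A runtime V&V process in the sense Gabriel describes would then check, at each iteration, that properties from the catalog still hold over `system` and `knowledge`.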

Session 03: Korea University

  • Peter Hoh In (Korea University, Korea)
  • Jeong-Dong Kim (Korea University, Korea)

Peter presented “Dynamic Self-Adaptive Software Technology Using Collective Intelligence”. Having just received a grant for his Center for Autonomous and Adaptive Software, he has recently begun investigating challenges in the field and presented some of them (his research agenda). His interests are the Internet of Things and what he calls “Mega-Ecosystems”, i.e., ecosystems composed of other ecosystems (e.g., smart homes have smart devices which are ecosystems by themselves, and might participate in larger ecosystems such as a smart city, and so on). In this context, challenges include increasingly complex and diverse systems (and systems of systems), the limits of human intervention and the vulnerability of these systems to dynamic changes. He proposes to investigate Software Engineering methodologies and tools to transform existing non-adaptive systems into adaptive ones (what he called “adaptization”).

Other research focuses mentioned during his presentation (current research -> his research agenda): from one/homogeneous robot -> to multiple/heterogeneous robots; from centralized decisions -> to distributed decisions; from simple knowledge bases (syntactic) -> to ontological knowledge bases (semantic); from adaptive systems -> to adaptive systems of systems; from ad-hoc design -> to adaptization. His proposal is to use collective intelligence to tackle these challenges, and he divided this proposal into four parts: (1) situation-aware based self-adaptive software system modeling and monitoring; (2) collective intelligence-based self-diagnosis for problem analysis; (3) self-growing adaptation strategy based on collective intelligence; (4) development that weaves adaptive patterns into non-adaptive software and middleware.

Comments from participants suggested that there is a large body of work on the individual issues presented by Peter, and the big challenge is to integrate this in the big “system of systems” picture. John also highlighted the difference between systems of systems and ecosystems, mainly that systems have requirements, while ecosystems do not (only its participants, the systems, do).

Session 04: Lero, Ireland

Bashar started with “Adaptive Security & Privacy”, describing the research agenda of collaborative work between Lero and The Open University: Adaptive Security, Adaptive Privacy and Adaptive Forensics. Their research focuses on the requirements space; in the security realm, therefore, they are interested in assets (things you want to protect) and threats (things you want to protect against), which could be physical (e.g., a piece of hardware), informational (e.g., some data) or even social (e.g., reputation). Analyzing the security problem, in the solution space they talk about vulnerabilities, attacks and counter-measures. In the problem space there are security goals and requirements. Between them there might be risks and policies. Adaptation can be related to any of those concepts. Mazeiar would detail this topic in his talk.

In the Adaptive Privacy area, privacy being inherently personal, context-sensitive and socially constructed, they have been applying some HCI concepts to study what they call “privacy dynamics”, combining research in social psychology and machine learning in a crowdsourcing-type approach, but focusing on specific groups. Bashar briefly mentioned three empirical studies they have already conducted in this field (one of them called Buddy Tracker). He advocates moving contextual information from raw data (e.g., the coordinates of Shonan Village) to a more meaningful, social definition (e.g., the EASSy workshop, what we’re doing here, etc.). Bashar also mentioned a tool for engineering adaptive privacy called Caprice. Their current focus is to use the results of these preliminary studies to investigate how this affects the development of mobile ubiquitous applications.

Mazeiar presented “Adaptive Security: a Requirements-Driven Approach”. According to their research, the main goal of security is to protect valuable assets; therefore, changes in assets should be monitored and asset protection should be proactive. He presented an overview of their security framework, centered around three models: goal model (based on KAOS models), threat model (based on KAOS obstacles) and asset model. A Security Fuzzy Causal Network (SFNet) is derived from these models using a systematic process that uses input from security experts and stakeholders. The SFNet allows them to calculate utility and risk values using fuzzy causal reasoning. For each specific utility they can solve a satisfiability problem (using existing SMT tools). At the end, Mazeiar quickly mentioned some experiments on adaptive security and adaptive emergency response using their security framework.

Liliana then presented “Engineering Adaptive Digital Investigation using Forensic Requirements”. Her focus is on digital investigation: collecting and analyzing digital evidence in the context of a criminal investigation and presenting it in a court of law. There has been an increasing need for this kind of investigation, so software systems should support it, dealing with current limitations. The objective of this research is to develop an automated and adaptive process that guides a digital forensic investigation and provides robust evidence in court. They propose a three-step approach: (1) model the requirements for the digital forensics (crime scene model, generic hypotheses for crimes, suspicious events); (2) identify the moments in which it is necessary to be proactive and collect information because a crime might be in place (based on the modeled suspicious events); (3) support the reactive activities that need to take place once it is confirmed that a crime was committed (using abductive analysis to fill in gaps and help build arguments for court).

Session 05: The Open University

Yijun started with the presentation “Adaptive Software Systems”, in which he presented an overview of the group’s research on traceability, usability, reliability and privacy for self-adaptive systems. In the first part he focused on traceability, presenting challenges with respect to software evolution and large ecosystems as motivation for the research. In their research, they have proposed monitoring the evolution of the systems, keeping traceability of changes, and observing patterns that may occur again.

Lionel took over for part 2 on reliability. As motivation he presented a scenario of a software upgrade that causes a crash of the system, which costs the user/developer a lot of time to discover what happened and fix it. Their idea is to automate this solution: detect the crash; identify the problematic component; roll it back to a previous, working version (using a distance function to find the closest configuration based on the components’ version numbers); fix any dependencies that need fixing; and automatically deploy the new configuration. They base their proposal on the OSGi platform for Java. They are currently working on their first study, using the Eclipse platform as test case, and plan to use Gentoo Linux (which is not OSGi-based) for the second study.
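The “closest previous configuration” idea can be sketched as follows. The (major, minor) version encoding, the weighting, and the component names are hypothetical, since the talk did not detail the actual distance function.

```python
# Hypothetical sketch: configurations map component names to
# (major, minor) versions, and the distance between two configurations
# sums the per-component version gaps (majors weighted more heavily).

def version_distance(v1, v2):
    """Distance between two (major, minor) versions."""
    return abs(v1[0] - v2[0]) * 10 + abs(v1[1] - v2[1])

def config_distance(current, candidate):
    return sum(
        version_distance(current[name], candidate.get(name, (0, 0)))
        for name in current
    )

def closest_working_config(current, history):
    """Pick the previously working configuration closest to the current one."""
    return min(history, key=lambda cfg: config_distance(current, cfg))

current = {"core": (4, 3), "ui": (2, 1)}   # the crashing configuration
history = [
    {"core": (4, 2), "ui": (2, 1)},        # total distance 1
    {"core": (3, 9), "ui": (2, 0)},        # total distance 17
]
print(closest_working_config(current, history))
```

The real system would of course also have to check OSGi bundle dependencies before redeploying the chosen configuration.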

The next presenter was Pierre, who talked about usability for adaptive systems. The goal is to enhance the usability of complex systems by making the UI adaptive. The focus is on Enterprise Applications (ERPs, CRMs, etc.). Surveys indicate that such systems suffer from usability problems. They propose a Role-Based User Interface Simplification (RBUIS) mechanism that provides users with a minimal feature set and optimal layout in their UIs. RBUIS also provides a feedback mechanism so users, when not satisfied with the simplified UI, can access a list of simplifications and turn some of them off. The proposal is based on Kramer & Magee’s three layer architecture, the Chameleon framework (Calvary et al.) and the MVC architecture. It has a CASE tool (Cedar Studio) that supports developers and visual designers.

Finally, Asoka talked about “Engineering Adaptive Privacy”, picking up from Bashar’s earlier overview. Asoka argued that in the MAPE loop for adaptation, when considering privacy requirements the user and the software engineer should be active in the picture. Asoka mentioned again how privacy requirements are very difficult to elicit, being socially constructed, highly dependent on context (which changes a lot for mobile users), difficult to articulate by users, etc. They therefore propose an approach called “privacy requirements distillation process” (Bashar mentioned a few experiments that use it), that uses a framework called “privacy facets” to extract information flow models from structured qualitative data provided by the users and come up with new requirements to be satisfied by the application in order to deal with privacy concerns from users.

Day 2: Tuesday, September 10th

Session 06: Lucretius (Trento)

We started the second day with our presentations from the Lucretius project, John’s ERC advanced grant based in Trento. John opened with “Engineering Adaptive Software Systems: A Requirements Engineering Perspective”, in which he presented an overview of our research ideas in the Lucretius project, especially those related to adaptive systems.

Ivan followed with “Requirements problem and solution concepts for adaptive systems”, in which he discussed two main questions regarding Requirements Engineering modeling: (a) how do adaptive systems RE and traditional RE differ? (b) can we use existing requirements modeling languages for adaptive systems RE? He then claimed that adaptive systems RE and classic RE differ in their minimum modeling specification and presented his idea of a minimal adaptive systems RE problem.

Finally, I gave my talk, entitled “Requirements-based Software System Adaptation in Practice: the Zanshin Framework”, in which I gave an overview of the Zanshin approach and framework for the design and development of adaptive systems. During the talk, I illustrated the ideas behind Zanshin using the A-CAD report, so details on the examples presented can be obtained there.

Session 07: Canadian Universities

Patrick started with “Exploiting Big Data in Engineering Adaptive Cloud Services”. He defined “Big Data” in terms of three V’s: volume, velocity and variety (with veracity sometimes added as a fourth). It’s also related to data analytics that support effective on-line decision making (as opposed to off-line analytics done in data warehouses). As for clouds, adaptive cloud services would exploit the elasticity and mobility of the cloud environment. Patrick’s claim in the talk is that big data has an important role to play in the engineering of adaptive software systems, in general, and adaptive cloud services, in particular.

He then presented work that he has been doing: the QuARAM Framework for cloud services. The framework supports a QoS-aware management of services, using case-based reasoning for brokering; workload forecasting and performance prediction for provisioning; monitoring, workload forecasting and performance prediction for elastic services (the run-time adaptation part).

Marin followed with “Adaptive Management in Extended Clouds”. In his talk, he presented extended types of clouds called Hybrid Clouds and SAVI Clouds. Hybrid clouds consist of mixing private clouds (limited capacity, low latency, private) and public clouds (high capacity, low cost, high latency, lack of privacy) for running cloud applications. His motivating scenario, called “cloud bursting”, is companies using private clouds most of the time for reasons of privacy (in some cases prescribed by law) but needing more capacity during specific high-demand events (e.g., Black Friday in the U.S. for e-commerce companies). Some techniques allow you to move parts of the application to public clouds.

SAVI Clouds is a Canadian project (savinetwork.ca) that spans several research themes, including future internet, adaptive management of applications, network management, etc. (SAVI stands for Smart Applications on Virtual Infrastructures). The goal of the project is to explore two-tier clouds: edge (low latency, high bandwidth, limited storage and computing) and core (virtually unlimited storage and computing capacity). This would demand integrated, end-to-end adaptation and involve challenges on how to partition the code between edge and core portions, integrate different adaptation layers (application, platform, network) and consider users’ geographical locations, for instance.

Session 08: National Institute of Informatics, Japan

Honiden sensei started with the talk “Engineering Adaptive Software Systems @ NII”. At NII, research on adaptive systems focuses on four sub-areas: (a) traceability maintenance to localize changes; (b) adaptation space analysis; (c) adaptation; and (d) change propagation. These four topics were covered by the group’s presentations that followed.

Prof. Honiden continued with a second presentation, “Designing adaptive systems using feedback loops”. In their research, they use KAOS to model goals and design adaptable systems by using a control loop pattern when designing the goal models, annotating them with behavioral tags related to each action of the feedback loop. This pattern comes with a systematic process for elaboration of the model and allows one to easily map the goal model into an architectural model for traceability. For the implementation of the control loop, they introduce what they call a “Promela description”, which contains variable definitions, LTL formulas specifying domain properties and adaptations described as processes (pre- and post-conditions that define the adaptation). He illustrated this approach with the ZNN.com exemplar.

Kenji continued with the talk “Composition-based interaction design for adaptable distributed software systems”. His background is in networks and his current focus regarding adaptive systems is on distributed systems. Given the basic satisfaction criteria of Zave & Jackson (S, D |- R), when the domain D changes to D’, the specification S should also change to an S’ in order to adapt the system. Kenji is interested in the traceability from requirements models to subsequent structural (component) and behavioral (sequence) models: when changes happen in the goal model, how are they propagated to these subsequent models (in particular, to the behavioral model)? Their idea is to apply composition-based interaction design: each requirement would be associated with a piece of interaction that would be automatically composed into the whole behavioral model. When changes happen, the model is rebuilt from the composition of the pieces of interaction associated with the new requirements.
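The composition idea can be sketched as follows, assuming (hypothetically) that interaction fragments are plain lists of message exchanges and that composition is simple concatenation; the actual approach composes much richer behavioral models.

```python
# Sketch of composition-based rebuild: each requirement maps to an
# interaction fragment (a list of (sender, receiver, message) tuples);
# the behavioral model is rebuilt by composing the fragments of the
# currently active requirements. All names here are made up.

FRAGMENTS = {
    "login":    [("User", "System", "credentials"), ("System", "User", "token")],
    "checkout": [("User", "System", "order"), ("System", "Bank", "charge")],
}

def compose(requirements):
    """Rebuild the whole behavioral model from per-requirement fragments."""
    model = []
    for req in requirements:
        model.extend(FRAGMENTS[req])
    return model

# When the requirements change, the model is simply recomposed:
v1 = compose(["login", "checkout"])  # 4 message exchanges
v2 = compose(["login"])              # "checkout" dropped after a domain change
```

The point of this style is that a change in the goal model never has to be patched into the behavioral model; the behavioral model is always regenerated from the fragments.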

Next, Fuyuki presented “Exploration of Adaptation Space — Linking with Efforts in Service-Oriented Computing”. His background is on service-composition, so he tries to link previous efforts in this area with the area of adaptive systems. He proposes to map concepts of service-composition to GORE: functional consistency and global quality constraints are mapped to hard goals, while prioritized quality criteria are mapped to softgoals. At runtime, the different services can be analyzed and the best composition made for each situation.

Soichiro continued with “Bidirectional Graph Transformation Infrastructure and its Applications”. When deriving a target artifact from a source artifact, bidirectional transformation consists in being able to propagate changes made in the target artifact back to the source artifact (e.g., co-evolution of a model and the code generated from it). He’s particularly interested in doing this with graphs, which pose three main challenges: how to deal with termination of graph transformation; how to deal with equality of two graphs; and how to correctly reflect changes on the target back to the source. His solutions to these challenges are embodied in a system called GRoundTram.
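To give an intuition of bidirectional transformation, here is a toy lens over flat dictionaries (GRoundTram itself works on graphs, which is considerably harder). The `get`/`put` names come from the BX literature, and the two round-trip checks correspond to the usual GetPut/PutGet laws; the field names are made up.

```python
# A toy lens: get projects a view out of the source; put writes an
# edited view back, preserving the parts of the source the view hides.

FIELDS = ("name", "email")

def get(source):
    """Derive the view: only the projected fields."""
    return {k: source[k] for k in FIELDS}

def put(source, view):
    """Write view edits back, keeping fields the view does not show."""
    updated = dict(source)
    updated.update(view)
    return updated

source = {"name": "Ada", "email": "ada@example.com", "role": "admin"}
view = get(source)
assert put(source, view) == source     # GetPut: unchanged view, unchanged source

view["email"] = "ada@new.example"
updated = put(source, view)
assert get(updated) == view            # PutGet: edits survive the round trip
assert updated["role"] == "admin"      # hidden field preserved
```

For graphs, the hard part is exactly what the talk listed: deciding when two graphs are equal and mapping an edit on the target graph back to a well-defined edit on the source.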

Finally, Zhenjiang presented “Can BX be used for implementing adaptive software? — Put-based Bidirectional Programming” (BX standing for “bidirectional transformation”, presented earlier). He claims that changes in components of a system could be dealt with by small transformations carried out using BX. This is a new idea, so there’s no definitive answer to the question posed in the title of the presentation yet.

Session 09: Chinese Universities

Xin Peng presented “Human Factors in Self-Adaptive Systems”. He focuses on Socio-technical Systems and discusses the role that humans can have in self-adaptive systems (as an expert during elicitation, as the user of the system, as a component of the system, as an agent with their own goals, etc.). Considering humans as experts, Xin argues for multi-layered control loops (infrastructure, design, business — the different kinds of expertise), which interact with one another (bottom-up and top-down). For the user role, he argues for implicit monitoring of user behavior/feedback (e.g., eye tracking), as different users (or same user in different contexts) may have different expectations regarding the system’s quality.

For the agent role, he argues for decentralized requirements monitoring and adaptation and support for social interactions among agents. In this part of the talk, he mentioned his proposal for requirements monitoring and hierarchical repair, presented in an RE 2012 paper and proposed to use evolving commitments for adaptation. Finally, for the component role he mentioned Dorn and Taylor’s ICSE 2013 work and mentioned some other considerations when humans play a component part in a system (organizational aspects, side effects of adapting with humans, bottlenecks, etc.).

Bihuan followed with “Model-Based Self-Adaptation from Requirements to Architectures: a Decision-Making Process”. He proposes a framework for self-adaptation that combines requirements- and architectural-based forms of adaptation. Architecture-based adaptations range from simply adding/removing components to making cross-cutting changes throughout the model. Model adaptations are achieved through incremental and generative model transformations.

Moving from Shanghai to Beijing, Guiling was next, with the talk “User-Driven Situational Service Mashups”. Situational applications are “good enough” software created for a narrow group of users with a unique set of needs that continues to evolve over time to accommodate changes in these needs. For these cases, she proposes to use service mashups to create applications to suit the specific needs of the users.

Lin Liu then presented “Software Systems Adaptation by Composition”. I’m not sure I got the message during her presentation. What I understood is that she was claiming that for service composition, if you need to do adaptation you need to use environment modeling and mashup composition, as done by other researchers in her institution.

Day 3: Wednesday, September 11th

Session 10: Kent / Linnaeus Universities

  • Rogerio de Lemos (University of Kent, UK)
  • Danny Weyns (Linnaeus University Sweden)
  • Muhammad Usman Iftikhar (Linnaeus University Sweden)

Rogerio started the day with the session “Provision of Assurances” (integrated talks among the three presenters). His talk, “Architecting Resilience: Handling Accidental and Malicious Threats”, was divided into three parts: integration testing, evaluating resilience and self-adaptive authorization infrastructures. However, Rogerio preferred to use his time to foster discussion on the topic, presenting the following questions: (1) Does it make sense to test self-adaptive systems at runtime? If yes, what would be needed? (2) How effective is empirical evidence when evaluating the resilience of self-adaptive systems? Should it be done at development-time or runtime? (3) How useful is self-adaptation in dealing with insider threats? What should be monitored, analyzed and controlled? He closed the talk saying that some answers, along with references, can be found in the slides, which will be made available.

Danny followed with “ActivFORMS: Active Formal Models for Self-Adaptive Systems”. After reviewing some of the state-of-the-art in the use of formal models for the design of adaptive systems, to provide guarantees (assurances), he proposes to use active formal models to formalize the full MAPE-K loop (as opposed to previous work, which focuses on the K). The work is based on the three levels of Kramer and Magee, focusing on the change management level (where the active models are) and goal management (which monitors the active model and, in case of need, adapts it). There’s also a third level of adaptation, consisting of adding a new goal, which also triggers an adaptation of the MAPE loop.

Session 11: Hosei University and SEI

Tetsuo presented “Role-based Adaptive Modeling Framework ‘Epsilon’ and a Case Study”. Epsilon is a role model whose goals are to support adaptive evolution, enable separation of concerns and advance reuse (the focus in the talk was the first goal). The idea of Epsilon is to have contexts enclose a set of roles and allow objects to enter/leave certain contexts by binding/unbinding them to/from roles. Based on this model, they have designed a language called EpsilonJ to declare contexts, roles and bindings. Further, they have developed a transformation framework that can go from i* requirements models to Epsilon code and used it in some adaptation scenarios (e.g., one built based on this paper).
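The context/role mechanics can be illustrated with a toy Python rendering; EpsilonJ’s actual syntax and semantics differ, and all names below are made up for illustration.

```python
# Toy illustration of the Epsilon idea: a context encloses roles, and an
# object plays a role only while it is bound inside that context.

class Context:
    def __init__(self):
        self.bindings = {}   # object id -> role name

    def bind(self, obj, role):
        """The object enters this context, playing the given role."""
        self.bindings[id(obj)] = role

    def unbind(self, obj):
        """The object leaves the context and loses its role."""
        self.bindings.pop(id(obj), None)

    def role_of(self, obj):
        return self.bindings.get(id(obj))

class Person:
    def __init__(self, name):
        self.name = name

shop = Context()
alice = Person("Alice")

shop.bind(alice, "Customer")   # Alice enters the shop context as a Customer
print(shop.role_of(alice))     # -> Customer
shop.unbind(alice)             # Alice leaves the context
print(shop.role_of(alice))     # -> None
```

Adaptive evolution in this style amounts to rebinding: the same object can play different roles in different contexts without its class changing.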

Then, Scott gave his talk, entitled “Coordinating Architecture-Based Self-Protecting Systems”. Their goal is to improve the ability to resist attacks on systems by making them self-protecting. There are some approaches towards this goal, some of them proactive, some reactive; some based on little information, some based on observed evidence. His idea is to combine these approaches in a coordinated way to exploit their advantages.

Afternoon excursion

In the afternoon we went on an excursion to Kamakura.

Day 4: Thursday, September 12th

Session 12: the Italian guys

Giordano started the last session of the meeting with “Managing Non-Functional Uncertainty via Model-Driven Adaptivity”. He presented the ADAM approach (Adaptive Model-driven execution), which uses model transformation and probability theory to guide the developer in the design of adaptive systems. I’ve seen this presentation before, during ICSE 2013’s track on adaptive systems. I provide a summary of his ICSE presentation here.

Valerio then presented “Dynamic Updates and Self-adaptation. Can we fill the gap?”. Software updates for unanticipated changes usually happen in an offline manner, but some kinds of software cannot be taken offline for these updates. The goals of his work are then to provide updates that are both safe and applied as soon as possible, dynamically at runtime. The work was presented at the past two editions of the SEAMS symposium, and my summaries of his presentations can be found here and here. Valerio closed his presentation with some considerations on safety of adaptations (dynamic updates could be used to automatically identify safe updatable states instead of having to wait for quiescent states); automatic generation of self-adaptive systems (he proposes to integrate goal models and scenario-based specifications); and dynamic updates on systems that are already self-adaptive (applying the approach to the MAPE-K loop).

Discussion session

The last session was reserved for an open discussion, targeting the organization of a book about the workshop contents.
