Summary of SEAMS 2013

On May 20th and 21st I participated in the 8th International Symposium on Software Engineering for Adaptive and Self-Managing Systems (SEAMS 2013), which was co-located with the 35th International Conference on Software Engineering (ICSE 2013), in San Francisco, CA (USA). This year, I was really happy to have two full papers and a short paper accepted, the result of work with colleagues from Trento after I finished my PhD.

As in 2012 and 2011, I wrote quick summaries of the presentations that were given in the symposium. At the end of the post I also included a summary of the ICSE technical track on Adaptation. The rest of the post summarizes the presentations in chronological order.

Day 1: Monday, May 20th

Keynote: A 10-year Perspective on Software Engineering for Self-Adaptive Systems
David Garlan (Carnegie Mellon Institute for Software Research)

David divided the keynote into three parts: (i) a history of the SEAMS community; (ii) an analysis of his own proposal, the Rainbow framework (what did they get right and what did they get wrong?); and (iii) challenges moving ahead: open problems and ideas for progress.

SEAMS was born from an ACM SIGSOFT Workshop on Self-Healing Systems (WOSS’02), about 10 years ago. The motivation behind that workshop was to bring together many different areas, focusing on Software Engineering and on the idea of moving from an open loop to a closed loop in the development of this kind of system. WOSS’02 was also motivated by a DARPA project in the U.S. on Dynamic Assembly for System Adaptability, Dependability and Assurance (DASADA). Professor Garlan also commented on parallel milestones such as the ICAC, SASO and SOAR conferences, the IBM Autonomic Computing efforts and the TAAS and IJAC journals.

In the second part, David presented an overview of the Rainbow framework and the Stitch language, highlighting the four basic principles behind their creation: a control-system model (deciding the next action based on observed effects of previous actions), a value system (utility-based selection of adaptation strategies), asynchrony (considering “settling time” explicitly) and uncertainty (effects are known only within a certain probability). Garlan concluded with a list of lessons learned: using a model-based approach for adaptation based on control systems, implementing it in a reusable framework and having a repair language (Stitch) were all considered things they got right. In the “wrong” category, on the other hand, he included: (a) specifying tactic outcomes in the utility space limits the amount of reasoning one can do about strategies; (b) triggering repair when constraints are violated does not easily accommodate homeostatic adaptation; (c) humans in the loop are not currently easily supported; and (d) deployment of the framework is difficult due to the number and complexity of the plug-ins that must be installed.
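To make the “value system” idea a bit more concrete, here is a minimal sketch (in Python, with made-up quality dimensions, weights and strategies; this is not Rainbow/Stitch code) of utility-based selection among adaptation strategies:

```python
# Illustrative sketch of utility-based strategy selection (not actual Rainbow/Stitch code).
# Each strategy has estimated outcomes in a few quality dimensions; the strategy with the
# highest weighted utility is chosen. All names and numbers are hypothetical.

# Utility functions map a predicted quality value to [0, 1].
utility_functions = {
    "response_time_ms": lambda v: max(0.0, 1.0 - v / 2000.0),  # lower is better
    "cost_per_hour":    lambda v: max(0.0, 1.0 - v / 10.0),    # lower is better
    "content_fidelity": lambda v: v,                           # already in [0, 1]
}

# Relative importance of each dimension (weights sum to 1).
weights = {"response_time_ms": 0.5, "cost_per_hour": 0.2, "content_fidelity": 0.3}

# Predicted outcomes of each candidate adaptation strategy.
strategies = {
    "enlist_servers": {"response_time_ms": 400,  "cost_per_hour": 6.0, "content_fidelity": 1.0},
    "lower_fidelity": {"response_time_ms": 600,  "cost_per_hour": 2.0, "content_fidelity": 0.6},
    "do_nothing":     {"response_time_ms": 1800, "cost_per_hour": 1.0, "content_fidelity": 1.0},
}

def overall_utility(outcome):
    return sum(weights[d] * utility_functions[d](outcome[d]) for d in weights)

best = max(strategies, key=lambda name: overall_utility(strategies[name]))
print(best, {n: round(overall_utility(o), 3) for n, o in strategies.items()})
```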

In the last part of the keynote, professor Garlan listed seven challenges and ideas for future research (he had more, but ran out of time):

  1. Uncertainty is everywhere: sensing is imprecise/partial, diagnosis is often ambiguous, effects of strategies cannot be accurately predicted, systems may not behave the way we expect, etc. Ideas: adaptive probing and diagnosis, machine learning, making uncertainty a first-class citizen in our (design-time/run-time) models;
  2. Combining reactive adaptation (we know what’s wrong: fix it) and deliberative adaptation (we don’t know exactly what’s wrong: what could be done?). Inspiration could be taken from the interaction of the two systems that run in parallel in the human brain, as analyzed in the book “Thinking, Fast and Slow” by Daniel Kahneman;
  3. Humans in the loop: how can we get humans involved? As sensors (providing input), actuators (carrying out adaptation actions), knowledge augmenters, etc.
  4. Architecting for adaptability: other “ilities” are now routinely considered, but what does it mean to take adaptability into account when architecting a system? What tradeoffs are involved?
  5. Proactivity: most systems today are reactive, but anticipating and avoiding problems could be beneficial. How do we rationally balance the costs and benefits of a proactive approach? Some ideas: change strategy selection from instantaneous utility to aggregate utility, develop better ways to predict future behavior (game theory, predictive calculi, machine learning), and learn from the “planning and uncertainty” community;
  6. Concurrency, preemption, synchronization: many systems execute a single adaptation at a time, but concurrent adaptation has many potential benefits. How do we make it manageable?
  7. Distribution: centralized architectures simplify control but may not scale. How to coordinate decentralized loops? Explore alternative integration architectures and social networks.
  8. Other challenges (no time to talk about them in depth): assurances, cyber-physical systems, security, benchmark problems and metrics, compositionality.

During the Q&A session, John (Mylopoulos) asked if David agreed that requirements and domain models are also missing from the Rainbow approach and that they could be helpful, to which Garlan agreed.

Session 1: Evaluation

Do External Feedback Loops Improve the Design of Self-Adaptive Systems? A Controlled Experiment, presented by Danny Weyns from Linnaeus University, Sweden: the authors conducted an experiment comparing systems with internal adaptation mechanisms against systems with an explicit external feedback loop, in order to verify a claim that is common in self-adaptive systems proposals but for which there is no systematic scientific evidence: that the use of external feedback loops improves the design of adaptive systems. The chosen system was an adaptive traffic management system and the subjects were master’s students from their university. Results showed no evidence of improvement in dealing with activity complexity, but there was evidence of improvement in dealing with control-flow complexity, fault density and productivity. As threats to validity, they mention a possible learning effect and the fact that the subjects were master’s students rather than practitioners.

Evolving an Adaptive Industrial Software System to Use Architecture-based Self-Adaptation, presented by Javier Cámara from Carnegie Mellon University, USA: a report on the experience of applying architecture-based self-adaptation, namely the Rainbow framework, to an existing industrial middleware, namely a Data Acquisition and Control Service (DCAS, a reusable infrastructure to manage highly populated networks of devices). DCAS already had scale-up (add/remove pollers), rescheduling (degrading the priority of failing devices) and scale-out (deploying new instances) adaptation mechanisms built in; these were removed from the software and replaced by Rainbow adaptation strategies. Rainbow also had to be customized for the experiment (configuration, implementing probes, gauges and effectors, scripting). Several measurements were taken, including the effort required to customize DCAS and Rainbow. The conclusions were that architecture-based adaptation could successfully replicate and, moreover, improve the required adaptation behavior; that the effort for Rainbow’s customization was consistent with previous experiments; and that using it pays off in further system evolution.

Requirements and Architectural Approaches to Adaptive Software Systems: A Comparative Study — this is one of my papers, authored with Konstantinos Angelopoulos (who presented it) and João Pimentel. To know more about this work, you can download a preprint version of it (the proceedings haven’t yet been published).

Session 2: Qualities

Self-Adaptive Containers: Building Resource-Efficient Applications with Low Programmer Overhead, presented by Wei-chih Huang from Imperial College London, UK: motivated by the fact that the resource usage of applications varies greatly across contexts (e.g., a server, a desktop, a smartphone), the authors propose self-adaptive containers which automatically adapt their use of data structures to meet objectives, optimizing non-functional behavior and providing a scalable solution that meets SLOs (service-level objectives) specified in WSLA (Web Service Level Agreement). They implemented a C++ library that takes SLOs as input, compares system observations with the objectives and adapts by either improving the observed behavior or relaxing the SLO. The approach uses a technique called sparse Bloom filters.
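For readers unfamiliar with the idea, the sketch below (plain Python, with a made-up SLO and a deliberately naive list-to-set switch; it is not the authors’ C++ library, which relies on sparse Bloom filters) illustrates what a container that adapts its own data structure to meet an observed objective could look like:

```python
import time

class AdaptiveMembershipContainer:
    """Toy self-adaptive container (illustration only, not the authors' library).

    It monitors its own lookup latency and switches from a memory-lean list to a
    faster hash set when the observed latency violates a (made-up) SLO."""

    def __init__(self, max_lookup_seconds):
        self.slo = max_lookup_seconds   # service-level objective for lookups
        self.backend = []               # start with the compact representation
        self.use_set = False

    def add(self, item):
        if self.use_set:
            self.backend.add(item)
        else:
            self.backend.append(item)

    def contains(self, item):
        start = time.perf_counter()
        found = item in self.backend    # O(n) for a list, O(1) expected for a set
        observed = time.perf_counter() - start
        if not self.use_set and observed > self.slo:
            # Adapt: trade memory for speed to meet the SLO.
            self.backend = set(self.backend)
            self.use_set = True
        return found

c = AdaptiveMembershipContainer(max_lookup_seconds=1e-4)
for i in range(200_000):
    c.add(i)
print(c.contains(199_999), "adapted:", c.use_set)
```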

Synthesizing Self-Adaptive Connectors meeting Functional and Performance Concerns, presented by Romina Spalazzese from University of L’Aquila, Italy: evolves a previous proposal for synthesizing connectors in order to take performance concerns into account and make the connectors self-adaptive in response to requirements changes. The work is in the context of service interoperation, in which services are considered black boxes and the only place where adaptation can be added is the connectors. Adaptation strategies include dividing the behaviors of connectors, tuning the upper bound on the number of cycle iterations, or choosing the most convenient deployment configuration. The authors used the AEmilia ADL for the specifications. Description generation and analysis can be performed in order to choose the best solution.

Adapt Case Modeling Language: High-Quality Specification of Self-Adaptive Software Systems, presented by Markus Luckey from University of Paderborn, Germany: the goal is to support the engineering of self-adaptive systems at design time in order to reduce complexity and avoid errors. Adaptation concerns are separated from the application logic, and the authors propose a method and a language for this that integrate with standard UML practices for the development of the application itself. The approach includes model checking tools that can indicate problems at design time. It builds on work presented at SEAMS 2011.

Session 3: Learning and Updates

Formalizing Correctness Criteria of Dynamic Updates Derived from Specification Changes, presented by Valerio Panzica La Manna from Polytechnic University of Milan, Italy: the motivation is unanticipated changes in the environment or requirements of systems that must operate continuously and cannot be stopped to be evolved. The proposal consists of dynamic updates that are safe (they do not lead to erroneous behavior) and are applied as soon as possible. The domain of the work is open reactive systems controlled by a finite state machine and operating in an uncontrolled environment. It builds on work by the same authors presented at SEAMS ’12, which does not allow dynamic updates for relevant classes of specification changes. This work adds poly-updatable states, weakly-updatable states and cycle-agnostic updatable states as additional criteria for dynamic update.

Run-time Adaptation of Mobile Applications using Genetic Algorithms, presented by Gustavo Pascual from University of Málaga, Spain: the growing market for mobile devices and the challenges in modeling Dynamic Software Product Lines (DSPLs) motivated this work. Handling variation points at runtime and optimizing the architectural configuration is hard; the latter is an NP-hard problem that would be intractable especially on mobile devices (which are resource-constrained). The proposal is therefore to use a genetic algorithm to find nearly-optimal solutions at runtime. The benefit is not only that it performs better, but also that it calculates the difference between the old and new configurations very easily (which is necessary in order to generate a reconfiguration plan for DSPLs).
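As an illustration of the general idea only (not the authors’ algorithm), the sketch below uses a toy genetic algorithm to pick a feature configuration under a hypothetical resource budget; all features, utilities and costs are invented:

```python
import random

# Toy genetic algorithm for picking a product-line configuration at runtime.
# Each gene says whether an optional feature is active; fitness rewards utility
# and heavily penalizes resource usage beyond a (made-up) budget.

FEATURES = [  # (name, utility, resource cost) -- hypothetical values
    ("gps", 5, 4), ("wifi", 4, 3), ("bluetooth", 2, 1),
    ("camera", 3, 3), ("sync", 2, 2), ("voice", 4, 5),
]
BUDGET = 9

def fitness(cfg):
    utility = sum(f[1] for f, on in zip(FEATURES, cfg) if on)
    cost = sum(f[2] for f, on in zip(FEATURES, cfg) if on)
    return utility - max(0, cost - BUDGET) * 10

def crossover(a, b):
    cut = random.randrange(1, len(a))
    return a[:cut] + b[cut:]

def mutate(cfg, rate=0.1):
    return [not g if random.random() < rate else g for g in cfg]

population = [[random.random() < 0.5 for _ in FEATURES] for _ in range(20)]
for _ in range(50):                      # a few cheap generations
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [mutate(crossover(random.choice(parents),
                                             random.choice(parents)))
                            for _ in range(10)]

best = max(population, key=fitness)
print([f[0] for f, on in zip(FEATURES, best) if on], fitness(best))
```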

Guaranteeing Robustness in a Mobile Learning Application Using Formally Verified MAPE-K Loops, presented by Didac Gil De La Iglesia from Linnaeus University, Sweden: developing self-healing systems with MAPE-K is very difficult and generates models that are hard to maintain. The requirements for such systems include separation of concerns, ease of analysis, and validation of quality properties, and MAPE-K helps only with the first. The proposal is then to model the behavior of the components of the MAPE-K loop as separate automata and to check properties using the Uppaal verification tool. Quality properties can be specified in this tool as well, so they can be verified. The designed components were implemented in Jade.

Day 2: Tuesday, May 21st

Keynote: Science in the Cloud
Joseph L. Hellerstein (Google, Inc.)

Hellerstein’s keynote was not directly related to the topic of adaptive systems, but it was nonetheless very interesting. He talked about biochemical research, a field that collects lots of data in its experiments. This data is collected very rigorously, but the computational support (e.g., calculations done in MATLAB) is, most of the time, ill organized and thus not repeatable.

Joe explained the usual methodology for computational discovery: identify empirical results for model calibration; select/refine computational models; run simulations; check whether you need to change/refine the model (loop A) or do more calibration (loop B); run prediction simulations; confirm them in the lab (restart if not confirmed, loop C); finally, publish. There is a need for computational tools to help with these three loops (A, B and C). This is where feedback loops and adaptation/automation would really be useful (this was the only direct mention of adaptive systems in the keynote).

Considering all of this, science in the cloud could help with burst capacity, reproducibility, sharing and efficient use of scarce research money. The challenges are: fine-grained parallelism; letting scientists focus on science, not distributed programming; scalability of interactive exploratory analysis; introspective batch processing and multi-cloud support (no single cloud will hold all the data); systematic model development calibrated with empirical data; testing; integrating models at multiple levels of granularity; schema standardization; and creating a culture of software engineering among scientists (producing high-quality software is currently not a well-recognized activity for researchers). For the last item, he mentioned the Software Carpentry initiative to make scientists better software/data engineers.

Hellerstein concluded by saying that biochemistry is a good opportunity for Computer Science research: the structure of molecules is like the structure of software (is-a, has-a, relationship multiplicities), bio pathways (function) are workflows with twists (bi-directional, self-modifying code), and so on.

Session 4: Case studies and decision-making

Engineering Adaptation with Zanshin: an Experience Report — my second full paper at the symposium, authored with Genci Tallabaci and presented by myself. A preprint version is also available.

Diagnosing architectural run-time failures, presented by Paulo Casanova from Carnegie Mellon University, USA (winner of the symposium’s best paper award): in the context of the Znn.com case study, the authors listed challenges for autonomic diagnosis: identifying failures (detecting that something is wrong) and performing diagnosis (pinpointing the source of the failure). For the former, correctness and monitoring occur at different levels of abstraction, and concurrency makes it difficult to identify which observations relate to the relevant behavior. For the latter, there may be multiple explanations for a failure, and the diagnosis should be produced in useful time. The authors then propose an approach in which a recognizer observes low-level events (system calls), compares them to a behavior model and infers which computations relate to relevant behavior (abstraction). It then sends them to an oracle, which compares the computations to correctness criteria (derived from requirements) to classify whether the computations are correct or not (classification). Finally, a fault localizer identifies where the failures are (localization). The oracle works by analyzing many computations (diagnosis with a single one is not possible) and applying probabilities to reduce the set of components that could be the culprit. According to Casanova, the major contribution of the work was recognizing high-level architectural computations from low-level events with an algorithm that returns in useful time.
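The sketch below illustrates only the localization step, in the spirit of spectrum-based fault localization (the computations, components and the Ochiai-like score are my own simplification, not the authors’ algorithm): given computations already classified by the oracle and the components each one traversed, rank the components by how suspicious they are:

```python
# Sketch of probabilistic fault localization over classified computations.
# Each computation lists the architectural components it traversed and whether
# the oracle classified it as correct. All observations are hypothetical.

computations = [
    ({"dispatcher", "server1", "db"}, True),
    ({"dispatcher", "server2", "db"}, False),
    ({"dispatcher", "server2", "cache"}, False),
    ({"dispatcher", "server1", "cache"}, True),
]

components = set().union(*(comps for comps, _ in computations))

def suspiciousness(component):
    failed_with = sum(1 for comps, ok in computations if component in comps and not ok)
    passed_with = sum(1 for comps, ok in computations if component in comps and ok)
    total_failed = sum(1 for _, ok in computations if not ok)
    # Ochiai-like score: high when the component appears mostly in failed computations.
    denom = (total_failed * (failed_with + passed_with)) ** 0.5
    return failed_with / denom if denom else 0.0

ranking = sorted(components, key=suspiciousness, reverse=True)
print([(c, round(suspiciousness(c), 2)) for c in ranking])
```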

Dynamic Decision Networks for Decision-making in Self-adaptive Systems: a Case Study, from Aston University, Birmingham, UK: in the context of volcano monitoring, where different realizations of the same goal have opposite effects on desired softgoals (NFRs), the authors focus on how architectural decisions (configurations) impact the satisficing of NFRs. Having the system make decisions at runtime in the presence of uncertainty makes this problem harder. The research question, then, is: should we use probability theory to describe the lack of crispness and the uncertainty in the satisfiability of NFRs? Instead of ++ and -- labels in contribution links to softgoals, the authors propose to use probabilities. Decision making is then done with Dynamic Decision Networks (DDNs). The motivation behind this work was the authors’ realization that many characteristics of DDN problems are similar to problems tackled in adaptive systems.
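A minimal sketch of the underlying intuition (not the authors’ DDN model, which also captures how beliefs evolve over time): if each configuration is annotated with the probability of satisficing each NFR, decision-making reduces to picking the configuration with the highest expected utility. The configuration names, probabilities and weights below are made up:

```python
# Sketch of decision-making with probabilistic contribution links (illustration only).
# Each architectural configuration gives a probability of satisficing each NFR.

p_satisfice = {  # hypothetical numbers
    "many_sensors_high_rate": {"accuracy": 0.95, "battery_life": 0.30},
    "few_sensors_low_rate":   {"accuracy": 0.60, "battery_life": 0.90},
}
nfr_weights = {"accuracy": 0.7, "battery_life": 0.3}

def expected_utility(config):
    probs = p_satisfice[config]
    return sum(nfr_weights[n] * probs[n] for n in nfr_weights)

best = max(p_satisfice, key=expected_utility)
print(best, {c: round(expected_utility(c), 2) for c in p_satisfice})
```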

Session 5: Services

On Estimating Actuation Delays in Elastic Computing Systems, presented by Alessio Gambi from the University of Italian Switzerland (USI), Lugano: elastic computing systems, usually deployed in the cloud, can dynamically (de)allocate resources. The problem is that controllers assume that actions take immediate effect, which is not true; they should take into account the delay between decision and effect. The challenges regarding actuator delays in these applications are: no reliable metric, many factors impacting the delays, shared clouds (which means noise) and transitory effects. A good solution would use indirect observations, be generally applicable, deal with noise, and use contextual information. The authors therefore propose to observe the monitored system metrics, identify the changes that actuators caused on them, and elaborate and combine these to estimate future actuator delays.
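The sketch below shows one naive way to estimate such a delay from indirect observations (a hypothetical metric trace and threshold; it is not the authors’ estimator): look for the first sustained change in a monitored metric after the actuation command was issued:

```python
# Sketch of estimating an actuator's delay from indirect observations (general idea only).

def estimate_delay(samples, action_time, threshold, hold=3):
    """samples: list of (timestamp, metric value); threshold: change considered 'real';
    hold: how many consecutive samples must confirm the change (to filter out noise)."""
    baseline = [v for t, v in samples if t <= action_time]
    base = sum(baseline) / len(baseline)
    after = [(t, v) for t, v in samples if t > action_time]
    for i in range(len(after) - hold + 1):
        window = after[i:i + hold]
        if all(abs(v - base) >= threshold for _, v in window):
            return window[0][0] - action_time   # time from command to visible effect
    return None                                 # no effect observed yet

# Hypothetical CPU-utilization trace around a scale-out command issued at t=10.
trace = [(t, 90) for t in range(0, 11)] + [(t, 88) for t in range(11, 14)] + \
        [(t, 60) for t in range(14, 20)]
print(estimate_delay(trace, action_time=10, threshold=10))
```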

Self-Adaptive and Sensitivity-Aware QoS Modeling for the Cloud, presented by Tao Chen from University of Birmingham, UK: the context of this work is quality of service (QoS) in the cloud, which could be optimized using hardware or software control primitives while considering environmental primitives. The challenges are: which primitives are relevant, i.e., which ones correlate with QoS provision? When do these primitives correlate with QoS? And how can the uncertainty of QoS provision be apportioned to, and made sensitive to, these primitives? The proposal is an approach that uses machine learning techniques to build an adaptive, sensitivity-aware QoS model that can be used by cloud applications for QoS provision.
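As a toy illustration of the “which primitives are relevant?” question only (the paper uses self-adaptive machine-learning models, not simple correlation), one could correlate each monitored primitive with the observed QoS; all names and numbers below are hypothetical:

```python
# Minimal sketch: rank primitives by how strongly they correlate with a QoS metric.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs) ** 0.5
    vy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (vx * vy) if vx and vy else 0.0

# Hypothetical monitoring data: control/environmental primitives and observed QoS.
observations = {
    "threads":      [4, 8, 8, 16, 16, 32],
    "cache_mb":     [64, 64, 128, 128, 256, 256],
    "workload_rps": [100, 180, 150, 300, 280, 500],
}
response_time = [210, 190, 170, 160, 130, 125]

relevance = {name: abs(pearson(values, response_time))
             for name, values in observations.items()}
print(sorted(relevance.items(), key=lambda kv: kv[1], reverse=True))
```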

QoS-Aware Fully Decentralized Service Assembly, presented by Vincenzo Grassi from University of Rome “Tor Vergata”, Italy: the scenario for this work is a dynamic set of agents, each offering a specific service and able to enter/leave the system at will. In this context, producing fully resolved assemblies of services, in which non-atomic services (those that depend on other services) have their dependencies resolved, is not an easy task. Moreover, non-functional requirements must be taken into account and only the currently available services can be considered. Furthermore, all of this should be done through decentralized self-assembly (no external control, dynamic operation, no central control). Based on the three-layer architectural model of Kramer & Magee, and concentrating their contribution on the middle layer (change management), the authors propose to model the utility (quality) of components and then combine them to obtain the overall utility of each assembly (or sub-assembly, recursively). This combination is done using a function that can be changed depending on the case. Distribution of state information among services is done using a gossip protocol; services update themselves (choosing better dependencies) as the information reaches them.
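A minimal sketch of the recursive utility-combination idea (the gossip-based dissemination is omitted, the combination function and service graph are made up, and this is not the authors’ algorithm):

```python
# Sketch of recursively combining service utilities into an assembly utility.

# Hypothetical services: own quality in [0, 1] and the services they depend on.
services = {
    "frontend": {"quality": 0.9,  "deps": ["catalog", "payments"]},
    "catalog":  {"quality": 0.8,  "deps": ["db"]},
    "payments": {"quality": 0.7,  "deps": ["db"]},
    "db":       {"quality": 0.95, "deps": []},
}

def combine(own, dep_utilities):
    # Pluggable combination function; here: own quality weighted with the
    # average utility of the resolved dependencies.
    if not dep_utilities:
        return own
    return 0.5 * own + 0.5 * (sum(dep_utilities) / len(dep_utilities))

def assembly_utility(name):
    svc = services[name]
    return combine(svc["quality"], [assembly_utility(d) for d in svc["deps"]])

print(round(assembly_utility("frontend"), 3))
```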

Session 6: Evolution

Improving Context-Awareness in Self-Adaptation Using the DYNAMICO Reference Model, presented by Gabriel Tamura from Icesi University, Colombia: the motivation for this work is uncertainty management (highly dynamic contexts), considering that context-awareness is fundamental for adaptive software systems. The authors previously proposed DYNAMICO, a reference model for context-aware self-adaptation in which adaptation goals and monitoring requirements change dynamically. Now, an implementation of DYNAMICO has been proposed and evaluated using Znn.com.

Law and Adaptivity in Requirements Engineering — finally, this is my short paper, authored with Silvia Ingolfo. I presented this work as well. Check out the preprint version here.

Towards Run-time Testing of Dynamic Adaptive Systems, presented by Erik Fredericks from Michigan State University, USA: the motivating question was “how do we provide a measure of run-time assurance for adaptive systems?”, to which the authors answer: by exploring techniques from the testing domain. This short paper analyzed how existing assurance techniques could be ported from design time to run time. Identified challenges include: which test cases to generate? (the system, requirements and environment can change) When to test? (testing could impact the running system) How to test? (extending current run-time assurance techniques for adaptive systems) How to determine the impact of test results? The high-level proposal is a MAPE-T loop, in which T stands for Testing Knowledge. This loop would run in parallel with MAPE-K and reuse its infrastructure. The authors propose directions for building the MAPE-T loop.

RPC Automation: Making Legacy Code Relevant, presented by Andreas Bergen from University of Victoria, Canada: the final (short) paper of the symposium focused on converting monolithic legacy applications into self-adaptive distributed systems that benefit from today’s technology (cloud, virtualization, big data, large numbers of user interactions). Breaking legacy code into distributed systems is an old idea (e.g., RPCGEN); the new idea is to do it semi-automatically. Experiments show that the proposal considerably reduces developer interaction. Changing the communication method from RPC to alternatives also improves performance. The proposal uses dynamic profiling to make the system adaptive (deciding whether to execute locally or remotely).

ICSE Track on Adaptation

Born as an ICSE workshop, SEAMS continued to be co-located with the main conference even after becoming a symposium. In ICSE 2013 there was a technical session focused on the theme of adaptation. Four papers were presented in this session.

Managing Non-functional Uncertainty via Model-Driven Adaptivity, presented by Giordano Tamburrelli from the University of Italian Switzerland (USI) in Lugano: in the context of using adaptive systems to deal with complexity and uncertainty, the goal of this work is to formalize and automate non-functional adaptivity. The proposal is called ADAM (ADAptive Model-driven execution) and is based on model transformation techniques and probability theory. The modeling part consists of creating an annotated UML activity diagram (e.g., <<optional>> activities) whose branches can have probabilities assigned, plus an annotated implementation (methods are annotated with the impact they have on NFRs). A generator then converts the activity diagram into an MDP (Markov Decision Process), obeying the optionality and probability annotations and incorporating the NFR annotations that the developers added to the implementation. PRISM then calculates the possible values for the different executions. Finally, an interpreter navigates the model to execute it, invoking the implementation when necessary. It maximizes the contribution to NFRs based on the contribution of each branch (an aggregation function combines them) and the current context information.

GuideArch: Guiding the Exploration of Architectural Solution Space under Uncertainty, presented by Naeem Esfahani from George Mason University, USA: the author started the presentation with a disclaimer saying that the work is not really about adaptation (although it was placed in this session by the ICSE organizers). The motivation for this paper is that architectural decisions affect properties of the system (the examples cited look like NFRs). Specifying this impact at the beginning of the project (what the author called “early architecture”) is hard. How can we help architects make these decisions early on? The author proposes to use “Computation with Words” to assign qualitative values to this impact. GuideArch uses this to help the architect go from rough notions of the impact to more precise values. Intuitions are represented as fuzzy values, and there is a transformation process from words to fuzzy values. The various properties are combined into an overall quality of the system. Constraints can also be specified to exclude some alternative architectures. The remaining architectures can then be ranked based on their overall quality and the best one chosen.
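As a rough illustration of ranking under uncertainty (GuideArch’s actual Computation with Words machinery is richer than this), the sketch below represents each impact as a triangular fuzzy number, filters alternatives by a constraint, and ranks the rest by the centroid of their combined score; all alternatives and numbers are invented:

```python
# Sketch of ranking architectural alternatives whose property impacts are given as
# triangular fuzzy numbers (min, most likely, max). Illustration only.

def add(a, b):      # sum of two triangular fuzzy numbers
    return tuple(x + y for x, y in zip(a, b))

def defuzzify(t):   # centroid of a triangular fuzzy number
    return sum(t) / 3.0

# Hypothetical alternatives with fuzzy scores (higher is better) for two properties.
alternatives = {
    "local_db": {"performance": (6, 8, 9), "scalability": (2, 3, 5)},
    "cloud_db": {"performance": (4, 6, 8), "scalability": (7, 8, 9)},
    "hybrid":   {"performance": (5, 7, 8), "scalability": (5, 6, 8)},
}

def overall(props):
    total = (0, 0, 0)
    for fuzzy in props.values():
        total = add(total, fuzzy)
    return defuzzify(total)

# A constraint can exclude alternatives before ranking, e.g. minimum scalability >= 4.
feasible = {name: p for name, p in alternatives.items() if p["scalability"][0] >= 4}
ranking = sorted(feasible, key=lambda n: overall(feasible[n]), reverse=True)
print(ranking)
```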

Coupling Software Architecture and Human Architecture for Collaboration-Aware System Adaptation, presented by Christoph Dorn from Vienna University of Technology, Austria: the context here is large-scale systems that increasingly rely on user collaboration (e.g., Wikipedia, Facebook, etc.). The specific problem targeted is that software-level adaptation is unaware of collaboration implications; it is limited to the technical/software system. The proposal is to couple collaboration patterns and software architecture for co-adaptation of socio-technical systems. The baseline is architectural models in xADL, and the proposal is called hADL (Human Architecture Description Language). With it, human users and the way they interact with one another can be included in the architectural model. xADL and hADL models are linked together (including the cardinality and interaction point of each mapping). At runtime, a MAPE loop harnesses the linked model during all steps of the feedback loop in order to adapt to undesired system states.

Learning Revised Models for Planning in Adaptive Systems, presented by Daniel Sykes from Imperial College London, UK: the goal of this work is to have the behavior models (Markov chains) on which an adaptive system is based evolve when the environment changes, so that the system adapts itself. At runtime, monitoring reports the actual (Markov) traces of the program, and the proposed solution updates the model accordingly. To deal with contradicting traces, the approach assigns probabilities in the new model depending on whether the rules in the model explain the observed traces.
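A minimal sketch of the general idea (the paper works at the level of model rules and their preconditions rather than raw transition counts): count the transitions observed in runtime traces and blend them with the prior model to obtain a revised probabilistic model. The states, traces and blending factor below are made up:

```python
# Sketch of revising a probabilistic behavior model from observed traces.
from collections import Counter, defaultdict

prior = {  # prior Markov-chain transition probabilities (hypothetical)
    "idle":    {"moving": 0.9, "idle": 0.1},
    "moving":  {"arrived": 0.8, "blocked": 0.2},
    "blocked": {"moving": 1.0},
}

traces = [  # observed runtime traces contradicting the prior (the robot gets blocked often)
    ["idle", "moving", "blocked", "moving", "arrived"],
    ["idle", "moving", "blocked", "moving", "blocked", "moving", "arrived"],
]

counts = defaultdict(Counter)
for trace in traces:
    for src, dst in zip(trace, trace[1:]):
        counts[src][dst] += 1

def revised(alpha=0.5):
    """Blend prior probabilities with observed frequencies (alpha = trust in the data)."""
    model = {}
    for src, dists in prior.items():
        total = sum(counts[src].values())
        model[src] = {dst: (1 - alpha) * p +
                           (alpha * counts[src][dst] / total if total else 0.0)
                      for dst, p in dists.items()}
    return model

print(revised()["moving"])
```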
