Summary of the 38th International Conference on Conceptual Modeling (ER 2019)

From November 4th to 7th I participated in the 38th International Conference on Conceptual Modeling (ER 2019) in Salvador, BA – Brazil. Here are quick summaries of the presentations that I attended during the event.

 

Day 1: Monday, November 4th

The first day was dedicated to the workshops of the conference. Busy day! There were many great workshops I wanted to attend, but I had a paper to present, a student of mine had two other papers and, in the afternoon, I had a workshop to co-chair. Here’s what I could grasp:

iStar Workshop:

I started at the iStar Workshop. After a quick opening, César Bernabé (NEMO/UFES) presented Using GORO to provide ontological interpretations of iStar constructs. In his work, César used the Goal-Oriented Requirements Ontology (see OntoCom below) to analyze iStar models and formulate three hypotheses about the interpretation of its constructs (see here for details). During the discussion, John Mylopoulos agreed with the hypothesis that what iStar calls a refinement is not actually a refinement and pointed out the importance of looking at the definition of the relations in the language. Wilco Engelsman argued that languages actually need to reduce the number of constructs and become simpler (e.g., why have both goal and task constructs?).

Right after César, I presented the paper iStar2.0-OWL: an Operational Ontology for iStar, a collaboration with CIn-UFPE and the work of Camilo Almendra (UFC Quixadá), supervised by Carla Silva. The work proposes an operational ontology that represents the definitions and rules of iStar in order to be able to validate models, using technologies such as OWL, SWRL and SQWRL (details here). During the discussion, João Pimentel asked if we had plans to integrate this work with tools for the automatic generation of OWL instances and model validation, which was a good suggestion for future work. John asked if we had considered other formalisms that allow the proper definition of metaclasses and so on, which is another thing to consider as future work.
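To give an idea of what this kind of OWL-based validation can look like in practice, here is a minimal sketch (my own illustration, not the authors’ implementation) using the owlready2 Python library: it loads a hypothetical OWL file containing the iStar 2.0 ontology together with a model encoded as individuals, runs the Pellet reasoner (which also applies SWRL rules), and reports unsatisfiable classes as likely rule violations. The file name and the assumption that violations surface as inconsistent classes are mine.

```python
# Minimal sketch (illustration only, not the paper's implementation):
# load an OWL ontology encoding iStar 2.0 rules plus a model expressed as
# individuals, run a DL reasoner (Pellet also applies SWRL rules), and
# report unsatisfiable classes as likely rule violations.
from owlready2 import get_ontology, sync_reasoner_pellet, default_world

# Hypothetical file containing both the ontology and the model to validate.
onto = get_ontology("file://istar2-owl-with-model.owl").load()

with onto:
    sync_reasoner_pellet(infer_property_values=True,
                         infer_data_property_values=True)

for cls in default_world.inconsistent_classes():
    print("Possible iStar rule violation involving:", cls)
```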

After my presentation I switched rooms to OntoCom (Workshop on Ontologies and Conceptual Modeling), where César was going to present his other paper (see below). After César’s presentation, I came back to continue following the iStar Workshop.

Jaelson Castro (UFPE) presented Addressing Symbol Redundancy Representations in iStar Extensions. Based on a Systematic Literature Review in which 96 iStar extensions were identified, the authors analyzed extensions that used redundant symbols, i.e., the same concept was represented by different symbols in different extensions. They then conducted a survey with 83 participants in order to mitigate symbol redundancy in 8 concepts, asking the participants to express their preference among the redundant symbols.

Last presentation before lunch, Roxana Portugal (PUC-RJ) presented A critical view over iStar visual constructs, a paper from Julio Leite‘s group (Julio was coming but had last-minute problems). Using the work of Daniel Moody (Physics of Notations), in particular an analysis he conducted of iStar constructs, the authors disagree with some of Moody’s suggestions and argue for keeping the constructs as they currently are.

OntoCom Workshop:

In the middle of the morning, I quickly attended OntoCom (Workshop on Ontologies and Conceptual Modeling) in order to support César in the presentation of his paper GORO 2.0: Evolving an Ontology for Goal-Oriented Requirements Engineering. In this work, César proposes an ontology that defines the semantics behind constructs of Goal-Oriented Requirements Engineering (GORE) languages. During discussion, there were questions regarding the actual difference between goals and tasks (there’s usually confusion because modelers tend to name goals and tasks in a very similar way), regarding why the Motivation Layer of ArchiMate was left out of ontology elicitation (due to the scope being set at languages that were specific to GORE), and regarding plans for actually using GORO in real-world Requirements Engineering projects. Also, in the context of the first question, it was suggested that we analyze datasets of models in order to extract how people use GORE languages.

MREBA Workshop:

After lunch, I co-chaired MREBA (Workshop on Conceptual Modeling in Requirements Engineering and Business Analysis). We had 4 accepted papers in the workshop, divided into two sessions (one with three papers, another with just one, due to space constraints for the conference workshops).

First, Geert Poels (Ghent University) presented Early Identification of Potential Distributed Ledger Technology Business Cases Using e3value Models. Research on student projects at Ghent University showed that e3value models are useful for modeling projects based on DLT (Distributed Ledger Technology, such as blockchain). DLT is a disruptive technology, but very few DLT projects survive for long. This prompted the question: can conceptual modeling help identify sustainable business cases for implementing DLT? The proposal is to use e3value to model a successful DLT business case, then abstract a model fragment with early indications of a DLT business case, and then apply that fragment to a real case using the Kodak platform.

Then, Wilco Engelsman presented Realizing Traceability from the Business Model to Enterprise Architecture. From his past experience in industry working with ArchiMate at BIZZdesign, Wilco detected a lack of constructs to represent business requirements/goals, so research started on this topic with the proposal of the Armor language. After a hiatus in this research, ArchiMate had already incorporated related concepts in its Motivation Layer. He then targeted his research at realizing traceability from the business model to the enterprise architecture, using the e3value language. He mapped e3value concepts to ArchiMate in order to be able to go from an ArchiMate goal model (Motivation Layer) to an e3value model, and finally to an ArchiMate business layer model.

Before we stopped for the coffee break, Vik Pant presented Towards a Catalog of Goals for Strategic Coopetition. Coopetition is the combination of cooperation and competition in which peers (e.g., companies) all come out a little better in the end, in a win-win situation. Previous work proposed a method for generating win-win strategies using iStar. In that context, he built a catalog of goals on both cooperation and competition, based on an exploratory literature review over Google Scholar. Papers were ranked by relevance, refining the model progressively and resolving conflicts by analyzing textual majority. He then conducted a case study with a real-world case in which the catalog was used to insert coopetition aspects into an AS-IS iStar Strategic Rationale model in order to arrive at an improved TO-BE situation.

Finally, after the coffee break, Abhimanyu Gupta (Ghent University) presented Creation of Multiple Conceptual Models from User Stories – A Natural Language Processing Approach. They propose a conceptual model to represent the contents of user stories and BDD acceptance criteria, which is used as basis for processing the natural language text and generating four types of models based on the analyzed contents: entity-relationship diagram (domain model), BPMN process model, state machine model and use case model.
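As a rough illustration of the kind of natural language processing involved (my own toy example, not the authors’ pipeline), the canonical user story template can already be decomposed with a simple pattern match, yielding the role, action and benefit that could later feed model elements such as actors, entities and use cases:

```python
# Illustration only (not the authors' pipeline): extracting role, action and
# benefit from a user story written in the canonical template.
import re

STORY_PATTERN = re.compile(
    r"As an? (?P<role>.+?), I want(?: to)? (?P<action>.+?)"
    r"(?:,? so that (?P<benefit>.+))?\.?$",
    re.IGNORECASE,
)

def parse_user_story(text: str) -> dict:
    match = STORY_PATTERN.match(text.strip())
    if not match:
        raise ValueError(f"Not in user-story format: {text!r}")
    return {k: (v.strip() if v else None) for k, v in match.groupdict().items()}

print(parse_user_story(
    "As a customer, I want to track my order, so that I know when it arrives."
))
# {'role': 'customer', 'action': 'track my order', 'benefit': 'I know when it arrives.'}
```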

Back to iStar Workshop:

After MREBA concluded, I went back to the iStar Workshop for the discussion session. João Pimentel, the chair, organized a fishbowl panel in which people could take turns in sitting in chairs in front of the audience and present their view on what they expected for the future of iStar.

(See also Renata’s notes on the panel)

John Mylopoulos mentioned ethical and attitudinal requirements, which are the kinds of things that systems need to model nowadays (e.g., ethics in AI systems). Eric Yu mentioned that iStar was inspired by the definition of AI at the time it was proposed (goals and plans to achieve these goals), then wondered: “can it be used to represent what AI is today and start with data instead of goals? Can there be reconciliation between these two views?” Xavier Franch mentioned the importance of an ontological definition of the constructs of the language, in order to have a complete definition of iStar 2.0, and also argued for more empirical evidence. Renata Guizzardi suggested the community keep discussing the definition of the language (as Xavier said), starting small but providing semantics to the concepts a few at a time. Jaelson Castro talked about the importance of providing more resources to newcomers, worried that the community is getting thinner, then called on me to express my views. I went along the same lines as Xavier, saying I’d like to see more use of iStar by industry, in actual practice.

After the panel, we had the joint iStar/MREBA keynote from Matthias Jarke: Inter-Organizational Data Sharing: From Goals to Policies to Code.

(See also Renata’s notes on this keynote)

Business ecosystems are a multilateral form of organizing for customer innovation (e.g., the rail industry). In this context, data sharing is essential, but it comes with many issues, such as trust and security, data sovereignty and interoperability. From this follows a long list of requirements with respect to data sharing in business ecosystems: e.g., data must be available on demand, data events should be logged, access and usage rights should be customizable, and so on. There are some initiatives going on regarding these challenges (e.g., the International Data Space Association).

Prof. Jarke proceeded with an overview of the current status of this field (“From Goals to Policies to Code”), showing academic proposals and discussing the technology that is actually used in industry, in particular talking about data flows, semantic interoperability and layered multi-view information metamodels. He mentioned existing business ecosystems such as Skywise in the airline industry, DataConnect in the farming industry and Nevada in the automobile industry. Finally, he mentioned the GAIA-X Reference Architecture, a virtual hyperscaler for obtaining performance in alliance-driven data ecosystems.

Veda C. Storey Keynote:

(See also Renata’s notes on this keynote)

Veda Storey (Georgia State University) gave the first keynote of the main conference, entitled Data Management in the Digitalization Era. Veda started the keynote by describing the role of data in today’s digital world. She moved on to talk about data management, in particular data challenges, such as sheer volume, discovery and interpretation of data, etc. She went through a brief history of the field, which started with a focus on modeling, abstraction and representation. She mentioned the 4S framework: semantics, structure, syntax and situation. Then came big data and its 5V challenges: volume, velocity, variety, veracity and value. She proposes to combine the 4S framework with the 5Vs of Big Data. Finally, there’s the era of digitalization, in which many different aspects of processes and operations across business and society are being digitalized. At this point we need to check which challenges remain and which new challenges present themselves.

Veda then proposes a new framework with three dimensions: the 4S dimension (increasing difficulty: semantics, structure, syntax, situation), the environment dimension (increasing complexity: closed, open) and the socio-technical dimension (increasing automation: people, task, structure, technology). She mentioned disruptive technologies, such as blockchain, and how they impact the environment (imagine applying the idea of blockchain to the task of obtaining a degree, removing the University from the environment). She then showed that some of the traditional challenges of data management (within the 4S framework) still apply in this new case.

She concluded with a discussion about the role of data in emerging opportunities, citing examples in agriculture and health care.

 

Day 2: Tuesday, November 5th

On the second day the technical sessions of the main conference and the tutorials started, plus we had some more keynotes.

Marco A. Casanova Keynote:

The program started with the keynote from Marco Casanova (PUC-RIO), entitled Keyword Search over RDF Datasets. Casanova started by motivating the use of keyword search by comparing a traditional database interface for searching (e.g., filling in forms to look for a flight) vs. using an information retrieval interface (e.g., Googling for the flight). Keyword search can offer database users interfaces as simple as those in information retrieval. He then also motivated the use of RDF: it makes no distinction between data and metadata, so keywords may match value, class or property; keyword search can be rephrased as a graph search; RDF may or may not have a schema, facilitating incremental construction of the database. He summarized his own talk as an invitation to RDF keyword search, based on the experience with industry-strength prototypes.

After some quick basic definitions about RDF, he defined keyword search over RDF: the query is a set of keywords that is matched against data in the dataset, and the answer is a subgraph of the RDF dataset that connects the nodes matching the keywords. Given a keyword-based query K over an RDF dataset D, find an answer for K over D that maximizes the keyword matches and minimizes the number of triples. The challenges here are: keyword matching (not covered in the talk), node connection, and node ranking.

He talked about two approaches for node connection. In the schema-based approach: (1) find keyword matches; (2) use the matches to select classes and properties; (3) use the schema to join the selected classes and properties; (4) compile a SPARQL query; (5) run it. It seems simple, but it can get very complicated depending on the query. In the graph-based approach (tensor-based or graph-summary-based): pre-compute a graph summary, e.g. with KMV-synopses, and use it to compile a SPARQL query; the synopsis basically uses the domain and range of properties, without the schema. The graph-based approach works better than the schema-based approach in benchmarks. The challenges here are the large number of classes and properties, and refreshing the synopses when the RDF dataset changes.
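To make the end of the schema-based pipeline concrete, here is a toy sketch (mine, not Casanova’s prototypes) of steps (4) and (5): once the keyword matches have selected a class and a property, a SPARQL query is compiled and run, here with the rdflib Python library. The file name and the example IRIs are made up.

```python
# Toy sketch (not the industry-strength prototypes from the talk): compiling
# and running a SPARQL query after keyword matches selected a class and a
# property. Dataset file, class and property IRIs are hypothetical.
from rdflib import Graph

g = Graph()
g.parse("dataset.ttl", format="turtle")  # hypothetical RDF dataset

# Suppose the keywords matched the class ex:Movie and the property ex:director.
query = """
PREFIX ex: <http://example.org/>
SELECT ?movie ?director
WHERE {
    ?movie a ex:Movie ;
           ex:director ?director .
    FILTER(CONTAINS(LCASE(STR(?director)), "kubrick"))
}
"""

for row in g.query(query):
    print(row.movie, row.director)
```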

Regarding node ranking: PageRank doesn’t work as-is, but it can be adapted into InfoRank, based on the intuitions that important things have detailed descriptions, important classes have important instances, and important properties connect important instances. InfoRank is then used to match keywords, to select classes and properties, to find the best joins and to rank the final answers. He talked about two further challenges: the Entity Relatedness Problem (there are lots of paths between two entities, but the paths can be ranked) and the Serendipitous Search Problem (“the art of making unsought findings”; a 1994 article describes serendipity patterns, some of which are easy to implement).
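As a baseline for what InfoRank adapts, this is what plain PageRank over a tiny entity graph looks like with networkx (just the unadapted starting point; InfoRank’s actual formulation adds the intuitions above and is described in the papers):

```python
# Baseline only: plain PageRank over a small, made-up entity graph.
# InfoRank adapts this idea; this snippet shows just the starting point.
import networkx as nx

G = nx.DiGraph()
G.add_edges_from([
    ("StanleyKubrick", "2001ASpaceOdyssey"),
    ("StanleyKubrick", "TheShining"),
    ("2001ASpaceOdyssey", "ArthurCClarke"),
    ("TheShining", "StephenKing"),
])

ranks = nx.pagerank(G, alpha=0.85)
for node, score in sorted(ranks.items(), key=lambda kv: -kv[1]):
    print(f"{node}: {score:.3f}")
```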

He briefly mentioned other challenges not covered in the talk: keyword disambiguation/expansion, result summarization, “big data” keyword search, “data lake” keyword search.

During the discussion, John asked: RDF has less expressivity than other formalisms that already exist; how do you see the future of RDF? Casanova’s answer: RDF’s lack of expressivity is convenient in some cases, but when needed we should move to OWL 2, using a subset of it with enough expressivity and deductive power.

Technical Session #1: Conceptual Modeling

First in this session, Camila Aguiar (NEMO/UFES) presented OOC-O: a Reference Ontology on Object-Oriented Code. She presented a reference ontology that represents the concepts behind object-oriented source code, elicited from an analysis of popular and traditional OO programming languages. An operational version of the ontology was also implemented in OWL and used in practice in another work from our group to translate object-oriented code from one framework/platform (e.g., Hibernate/Java) to another (e.g., Django/Python). Complete specifications and a harmonization among the different OO languages are available at our project’s website. During the discussion, Óscar Pastor suggested that we use not only a bottom-up approach, but also a top-down approach, starting from UFO and analyzing the OO concepts from an ontological perspective.

Next, Claudenir Fonseca (University of Bolzano, Italy, formerly at NEMO/UFES) presented Relations in Ontology-Driven Conceptual Modeling. The work is motivated by limitations of OntoUML/UFO in practice, in particular the presence of what he called “systematic subversions”, i.e., intentional misuse of OntoUML constructs or intentional violation of constraints in order to achieve a valid conceptualization of a domain that the language doesn’t support. Current limitations of OntoUML are that material relations are too restrictive (they only accommodate two-sided relations) and formal relations are too permissive (they cover relations of essentially distinct natures). Beyond these limitations, what are the truthmakers of relations? The work thus revises UFO’s taxonomy of Relation, Endurant Type and Endurant, providing new OntoUML stereotypes, so now you can use: comparative, material, characterization, mediation, external dependency and historical. Both a formalization in UFO and a CASE tool for OntoUML have been developed. (See also Renata’s notes on this presentation)

Claudenir continued “on stage” to present Capturing Multi-Level Models in a Two-Level Formal Modeling Technique. The motivation for this work is domains that challenge the two-level divide (classes and instances). A usual workaround is powertypes, but it comes with limitations. A proper language to specify such models is ML2, but what if you need to work with a two-level schema language? Claudenir then presented design transformation principles to fit ML2 models into UML. The proposal was implemented in the ML2 Editor, including autocomplete, error highlighting, live MLT*-based validation and Alloy transformation for model testing. (See also Renata’s notes on this presentation)

Finally, Hasan Jamil (University of Idaho, USA) presented An SQLo front-end for Relational Databases with Non-Monotonic Inheritance and De-referencing. His work is motivated by the fact that there hasn’t been a definitive approach for representing inheritance and object navigation in relational databases. He proposes the SQLo language, which allows one to express these concepts in DDL and DML queries that get translated to relational SQL.

Tutorials

In the afternoon, I attended two tutorials, but decided not to take notes on them (lots of content over a long period of time, and I was already quite tired). However, Renata did take and publish her notes.

 

Day 3: Wednesday, November 6th

On the third day, I attended more technical sessions and keynotes, but also took the opportunity to have face-to-face meetings with people I usually only meet over the Internet.

Barbara Weber Keynote:

The program started with the keynote from Barbara Weber (University of St. Gallen, Switzerland), entitled Next Generation Modeling Environment. She started with a quick summary of her past work on the Cheetah Platform, in which a development environment (the context is Software Engineering and program comprehension) is instrumented in order to collect interaction data, using technologies such as eye tracking and feature extraction (machine learning).

Based on this data, they try to classify modelers on the fly (context detection). Novices and experts differ in their cognitive processes, so modeling environments should adapt to whoever is sitting in front of them. Also, there are differences in brain activity across the different phases of the Software Engineering process, so automatic modeling phase detection was implemented. Modeling environments should have phase-specific support as well.

She finally arrived at her suggestion for a next-generation modeling environment: a neuro-adaptive modeling environment that performs online model analytics based on data collected from increasingly available biosensors. The environment should detect cognitive load, emotions and attention level, in order to establish a mental state and adjust to context. High cognitive load leads to poor task performance and wrong decisions. She then proposed an architecture for a neuro-adaptive modeling environment, explaining the key components.

She closed the talk with some comments on hybrid process representations. In parallel to adjusting to the user (expertise level, modeling phase, mental state), modeling environments should also have hybrid representations (she showed examples of hybrid representations: model and test case; declarative model, diagram and simulation, and so on). This should be guided by insights from other disciplines that can tell us about the human cognitive process regarding modeling tasks.

Technical Session #7: Domain Specific Models II

The session started with Jan Ladleif (University of Potsdam, Germany) presenting A Unifying Model of Legal Smart Contracts. Currently you can generate smart contracts (e.g., on the Ethereum blockchain) from behavior models (e.g., BPMN choreography diagrams). However, these models don’t include aspects such as laws, attachments, regulations and obligations. Currently, people use Ricardian triples in smart contracts to connect legal prose (non-operational, e.g. legal ontologies), code (operational, e.g. choreography diagrams) and their parameters, but these have some limitations (e.g., granularity, lack of consideration for data other than parameters, etc.). The work extends this existing model, proposing a unifying model of legal smart contracts. They used the model to evaluate 8 existing smart contract specification languages in terms of their support for the model constructs: roles, data sources, legal states, meta-rules, actions, conditions, etc.

Next, Dalay Israel de Almeida Pereira (IFSTTAR) presented Formal Specification of Environmental Aspects of a Railway Interlocking System Based on a Conceptual Model. Relay-based Railway Interlocking Systems are used by infrastructure managers such as SNCF (the French railway manager). Such systems use a formal specification to guarantee safety, but there are some limitations. This work augments the specification with three new conceptual models in order to improve the formal specification with more safety properties. The work is based on the Dysfunctional Analysis Ontology (DAO, grounded in UFO) and the specification model was checked and simulated with ProB, guaranteeing that it doesn’t reach a dangerous state.

Finally, Anna Bernasconi (Politecnico di Milano, Italy) presented From a Conceptual Model to a Knowledge Graph for Genomic Datasets. There are many challenges in genomics data integration: many data sources, different data types, many interfaces, formats and terminologies. In order to deal with some of these issues, they propose an approach that integrates the different data sources, performing cleaning and ontology annotation, and generates two types of interfaces: relational-based (GenoSurf) and graph-based (Neo4j). Based on a genomic conceptual model, they arrive (through mapping, annotating and enriching) at a genomic knowledge graph and provide users with explanations, control over inference and exploration capabilities.
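For a flavor of what the graph-based interface enables (a purely hypothetical sketch; I don’t know the actual schema of their knowledge graph), a client could query it through the Neo4j Python driver, with node labels, relationship types and connection details made up here for illustration:

```python
# Hypothetical sketch (not the authors' actual schema or deployment):
# querying a genomic knowledge graph via the Neo4j Python driver.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "password"))  # made-up credentials

cypher = """
MATCH (d:Dataset)-[:ANNOTATED_WITH]->(t:OntologyTerm {name: $term})
RETURN d.name AS dataset, t.name AS term
"""

with driver.session() as session:
    for record in session.run(cypher, term="breast carcinoma"):
        print(record["dataset"], record["term"])

driver.close()
```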

ER Symposium on Conceptual Modeling Education (SCME)

My laptop ran out of battery, so I took it to my room to charge it and didn’t make any notes. But, as usual, Renata took excellent notes on the panel which, as you can see in her blog, had excellent questions and discussion.

C. Mohan Industrial Keynote:

The day’s program ended with the keynote from C. Mohan (IBM, USA and Tsinghua University, China), entitled State of Permissionless and Permissioned Blockchains: Myths and Reality (videos, slides and bibliography available). Mohan started by saying he is a low-level database guy, that “Ted” Codd was a colleague of his and that SQL was also born in his department. Also, blockchain is not the topic he works on the most, but it’s a hot topic right now, so it gets more attention.

On to the topic of the keynote, he started by explaining what blockchain (BC) is, its origin in cryptocurrencies (Bitcoin), and all the ways in which this is wrong. His focus is on permissioned/private BC systems (PBCs): you’re part of the network if there’s a reason for it, identities are known, there’s better performance/scalability, controlled information and deterministic behavior, and greedy behaviors are avoided (no rewards or fees). Permissioned BCs build on basic business concepts: a network of businesses and customers, participants with identity, assets flowing over the network, transactions describing exchanges or changes of state, smart contracts regulating them, and so on. He argued in favor of using permissioned BCs instead of permissionless BCs.

The current status of permissioned BCs is that few systems have been released, and users are on their own to figure out how to use them effectively and to compare systems. There is great momentum behind Hyperledger Fabric (IBM, Oracle, Baidu, Amazon, Microsoft, etc.). He also addressed a long list of permissionless BC myths, continuing to criticize what he called the religious speech of cryptocurrency people.

He then moved on to explain how the Hyperledger Fabric ledger works (details in the slides), the potential of such technology once you fix the problems of permissionless BCs, and the appropriate use cases for permissioned BCs. Overall, it was a very dense keynote, with a lot of information in a very short amount of time.

 

Day 4: Thursday, November 7th

The fourth day included an industrial panel and the last technical sessions of the conference.

Industrial panel:

The industrial panel was composed of Óscar Pastor (University of Valencia, Spain), Karin Breitman (Rio Tinto) and C. Mohan (IBM, USA and Tsinghua University, China).

The main question of the panel was “How to apply Conceptual Modeling (CM) in industry?” The question could be divided into three axes: technical (what we use in practice); the interface between technical and business; and people (the profiles industry is looking for).

Óscar started with a few questions and his answers to them. In summary:

  1. What are the main inhibitors of modeling in practice?
    • Software Engineering is not recognized in practice as true engineering;
    • It is strongly dependent on skilled professionals;
    • Lack of a CM perspective: product focus instead of process focus;
    • Lack of a universal, widely-based, ontologically-supported definition.
  2. What could be done to improve the popularity of CM in practice?
    • Conceptual Programming (CP)-based tools;
    • Stress the importance of CM/CP in Software Engineering.
  3. What lessons did you learn from teaching CM?
    • Big difference in CM abilities among students;
    • Should a Software Engineer graduate without a solid CM ability?
  4. What is an especially promising research direction in CM?
    • CM of Life;
    • The role of CM in guiding/leading digital transformation.
  5. What are the current methods, tools and technology in use, especially as they relate to modeling machine learning applications?
    • Explainable AI is a big opportunity for CM;
    • Promising areas for using models@run-time: Big Data, the human genome and precision medicine implications, Enterprise Modeling, alignment between enterprise models and software applications, and going from requirements to code.

Then, Karin described her background as a professor/researcher and her current role as Head of Analytics Centre of Excellence at Rio Tinto. She then proceeded to the questions of the panel.

  • How to use CM in industry? Two main applications of CM in the industry she works for: capturing processes and integrating data;
  • What’s the role of CM in digital transformation? There is no transformation if you don’t perform integration;
  • About people: industry looks for people who have the capability to abstract and conceptualize and, most importantly, to translate the technical (Software Engineering) skills to the business you work in, in order to produce value (e.g., save money). Soft skills make a big difference (communication, prioritization).

Finally, C. Mohan commented on what the other two had said, mentioning the importance of CM in explainable AI, as deep learning is a black box and sometimes just having an answer is not enough.

Again, Renata’s notes on this panel are more detailed than mine. 🙂

Technical Session #12: Requirements modeling

First, Sotirios Liaskos (York University, Canada) presented Factors affecting comprehension of contribution links in goal models: an experiment. Given the proposals for contribution links (both symbolic and numeric) and propagation rules, the work measured the intuitiveness of these representations with an experiment intended to answer two research questions (RQ1: compare symbolic and numeric models; RQ2: what factors affect intuitiveness?). They measured accuracy, efficiency, method confidence and response confidence, and considered factors such as representation method and semantics, mathematics anxiety, cognitive style and the approach followed. The setup consisted of giving 12 goal models from 3 domains (day-to-day subjects) in different sizes to about 100 subjects (students) and asking them to choose the hardgoal that best satisfies a root softgoal refined into sub-softgoals, which propagate satisfaction towards the root softgoal. Results show that less math anxiety led to more accuracy, the numeric group was more accurate than the symbolic group, and those working methodically were more accurate than those working intuitively.
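To make the idea of numeric propagation concrete, here is a toy illustration (my own, not the exact rules or models used in the experiment) in which each softgoal’s satisfaction value is the weighted sum of its contributors’ values, clipped to [-1, 1]:

```python
# Toy illustration (not the experiment's exact propagation rules): numeric
# contribution propagation over a tiny, made-up goal model.
contributions = {
    # (source, target): contribution weight in [-1, 1]
    ("Use encryption", "Security"): 0.8,
    ("Use encryption", "Performance"): -0.4,
    ("Cache results", "Performance"): 0.6,
    ("Security", "User trust"): 0.9,
    ("Performance", "User trust"): 0.5,
}

def propagate(values: dict) -> dict:
    """One step: recompute every target that has at least one valued contributor."""
    result = dict(values)
    for target in {t for (_, t) in contributions}:
        incoming = [(src, w) for (src, t), w in contributions.items() if t == target]
        if not any(src in values for src, _ in incoming):
            continue
        total = sum(w * values.get(src, 0.0) for src, w in incoming)
        result[target] = max(-1.0, min(1.0, total))
    return result

# Tasks chosen by the modeler start fully satisfied (value 1.0).
values = {"Use encryption": 1.0, "Cache results": 1.0}
values = propagate(values)  # computes Security and Performance
values = propagate(values)  # computes User trust from the previous step
print(values)
```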

Then, Maria Lencastre (Universidade de Pernambuco, Brazil) presented iStar-p: A Modelling Language for Requirements Prioritization. Given the many concerns in requirements prioritization (e.g., the number of requirements to consider, lack of resources, high customer expectations, etc.), the work proposes to contribute to the preparatory steps of prioritization: selecting stakeholders, determining the involved requirements, selecting a prioritization technique and defining prioritization criteria, in order to support strategic planning of requirements prioritization through a visual modeling language. To build it, the authors (1) performed a systematic mapping of the literature and interviews to identify the main concepts to consider; (2) defined 7 essential concepts; (3) proposed the modeling language iStar-p; (4) evaluated it; and (5) created an iStar-p metamodel. The language was evaluated in two experiments with 8 and 15 participants, respectively. The proposal was considered easy to apply and useful, and it increases the transparency of the prioritization process by explicitly expressing the factors used to calculate priorities.

Next, Xavier Franch (Polytechnic University of Catalonia, Spain) presented On the Use of Requirement Patterns to Analyse Request for Proposal Documents. In a call for tenders process, a public organization publishes a request for proposals (RfP) and different IT providers propose bids that are evaluated by the public administration to select the best one. Since public organizations often publish RfPs on the same domain, a catalog of reusable requirements could be used. Proposals have already been made to help public organizations reuse requirements when publishing RfPs, but what about the IT providers? They propose a pattern-based approach for assessing RfPs from the perspective of IT providers in the context of multiple call for tender processes in the same domain. They use the PABRE approach for requirements patterns, previous work from the same group, and applied it in a case study with Siemens in the railway domain.

Finally, João Araújo (Universidade Nova de Lisboa, Portugal) presented iStar4RationalAgents: Modeling Requirements of Multi-Agent Systems with Rational Agents. The work is set in the context of MAS-ML 2.0, a model-driven approach to developing Multi-Agent Systems; since existing proposals focus on architecture and development, the work proposes an extension to model requirements. They used PRISE (a systematic way of proposing iStar 2.0 extensions) and offer tool support based on piStar. An experiment was conducted with 22 participants from 13 universities and 3 companies to evaluate the extension.