Tutorials Programme

Tutorials will be held at Polytech'Montpellier in conjunction with the three co-located conferences (ECMFA, ECOOP and ECSA) on Monday and Tuesday, 1-2 July 2013.

Registration/Check-in starts at 7:15am in the same building.

Room assignment is available below.

Monday, 1st July 2013:

Tuesday, 2nd July 2013:

Abstracts of the tutorials:

  • "How to Implement Domain-Specific Modeling Languages: Hands-on"
    Abstract: A horrible lie exists in our industry today: it says that defining a graphical DSL is difficult and time-consuming. In this tutorial, we will lay bare this fallacy and demonstrate how simple and quick it is to create domain-specific modeling languages and their generators. Using a hands-on approach, you will define several modeling languages and generators within a few hours, learning principles and best practices proven in industrial experience in domains such as telecom, consumer electronics and home automation. The tutorial teaches and trains you in the practical, repeatable steps to invent and implement your own modeling language. The language definition process reveals the characteristics of modeling languages that enable truly model-driven engineering, in which working code is generated from models:
    - Modeling language based on concepts of the problem domain rather than the solution domain (code)
    - Scope of the language narrowed down to a particular domain
    - Language minimizes the effort needed to create, update and check the models
    - Language supports communication with users and customers

    At the end of the tutorial you will have implemented several versions of the language, each time raising the level of abstraction closer to the problem domain.
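
    To make the generator idea concrete, here is a minimal, self-contained sketch in Scala (our own illustration with an invented home-automation mini-language, not material from the tutorial, which uses a dedicated DSM tool): the model is plain data, and the generator is a traversal of the model that emits working code.

    ```scala
    // Invented mini-language for home automation: the "model" is plain data.
    final case class Sensor(name: String, threshold: Int)
    final case class Alarm(message: String)
    final case class Rule(sensor: Sensor, alarm: Alarm)

    object Generator {
      // The "generator": a traversal of the model that emits code.
      def generate(rules: Seq[Rule]): String =
        rules.map { r =>
          s"""if (read("${r.sensor.name}") > ${r.sensor.threshold}) notify("${r.alarm.message}")"""
        }.mkString("\n")

      def main(args: Array[String]): Unit = {
        val model = Seq(Rule(Sensor("livingRoomTemp", 28), Alarm("Too hot!")))
        println(generate(model))
      }
    }
    ```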

  • "Category Theory and Model-Driven Engineering: From Formal Semantics to Design Patterns and Beyond"
    Abstract: There is a hidden intrigue in the title. CT is one of the most abstract mathematical disciplines, sometimes nicknamed "abstract nonsense". MDE is a recent trend in software development, industrially supported by standards, tools, and the status of a new "silver bullet". Surprisingly, categorical patterns turn out to be directly applicable to the mathematical modeling of structures appearing in everyday MDE practice. Model merging, transformation, synchronization, and other important model management scenarios can be seen as executions of categorical specifications. Moreover, the tutorial aims to elucidate the claim that the relationships between CT and MDE are richer and more complex than is normally assumed for "applied mathematics". CT provides a toolbox of design patterns and structural principles of real practical value for MDE. We will present two examples of how an elementary categorical arrangement of a model management scenario (change propagation and heterogeneous model merge) reveals deficiencies in the architecture of modern tools automating these scenarios.
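
    To give a flavor of what a categorical arrangement can look like in code, here is a minimal sketch (our own illustration, not material from the tutorial): model transformations as the arrows of a category, with identities and associative composition.

    ```scala
    // Model transformations as arrows of a category (illustrative sketch).
    trait Transformation[A, B] { self =>
      def apply(a: A): B
      // Arrow composition; associativity holds by construction.
      def andThen[C](g: Transformation[B, C]): Transformation[A, C] =
        new Transformation[A, C] { def apply(a: A): C = g(self(a)) }
    }

    object Transformation {
      // The identity arrow required for every object (here, every model type).
      def id[A]: Transformation[A, A] =
        new Transformation[A, A] { def apply(a: A): A = a }
    }
    ```

    Scenarios such as change propagation can then be specified as equations over such arrows, independently of how each individual arrow is implemented.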
  • "Scala as a Research Tool"
    Abstract: Programming languages and their implementations are not only the object of PL research, but typically also the most important tool for PL researchers. However, working with industrial-strength languages and compilers can be tedious and time-consuming because of the complexity of the underlying implementations.
    The Scala programming language has several properties that make it particularly attractive as a research platform: the ability to implement powerful programming models as libraries as opposed to language extensions, an expressive type system that can be used to enforce a variety of program properties, good support for asynchronous, concurrent, and parallel programming, among others. At the same time, Scala is open-source and is widely used in industry, which makes it not only possible but often much easier to evaluate new ideas on real-world software projects, with greater potential impact.
    This tutorial aims to introduce the Scala programming language to the programming languages researcher, with some focus on concurrent and parallel programming. About 60% of the tutorial will consist of a general introduction to Scala for the PL researcher, while the other 40% will cover different areas of PL research in which Scala is used.
    The first part of the tutorial will introduce participants to the fundamentals of the Scala programming language, and will move on to cover features, tools, and libraries that could be useful in their own research. This includes: concurrency utilities in the Scala standard library, techniques and methodologies for building domain-specific languages in Scala, powerful new techniques for compile-time meta-programming, tools for benchmarking, and more.
    The second part of the tutorial will show how and why research has been conducted using Scala in a number of fields, including: type systems, compilation techniques, concurrent/parallel data-structures, concurrent/parallel/asynchronous libraries, domain-specific languages, distributed programming, and usability. Where possible we outline self-contained artifacts that participants can reuse and build upon in their own research.
    Throughout, we will step through three detailed examples that demonstrate the introduced concepts, including a simplified distributed collection and a compile-time serialization check implemented as a library.
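
    As a small taste of the standard-library concurrency utilities mentioned above, here is a self-contained example composing scala.concurrent futures (our own minimal illustration, not the tutorial's material):

    ```scala
    import scala.concurrent.{Await, Future}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.duration._

    object FutureDemo extends App {
      // Two asynchronous computations...
      val price    = Future { 21 }
      val quantity = Future { 2 }

      // ...composed declaratively with a for-comprehension.
      val total = for { p <- price; q <- quantity } yield p * q

      println(Await.result(total, 2.seconds)) // prints 42
    }
    ```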
  • "From Use Cases to Java in a Snap"
    Abstract: Model-Driven Software Development promises to shorten the software development lifecycle. This promise rests on the ability to define software models at one level of abstraction and then automatically generate more detailed models (including code). In this tutorial we will show that this is possible even from the level of use cases. We will demonstrate how to write use case scenarios that are understandable by business experts in a wide range of domains and at the same time precise enough for automatic transformations. In this demonstration we will apply the Requirements Specification Language (RSL), which is defined with a strict meta-model. We will also demonstrate and conduct exercises using a novel tool (ReDSeeDS) that allows for specifying RSL-based models and then translating them into design models and fully dynamic Java code. We will show that from use-case-based requirements associated with conceptual domain models, the whole code for the application logic, and even the user-interface forms, can be produced automatically. Moreover, the domain logic code can be derived from the requirements-level vocabulary and verb phrases.
    The first part of the tutorial will introduce all the elements necessary to automate the path from use cases to Java code. We will discuss the prerequisites for automating the creation of such software cases; namely, we will explain the level of precision and the extensions needed for the use case models. This first part will also introduce a case study example, including its subject area and structure.
    The second part of the tutorial will present the Requirements Specification Language and all of its important constructs. The presentation will concentrate mostly on the elements of the language that make it versatile, i.e. suitable for various problem domains, and will explain how to organise the domain vocabulary and link it with the requirements representations. The tutorial will also briefly explain the definition of the language and its meta-model. In this part we will introduce the ReDSeeDS tool (http://www.redseeds.eu/) used throughout the presentation, and show how to use it to support precision in defining scenarios and in hyper-linking with the vocabulary. This will include a larger case-study example with several use cases linked through various relationships.
    The third part of the tutorial will present the mechanisms enabling automatic translation from RSL models into code. We will explain general and detailed rules for transforming requirements into detailed design models, including the rules for transforming the regular subject-verb-object sentences of use case scenarios (a toy illustration follows this abstract) and the rules for generating decision logic from conditional scenario sentences. By applying these rules, partial code of the final system can be generated: a full code skeleton containing all the classes with their operations, attributes and other declarations. Moreover, the dynamic code (function calls, decision statements, etc.) of the methods at the application logic layer can be generated automatically.
    During the tutorial, the transformations introduced above will be performed on the case study example started in the first part. We will briefly present the transformation code, written in the MOLA language, and run it in the ReDSeeDS tool. We will then discuss the results of the transformation, visualised in a UML tool, together with the generated code, and point out the fragments of code (methods for domain logic operations and for the presentation layer) that still necessitate additional "manual" programming.
    The tutorial will conclude with a discussion of the presented approach, analysing the change it brings to software development practice from the point of view of different roles (analysts, designers and programmers). This analysis will be supported by the results of validating the presented approach, performed by several industry organisations within the ReDSeeDS and REMICS projects (under the 6th and 7th Framework Programmes of the European Union) and through comparative experiments with students.
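
    Purely to illustrate the flavor of the sentence-to-code mapping (this sketch is not RSL or MOLA, its names are invented, and ReDSeeDS itself emits Java), a scenario sentence can be treated as structured data that drives skeleton generation:

    ```scala
    // A subject-verb-object scenario sentence as structured data.
    final case class Svo(subject: String, verb: String, obj: String)

    object ScenarioGen {
      // "book list" -> "BookList" pieces for building an identifier.
      private def camel(words: String*): String = {
        val parts = words.flatMap(_.split("\\s+")).map(_.toLowerCase)
        (parts.head +: parts.tail.map(_.capitalize)).mkString
      }

      // "System shows book list" -> an operation skeleton on class System.
      def operation(s: Svo): String =
        s"def ${camel(s.verb, s.obj)}(): Unit = ??? // in class ${s.subject}"

      def main(args: Array[String]): Unit =
        println(operation(Svo("System", "shows", "book list")))
    }
    ```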
  • "The UML Testing Profile - A Language for Model-Based Testing"
    Abstract: In the last few years, the principles of model-driven software engineering have been integrated with software quality assurance approaches, first and foremost Model-Based Testing (MBT). However, opinions differ sharply on what exactly MBT is, what kinds of models should be used, and which notation is most appropriate for keeping the complexity of large-scale software-based systems manageable. As far back as 2001, a dedicated working group at the Object Management Group (OMG) started collecting industry-accepted testing concepts and practices in order to make them available via the Unified Modeling Language (UML). These efforts resulted in the adoption of the UML Testing Profile (UTP) by the OMG in 2005.
    UTP is a standardized language, based on UML, for designing, visualizing, specifying, analyzing, constructing, and documenting the artifacts commonly used in and required for testing software-based systems. UTP allows representing test objectives and testing logic at an abstract level. Test models, test configurations, test logs, etc. expressed with UTP are independent of a concrete test methodology, application domain, testing tool, or target technology. At the same time, UTP is strongly geared towards industrial application due to its relation to UML, the de-facto standard for model-driven engineering in industry. This tutorial provides a comprehensive overview of UTP and discusses how UTP can be used in a given test process.
  • "Fundamental Practices of Software Developers"
    Abstract: The tutorial surveys developers' tasks in agile, iterative, and evolutionary software development. A particularly important task is software change (SC), in which developers add a new feature, correct a bug, or add a new property to software. Several process models of SC have been proposed, among them TDD and the "legacy code change algorithm"; the tutorial will present these as specific cases of a more general phased model of software change (PMSC).
    PMSC starts with initiation that includes activities of requirements elicitation, analysis, tasking, and prioritization. The result is a specific change request.
    The next phase is concept location in which the developer finds the core classes to be changed. Concept location may be an easy task in small programs, but it can be a very difficult task in large programs.
    Impact analysis determines classes where secondary changes are to be made. It starts with the classes identified by concept location, looks at interacting classes, and decides to what degree they are also affected. Impact analysis, together with concept location, constitutes the design of software change, where the strategy and extent of the change is determined.
    Actualization implements the new functionality either directly in the old code, or separately and then integrates it with the old code. The change can have repercussions in other parts of software; change propagation makes the secondary modifications.
    Refactoring changes the structure of software without changing the functionality. If it is done before the actualization (prefactoring), it prepares the old code for the actualization; for example, it gathers various pieces of the changing functionality into a single class and makes the actualization localized (a minimal sketch follows this abstract). After actualization, postfactoring cleans up the accumulated technical debt.
    Verification aims to guarantee the quality of the work, and it interleaves with all code-modifying phases. Verification techniques include unit, functional, and structural testing, as well as inspections.
    Conclusion is the last phase; the programmers commit new code into a version control system, create the new baseline, update the documentation, prepare a new software release, and so forth.
    The next part of the tutorial is oriented towards software engineering instructors who want to teach software developer skills applicable to agile, iterative, and evolutionary software development. It presents the format of lectures that cover the theoretical parts, and of exercises and projects that give students hands-on experience. Experience from Wayne State University will be communicated in this section.
    The last part of the tutorial presents the research agenda related to PMSC. The research questions include: What are the open problems in concept location, impact analysis, refactoring, etc.? What should an integrated software environment support? What empirical techniques are applicable to this research? What is the role of user studies, software repository mining, and other empirical techniques?
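
    As a minimal illustration of prefactoring (an invented example, not from the tutorial): pricing logic duplicated across methods is first gathered into one class, so that the subsequent change, e.g. a new tax rate, stays localized.

    ```scala
    // Before prefactoring: the tax computation is duplicated inline.
    object Before {
      def invoice(qty: Int, unit: Double): Double = qty * unit * 1.2
      def quote(qty: Int, unit: Double): Double   = qty * unit * 1.2
    }

    // After prefactoring: the changing functionality has a single home,
    // so the actualization (e.g. a new tax rate) touches one line.
    object Pricing {
      val TaxRate = 1.2
      def gross(qty: Int, unit: Double): Double = qty * unit * TaxRate
    }
    object After {
      def invoice(qty: Int, unit: Double): Double = Pricing.gross(qty, unit)
      def quote(qty: Int, unit: Double): Double   = Pricing.gross(qty, unit)
    }
    ```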
  • "Research 2.0 for Software Engineering: SHARE & 101companies"
    Abstract: While Research 2.0 initiatives receive a lot of attention from Life Sciences, Physics, Astronomy or Mathematics, the Software Engineering (SE) community seems to be lagging behind. This tutorial aims to tackle this problem and demonstrate concrete instances of Research 2.0 for SE. The tutorial is organized by three enthusiasts from the SE community who want (1) to explain how Research 2.0 principles could actually be applied in the context of Software Engineering, (2) to show why sharing and linking research is so important in practice, and (3) to illustrate these principles by demonstrating the results of two concrete community-oriented projects, namely SHARE and 101companies. We present the emerging best practices and show how participants can use these systems within the context of their own research.
  • "Abstract Behavioral Modelling of Variant-Rich, Concurrent Software Systems"
    Abstract: ABS (for Abstract Behavioral Specification) is a new modeling language particularly suitable for distributed software systems that exhibit a high degree of variability.
    ABS is an executable, yet abstract, modelling language. Executability entails the capability to generate self-contained source code (Java and Scala are among the supported backends). Information about the runtime environment, scheduling strategies, etc. is reflected in ABS models in a suitably abstract manner. At the same time, ABS incorporates compositional, data and concurrency abstractions that enable a range of fully automated dynamic and static analyses, including resource analysis.
    Software variability is captured in ABS by feature models that are connected to behavioral units via code deltas (a loose code analogy follows this abstract). Delta-oriented programming is a recent instance of feature-oriented programming and enables ABS to model software product lines. Hence, ABS makes it possible to model systems "end-to-end", from feature models to executable code, in a single, fully formalized language. The ABS language realizes a new software modelling paradigm that we call abstract behavioral modelling. There are several usage scenarios:
    - rapid prototyping including simulation and code generation;
    - reverse engineering and documentation of legacy software;
    - model-centric development, analysis, and validation of software systems.
    An implementation of ABS is available as a plugin for the Eclipse IDE and is integrated with many analysis and productivity tools. In the tutorial we give an introduction to the main concepts of ABS, show how to use ABS as a modeling language, and illustrate the capabilities of the tool set with live and/or hands-on demos. We will also briefly report on case studies where ABS was used to model commercial production code.
    The ABS language is an outcome of the EC FP7 Integrated Project HATS (Highly Adaptable and Trustworthy Software using Formal Models). More information on ABS is available at http://www.hats-project.eu
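
    ABS has its own concrete syntax for deltas; as a loose analogy only (Scala traits standing in for deltas applied to a core, not ABS syntax), the following sketch shows how product variants are assembled from a base and feature modules:

    ```scala
    // Core behavior.
    class Account { def fee(amount: Int): Int = 0 }

    // "Deltas" as stackable trait modifications of the core.
    trait WithFee extends Account {
      override def fee(amount: Int): Int = super.fee(amount) + 1
    }
    trait WithDiscount extends Account {
      override def fee(amount: Int): Int = math.max(super.fee(amount) - 1, 0)
    }

    // Products: different variants assembled from the same core.
    object Products extends App {
      println((new Account).fee(100))                                // 0
      println((new Account with WithFee).fee(100))                   // 1
      println((new Account with WithFee with WithDiscount).fee(100)) // 0
    }
    ```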
  • "Empirical Research Methods in SE: An Introductory Tutorial"
    Abstract: The empirical research paradigm is rapidly gaining acceptance and influence in Software Engineering research, as demonstrated by rising impact factors, more widespread use, and the almost ubiquitous requirement of "evaluation" in calls for papers (e.g., MODELS, VL/HCC). Thus, all of us now face empirical work one way or another, be it as empirical researchers or as reviewers of work containing an element of empirical research. However, empirical research is not yet widely taught to our graduate students.
    The tutorial focuses on the practical aspects of empirical research; that is, there will be only short introductory lectures on the various topics. After that, participants will work hands-on with real case studies (some of which derive from the presenter's own work). In particular, the tutorial participants will
    - study existing experimental setups with a view to finding shortcomings and threats to validity and how they impact the claims put forward in a given article;
    - develop a small experimental setup for a given case study in small groups and present their design in a plenary discussion; and
    - define and refine a research question, and discuss alternative approaches to providing evidence for or against it.
    Of course, there is more to empirical research than can be taught in 4 hours. Thus, the objective of this course is only to provide its attendees with a starting point, removing obstacles and filling knowledge gaps that might be there, and to equip aspiring researchers with some skills and first practical exercises in conducting methodologically sound research. After attending this course, participants
    - will be aware of the potential and limitations of empirical research methods;
    - are capable of choosing an appropriate empirical research method for a given problem; and
    - can assess the quality of the empirical research reported in an article, such as for a review.
    Given the time restriction, the practical exercises will focus mostly on controlled experiments; other paradigms will be covered only by lecture-format introductions. We will use various articles by Basili, Kitchenham, and Runeson as teaching material, and the presenter's own research for case studies. We recommend the following textbooks for preparatory or additional reading.
    - C. Wohlin, P. Runeson, M. Höst, M.C. Ohlsson, B. Regnell, A. Wesslén: Experimentation in Software Engineering: An Introduction. Kluwer Academic Publishers, 2000
    - J. Lazar, J. H. Feng, H. Hochheiser: Research Methods in Human-Computer Interaction. Wiley, 2010
    - S. J. Taylor, R. Bogdan: Introduction to Qualitative Research Methods. Wiley, 1984
  • "Model-Based Variability Management"
    Abstract: The customization of almost everything can be observed in a wide range of domains. Many organizations must address the challenge of extending, changing, customizing or configuring numerous kinds of systems and artefacts (requirements, components, services, languages, architectural or design models, code, user interfaces, etc.) for use in a particular context. As a result, modeling and managing the variability of such systems and artefacts is a crucial activity in a growing number of software engineering contexts (e.g., software product lines, dynamic adaptive architectures). Numerous model-based techniques have been proposed; they usually consist of (i) a variability model (e.g., a feature model), (ii) a model (e.g., a class diagram) expressed in a domain-specific modeling language (e.g., the Unified Modeling Language), and (iii) a realization layer that maps and transforms variation points into model elements (a minimal sketch of these three ingredients follows this abstract). Based on a selection of desired features in the variability model, a derivation engine can automatically synthesise customized models, each corresponding to an individual product.
    In this tutorial, we present the foundations and tool-supported techniques of state-of-the-art variability modeling technologies. In the first part, we briefly exemplify the management of variability in some systems/artefacts (design models, languages, product configurators). We introduce the Common Variability Language (CVL), a representative approach and an ongoing effort, involving both academic and industry partners, to promote the standardization of variability modeling technology. In the second part, we focus on feature models, the most popular notation for formally representing and reasoning about the commonality and variability of a software system. We present feature modelling languages and tools that are directly applicable to a wide range of model-based variability problems and application domains. The FAMILIAR language and environment is used to perform numerous management operations on (multiple) feature models, such as importing, exporting, composing, decomposing, editing, configuring, computing diffs, refactoring, reverse engineering, testing, and reasoning. We describe their theoretical foundations, efficient implementations, and how these operations can be combined to realize complex variability management tasks. In the third part, we show how to combine feature models with other modeling artefacts. We revisit the examples given in the first part of the tutorial, using the Kermeta workbench and familiarCVL, an implementation of CVL. Finally, we present some of the ongoing challenges for variability modeling.
    At the end of the tutorial, participants (whether practitioners or academics, beginners or advanced) will have learned languages, tools and novel variability modeling techniques that they can directly use in their industrial contexts or as part of their research.
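
    To fix the ideas, here is a minimal sketch of the three ingredients in plain Scala (the features and the constraint are invented; real tooling such as FAMILIAR and CVL is far richer): a variability model, a constraint, and a derivation step mapping a valid configuration to product artefacts.

    ```scala
    object FeatureModel {
      // (i) Variability model: the available features.
      val features = Set("core", "network", "ssl")

      // (ii) A constraint: selecting "ssl" requires "network".
      def valid(selection: Set[String]): Boolean =
        selection.subsetOf(features) &&
          (!selection.contains("ssl") || selection.contains("network"))

      // (iii) Realization layer: a valid configuration is mapped to
      // the concrete artefacts of one derived product.
      def derive(selection: Set[String]): Seq[String] = {
        require(valid(selection), s"invalid configuration: $selection")
        selection.toSeq.sorted.map(f => s"include module $f")
      }

      def main(args: Array[String]): Unit =
        derive(Set("core", "network", "ssl")).foreach(println)
    }
    ```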
  • "Enforcing Security and Dependability Patterns in Embedded System Design" cancelled
    Abstract: We are studying an approach that uses an integrated repository of models to capture several concerns of safety-critical embedded systems. We promote the use of patterns as first-class artifacts to embed solutions for extra-functional concerns such as the safety, security and performance requirements of systems; to specify the set of correct configurations; and to capture the execution infrastructure of the systems, supporting the mechanisms needed to implement these concerns.
    The main focus of this tutorial is making security and dependability (S&D) expert knowledge available to embedded systems engineering processes by means of patterns. Special emphasis will be devoted to promoting the use of patterns for engineering S&D embedded systems, and particular attention will be paid to the potential benefits of combining Model-Driven Engineering with pattern-based representations of security and dependability solutions. We cover three topics: (1) methodology, (2) modeling languages, and (3) the tool chain.
    Pattern-Based System Engineering (PBSE). A methodology based on a repository was specified. This engineering methodology fully takes into account the need for separation of roles by defining three distinct processes: the pattern modeling process, the repository specification process, and system development based on the reuse and integration of patterns. Formal validation was successfully applied within the process.
    Domain-Specific Modeling Languages. A set of modeling languages was specified for describing patterns, S&D properties and the structure of repositories. In addition, transformation and instantiation rules targeting multiple modeling environments (IDEs) were specified.
    MDE tool chain. An accompanying tool suite based on, and built for, MDE was implemented targeting both S&D solution developers (the engineers who create S&D patterns) and system developers. It includes a repository of patterns and models for reuse (Gaya), a tool for designing S&D patterns with support for repository interactions (Arabion), a tool for specifying libraries of properties and constraints (Tiqueo), and access tools for transforming the Gaya representation into representations consistent with specific design environments (mostly Rhapsody and Papyrus) and with the process of the targeted application domain.
    This work was carried out in the context of the FP7 TERESA (Trusted computing Engineering for Resource constrained Embedded Systems Applications) project (http://www.teresa-project.org/). The overall objective of this project is to enforce Security and Dependability (S&D) in resource-constrained embedded systems (RCES) with Model-Driven Engineering: (1) the use and reuse of S&D solutions in the form of patterns, (2) a design process for S&D patterns, (3) a model-based repository of S&D patterns for RCES, (4) the formalization of S&D properties at the pattern and design levels, and (5) the study of reusing S&D mechanisms in RCES.
  • "Developing and Using Pluggable Type Systems"
    Abstract: A pluggable type system extends a language's built-in type system to confer additional compile-time guarantees. We will explain the theory and practice of pluggable types. The material is relevant for researchers who wish to apply type theory, and for anyone who wishes to increase confidence in their code. After this session, you will have the knowledge to: analyze a problem to determine whether pluggable type-checking is appropriate; design a domain-specific type system; implement a simple type-checker (in as little as 4 lines of code!); scale a simple type-checker to more sophisticated properties; and better appreciate both object-oriented types and flexible verification techniques.
    While the theory is general, our hands-on exercises will use a state-of-the-art system, the Checker Framework. The Checker Framework works for the Java language, scales to millions of lines of code, and is being adopted in the research and industrial communities. Such a framework enables researchers to easily evaluate their type systems in the context of a widely-used industrial language, Java. It enables non-researchers to verify their code in a lightweight way and to create custom analyses. And it enables students to better appreciate type system concepts. Pluggable type-checking as implemented by the Checker Framework is such a compelling use case that Oracle is adding support for type qualifiers to Java 8.
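
    The Checker Framework itself plugs into the Java compiler; purely as a Scala-flavored analogy (phantom types standing in for type qualifiers; all names invented), the sketch below enforces an "encrypt before sending" property at compile time:

    ```scala
    object EncryptionCheck {
      // Phantom types: never instantiated, they only tag Data at compile time.
      sealed trait Plaintext
      sealed trait Encrypted

      final case class Data[S](bytes: String)

      def encrypt(d: Data[Plaintext]): Data[Encrypted] =
        Data[Encrypted](d.bytes.reverse) // stand-in for real encryption

      // Only encrypted data may cross the network boundary.
      def sendOverNetwork(d: Data[Encrypted]): Unit =
        println(s"sending ${d.bytes}")

      def main(args: Array[String]): Unit = {
        val secret = Data[Plaintext]("top secret")
        sendOverNetwork(encrypt(secret)) // compiles
        // sendOverNetwork(secret)       // rejected by the type checker
      }
    }
    ```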
  • "Pharo"
    Abstract: Pharo is an open-source, Smalltalk-inspired system started in 2008; the official web site is http://www.pharo-project.org. By providing a stable and small core system, excellent development tools, and maintained releases, Pharo is an attractive platform on which to build and deploy mission-critical applications.
    The tutorial consists of two parts: in the first part (one third of the time) we will present Pharo, and in the second part (two thirds of the time) attendees will work on an exercise.
    Presentation (1h20). In this part we will present Pharo: its syntax, idioms, model, tools, live environment and reflection features, meta-structure, recent releases, and future plans. We will also talk about the community behind it: the companies, the research groups, the consortium, the association, and so on. After this slide-based presentation, we will demo a typical Pharo development session using the Pharo tools. We will make sure the presentation is interactive and will keep challenging the audience.
    Exercise (2h40). In this part we will give each attendee materials summarizing the Pharo syntax and idioms, along with some documentation about the tools. We will then give each attendee a small exercise to complete by themselves, designed to show the power behind Pharo.