Tutorials

PROFES 2017 tutorials complement and enhance the main conference program, offering insights into special topics of current and ongoing relevance.

The tutorials will be held on November 29th, 2017. On that day, each participant can attend either two half-day tutorials or one full-day tutorial.

Nov. 29, 2017 schedule:
  • Morning: T1. Analyzing the Potential of Big Data (Tutorial Track I, full-day); T2. Automatic Requirements Reviews (Tutorial Track II); T4. Process Mining: from Zero to Hero (Tutorial Track III, full-day)
  • Afternoon: T1 and T4 continue; T3. Need for Speed – Towards Real-time Business (half-day)


T1. Analyzing the Potential of Big Data (full-day)

Andreas Jedlitschka, Fraunhofer IESE, Germany

Description

Independent of any domain, all roadmaps and future scenarios clearly show that a service layer will be established between products and customers. This layer will be oriented more towards the business processes of the market participants and will create benefit especially through the combination of systems and data – for manufacturers, suppliers, service providers, and end customers. “Data-driven business models” and “Big Data” are the big buzzwords.

The goal of this tutorial is to raise participants’ awareness of the need to strategically plan big data projects and to align them with business goals.

As a starting point, participants will get an overview of current trends in digitization and big data. From there, we will go through a hands-on example explaining the theoretical concepts behind a goal-oriented big data strategy. Participants will be actively involved in the exercises and will experience different moderation methods that support the development of the strategy.

The tutorial consists of three parts: scoping, benefit, and readiness. All parts will be covered; however, due to their nature, the practical focus of the tutorial will be on the benefit phase. Participants will learn how to sketch a big data business model, estimate its benefits, and derive the required capabilities. For the exercise, we will use an example to which all participants can contribute.

Agenda
  • Welcome and Introduction Round
  • Introduction into digitization and big data
  • Scoping
  • Benefit Analysis (Part 1)
  • Lunch Break
  • Benefit Analysis (Part 2)
  • Readiness Analysis
  • Wrap-up and Closing
Presenter Information

Andreas Jedlitschka received his M.S. (1994) and PhD (2009) degrees in Computer Science from the University of Kaiserslautern. After working seven years as an IT consultant with an engineering company, he joined Fraunhofer IESE in 2000, where he has worked as a project manager and scientist. Currently, he heads the “Data Engineering” department, which deals with a systematic approach to data science (What, Why, When, Where, Who, How). His research interest is empirically-based and data-driven decision support, which is the core of his lecture at the University of Kaiserslautern on “Empirical Model Building and Methods”. He has published several papers in journals and at conferences. His most cited work is on “guidelines for reporting controlled experiments in software engineering”; these guidelines provide one starting point for the adoption of the evidence-based SE paradigm. Dr. Jedlitschka is a member of several program committees (e.g., ESEM, PROFES) and served as a Program Co-Chair of ESEM 2016 and as the General Chair of PROFES 2013. He represents Fraunhofer IESE in the International Software Engineering Research Network (ISERN). As a project manager, he frequently moderates workshops with customers.


T2. Automatic Requirements Reviews – Potentials, Limitations and Practical Tool Support (half-day)

Henning Femmer, Qualicen GmbH, Germany

Description

Natural language is still the primary means of documenting requirements. Requirements in natural language can be created and understood by all stakeholders without additional effort or a specific requirements engineering background. However, natural language poses the risk of being imprecise or ambiguous. Badly written requirements have an expensive impact on the whole project: incomplete or ambiguous requirements generate additional effort due to unnecessary feedback loops, and in the end, bad requirements lead to misinterpretations and finally to the wrong product.

Manual reviews are an effective tool to create high quality requirements documents. Although effective, this method comes with considerable effort. The manual inspection of the requirements by multiple reviewers and the integration of review results are time consuming. As one review cycle often takes days or weeks to complete, the author of the requirements has to wait a long time before receiving feedback. The result of these problems is that reviews are often only performed sporadically or only superficially.

Automatic review techniques have matured over recent years: a substantial set of widespread quality defects in requirements documents can now be found automatically. Examples of such defects are ambiguous wording or overly complex sentences. More complex defects, such as cloning, inadequate levels of abstraction, or wrong references within documents, can also be detected automatically.

In this tutorial, we discuss techniques for automatic requirements reviews. To do so, we take a requirements authoring guideline and systematically illustrate the potential, but also the limitations, of current automation techniques. To illustrate how automatic reviewing works in practice, we demonstrate a working tool chain. The tool chain includes an author perspective that brings real-time automatic reviews directly into industrial ALM software or office suites, and a second perspective that provides a bird’s-eye view for reviewers or quality engineers. In summary, automatic reviews do not replace, but complement, manual reviews. They reduce the time needed for manual reviews and provide faster and less expensive feedback for requirements authors.
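
To give a flavour of what such an automatic review does, the following minimal Python sketch flags a few typical requirements smells (vague terms, a rough passive-voice heuristic, overly long sentences). It is purely illustrative and not the Qualicen Scout or any other production tool; the term list, the heuristic, and the threshold are assumptions chosen for the example.

```python
# Illustrative sketch only: a tiny rule-based "requirements smell" checker.
# This is NOT the Qualicen Scout or any production tool; the term list,
# the passive-voice heuristic, and the length threshold are assumptions
# chosen purely to show the kind of lexical rules such analyses start from.
import re

VAGUE_TERMS = {"appropriate", "as soon as possible", "easy", "efficient",
               "fast", "flexible", "if possible", "user-friendly"}
MAX_WORDS_PER_SENTENCE = 30  # assumed threshold for an "overly complex" sentence


def find_smells(requirement):
    """Return a list of human-readable findings for a single requirement."""
    findings = []
    lowered = requirement.lower()
    for term in sorted(VAGUE_TERMS):
        if term in lowered:
            findings.append(f"vague term: '{term}'")
    # Very rough passive-voice heuristic: a form of 'to be' followed by a word ending in -ed.
    if re.search(r"\b(is|are|was|were|be|been|being)\s+\w+ed\b", lowered):
        findings.append("possible passive voice (actor left unclear)")
    if len(requirement.split()) > MAX_WORDS_PER_SENTENCE:
        findings.append(f"sentence longer than {MAX_WORDS_PER_SENTENCE} words")
    return findings


if __name__ == "__main__":
    req = "The data shall be processed as soon as possible by an appropriate component."
    for finding in find_smells(req):
        print(finding)
```

Running the sketch on the example requirement reports the vague phrases and the passive construction, which is exactly the kind of immediate author feedback the tutorial's tool chain demonstrates at industrial scale.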

Agenda
  • Introduction
  • Motivation: Drawbacks of manual quality assurance for RE artifacts
  • Concept: Requirements Smells and other approaches towards automatic quality assurance
  • Techniques: The state of the art in NLP and types of automatic analyses
  • Limitations: Frontiers of automatic analyses
  • Tooling: State of the practice with the Qualicen Scout
Presenter Information

Henning Femmer is a Ph.D. candidate at the Technical University of Munich (TUM) and co-founder of the requirements consulting company Qualicen. Qualicen helps large companies to assure the quality of their requirements and system tests. For this, Qualicen performs audits, tutorials, continuous quality control, and methodological consulting. Henning’s research focuses on improving the efficiency and effectiveness of requirements quality control, with a particular focus on automatic methods. He publishes at academic venues such as ICSE, RE, PROFES, and ESEM, but also speaks at industry-focused events such as REConf or Embedded World. In both his research and practical work, he aims to combine scientific rigor with industrial applicability in order to efficiently deliver high quality.


T3. Need for Speed – Towards Real-time Business (half-day)

  • Janne Järvinen, F-Secure Corporation, Finland
  • Tommi Mikkonen, University of Helsinki, Finland
  • Jari Partanen, Bittium Corporation, Finland
Description

The Finnish software-intensive industry has renewed its existing business and organizational ways of working towards a value-driven and adaptive real-time business paradigm. The industry is utilizing new technical infrastructure, such as data visualization and feedback from product delivery. These new capabilities, together with various sources of data and information, help in gaining and applying deep customer insight. This tutorial has been created and adapted from more than 100 concrete N4S consortium results in the public domain, including several successful examples of moving into adjacent markets and business areas.

Agenda
  • N4S introduction
  • N4S Building blocks
    • Real-time Value Delivery
    • Deep Customer Insight
    • Mercury Business
  • N4S Experiences: Case studies in several organizations
  • Advanced topics
    • Continuous Experimentation in an industrial setting
    • Exploitation (existing products/services) vs. Exploration (new products/services)
    • Applying N4S concepts in organizations of different maturity
    • N4S program experiences of conducting world-class software research while continuously delivering business impacts
Presenter Information

Dr. Janne Järvinen is Director, External R&D Collaboration at F-Secure Corporation. Janne has over 25 years of experience in the software business in different positions ranging from programmer to VP of Engineering, in both small and large software companies. He has also been active in various industry-driven research programmes at the national and international level, such as Need4Speed (www.n4s.fi). Janne also recently served as the Future Cloud Action Line Leader of EIT Digital. He is an IEEE member and holds a PhD in Information Processing Science from the University of Oulu (2000).

Professor Tommi Mikkonen received his doctorate in 1999 from Tampere University of Technology. During his career, he has published over 200 research articles and supervised close to 20 doctoral theses. His research interests include software business and architectures, web programming, and continuous software engineering. Presently, he is a full professor of software systems at the Department of Computer Science, University of Helsinki, Finland.

Jari Partanen obtained his MSc degree in Industrial Management in 1990 from the Process Engineering Department of the University of Oulu, Finland. Currently, he is the Head of Quality and Environment at Bittium. In recent years, Bittium has gone through a change towards a more Lean and Agile way of working and has adopted approaches such as Continuous Integration moving towards Continuous Deployment, continuous and transparent planning, and embedded DevOps practices. The results have shown major improvements in customer-perceived quality and lead time over time, and are also visible among the personnel, reflecting the cultural changes of the company.
Jari Partanen has been an active researcher, with well over 10 peer-reviewed articles, presentations, and publications. His research interests include continuous planning methods, real-time value delivery methods, innovation exploitation methods, and mass customization techniques. Recently, he has acted as an industrial Work Package leader in the Finnish DIMECC/TEKES Need for Speed program and as the main Bittium contact for the Accelerate project (ITEA3), for which he presented the developed solutions at the ITEA Digital Masterclass in Stockholm. Since November 2016, he has been the Exploitation and Innovation Manager in the H2020 Q-Rapids project.


T4. Process Mining: from Zero to Hero (full-day)

  • Andrea Janes, Free University of Bozen-Bolzano, Italy
  • Fabrizio Maria Maggi, University of Tartu, Estonia
  • Andrea Marrella, Sapienza University of Rome, Italy
  • Marco Montali, Free University of Bozen-Bolzano, Italy
Description

Process mining is a recent research discipline that sits between computational intelligence and data mining on the one hand, and process modeling and analysis on the other hand. Through process mining, decision makers can discover process models from data, compare expected and actual behaviors, and enrich models with key information about their actual execution. This, in turn, provides the basis to understand, maintain, and enhance processes based on reality. This tutorial has a twofold goal. Firstly, we will introduce the process mining framework, the main process mining techniques and tools, and the different phases of event data analysis through process mining, discussing the various ways data and process analysts can make use of the mined models. Secondly, we will have a hands-on session using concrete process mining tools, considering a standard business use case as well as the particular case of software processes. Finally, we will discuss common pitfalls and critical issues, so that everyone can start process mining right away.
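
As a small illustration of the starting point of process discovery, the Python sketch below computes a directly-follows relation from a flat event log keyed by case ID. It is only a didactic example under assumed data (a hard-coded toy log with a case ID, activity, timestamp layout); in the tutorial itself this step is performed with dedicated tools such as ProM or Disco on XES logs.

```python
# Illustrative sketch only: derive a directly-follows relation from a toy
# event log. Real process mining work would use ProM or Disco on XES logs;
# the log below and the column layout (case ID, activity, timestamp) are
# assumptions made for this example.
from collections import Counter, defaultdict

EVENTS = [  # (case_id, activity, timestamp)
    ("c1", "create ticket", 1), ("c1", "fix bug", 2), ("c1", "close ticket", 3),
    ("c2", "create ticket", 1), ("c2", "reject", 2),
    ("c3", "create ticket", 1), ("c3", "fix bug", 2), ("c3", "close ticket", 4),
]


def directly_follows(events):
    """Count how often activity a is directly followed by activity b within a case."""
    traces = defaultdict(list)
    # Group events into one trace per case, ordered by timestamp.
    for case_id, activity, _ in sorted(events, key=lambda e: (e[0], e[2])):
        traces[case_id].append(activity)
    dfg = Counter()
    for trace in traces.values():
        for a, b in zip(trace, trace[1:]):
            dfg[(a, b)] += 1
    return dfg


if __name__ == "__main__":
    for (a, b), count in directly_follows(EVENTS).most_common():
        print(f"{a} -> {b}: {count}")
```

The resulting counts (e.g., "create ticket -> fix bug: 2") are the kind of relation that discovery algorithms such as the Alpha or Heuristic miner turn into a process model, and they also show why choosing the right case ID is a key modeling decision.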

Agenda
  1. Introduction to the process mining framework
  2. Step 1: Designing process models and collecting event logs
    1. Process modeling: basics of Petri Nets and BPMN
    2. The XES (eXtensible Event Stream) standard and OpenXES reference implementation
  3. Step 2: Mining event logs, discovering processes
    1. ProM, the open-source jack of all trades
    2. Choosing a mining algorithm: Alpha, Heuristic miner, Fuzzy miner, or Multi-phase miner?
    3. Disco, the user-friendly tool
  4. Interpreting the mined models: discovery, conformance checking, and enhancement
    1. Deciding from which perspective to look at the data: choosing a case ID
    2. On the representational bias of process mining
  5. Hands-on session (in pairs)
    1. Process mining success stories for business processes and software processes
    2. Walk-through of two process mining examples examining a business process and a software process, discussing strategies to collect, filter, analyze, and interpret the data.
      1. Discussion of differences between mining business and software processes
Presenter Information

Andrea Janes is an Assistant Professor at the Free University of Bozen-Bolzano (Italy). He received his Master’s degree in Computer Science from the Technical University of Vienna (Austria) and his doctorate in Computer Science (with distinction) from the University of Klagenfurt (Austria). He has worked both in industry and academia, as a freelancer, R&D engineer, and consultant. His research interests include software design, software quality, empirical and experimental software engineering, software analytics, Agile and Lean software development processes, and software testing. He has authored more than 50 articles and 1 book on his work to improve the efficiency of software development processes using non-distracting measurement techniques, as well as on the introduction of Agile and Lean software production methods. More recently, he has become interested in technology transfer activities in the context of small and medium enterprises.

Fabrizio Maria Maggi received his PhD degree in Computer Science in 2010. After a period at the Architecture of Information Systems (AIS) research group – Department of Mathematics and Computer Science – Eindhoven University of Technology, he is currently a Senior Researcher at the Software Engineering Group – Institute of Computer Science – University of Tartu. His PhD dissertation was entitled “Process Modelling, Implementation and Improvement”, and in recent years his areas of interest have included business process management, service-oriented computing, and software engineering. He has authored more than 80 articles on process mining, (declarative) business process modeling and business constraints/rules, monitoring of business constraints at runtime, service-oriented architectures, service choreographies, and service composition. He received the best paper award of the BPM conference (the most prestigious conference in the field of Business Process Management) in both 2015 and 2016, and he serves as a senior program committee member of the same conference. In 2015, he received the best researcher award granted by the Department of Computer Science of the University of Tartu.

Andrea Marrella is a senior post-doctoral researcher at the Department of Computer, Control, and Management Engineering at Sapienza University of Rome. His research interests include Business Process Management, Process Mining, Knowledge Representation, Reasoning about Action, Automated Planning, and Human-Computer Interaction. His recent research concentrates on the application of automated planning techniques to solve problems and challenges coming from other research fields, e.g., the automated adaptation of business processes in cyber-physical domains, the automatic generation of process models, the conformance checking of imperative and declarative business processes, and the automated diagnosis of learnability in Human-Computer Interaction. He has published over 40 research papers and articles and 1 book chapter on the above topics, among others in ACM Transactions on Intelligent Systems and Technology, Expert Systems with Applications, IEEE Internet Computing, and the Journal on Data Semantics, and at the KR, ICAPS, CAiSE, and AAAI conferences. Furthermore, he is the principal investigator of the research project entitled “Data-aware Adaptation of Knowledge-intensive Processes in Cyber-Physical Domains through Action-based Languages”, which was funded by Sapienza University of Rome in 2016.

Marco Montali is a Senior Researcher at the KRDB Research Centre for Knowledge and Data, Faculty of Computer Science, Free University of Bozen-Bolzano (Italy). He devises techniques grounded in artificial intelligence, formal methods, and knowledge representation and reasoning for the intelligent management of dynamic systems operating over data, with particular emphasis on business processes and multiagent systems. On these topics, he has authored more than 130 papers, published in top-tier international journals, conferences, and workshops, such as ACM TWEB, ACM TIST, JAIR, Information Systems, PODS, IJCAI, AAAI, BPM, CAiSE, and ICSOC. He has been an investigator in the EU STREP Project ACSI (Artifact-Centric Service Interoperation, FP7-257593), principal investigator and co-investigator in several local projects focused on business processes and data, and is currently principal co-investigator in the Interregional Project Network IPN12 KAOS (Knowledge-Aware Operational Support). In 2015, he received the “Marco Somalvico” 2015 Prize from the Italian Association for Artificial Intelligence as the best Italian researcher under 35 who autonomously contributed to advancing the state of the art in AI. In 2010, his PhD thesis received the “Marco Cadoli” 2007-2009 Prize from the Italian Association for Logic Programming as the best Italian thesis focused on computational logic and defended between 2007 and 2009. He is the recipient of 4 best paper awards, two of which were at the International Conference on Business Process Management. According to Google Scholar, he has an h-index of 29 and has received more than 2700 citations.