Room 203.IV

The Place of Agile Methods in the History of Software Engineering

Bertrand Meyer

Bertrand Meyer is a professor at Politecnico di Milano and Innopolis University, with a 2015-2016 Chair of Excellence at the University of Toulouse. He was previously at ETH Zurich and is Chief Architect of Eiffel Software. He has published a number of books about software engineering, most recently “Agile! The Good, the Hype and the Ugly” (Springer). His awards include the Harlan Mills Prize, a Jolt Award and the ACM Software System Award.

Abstract: Most of the discourse around agile methods has been inspirational and contrarian: its goal is to convince, and it presents agility as a reaction against more traditional techniques. As the dust settles and agile methods gain respectability, it is time to take a broader and more dispassionate view. In fact, the best ideas of agile methods do not negate preceding advances in software engineering; they complement them and rectify some of their deficiencies. In the spirit of the book “Agile! The Good, the Hype and the Ugly” (Springer), this talk presents agile methods in the general context of the evolution of software engineering principles, methods and tools. Taking the perspective of a practicing software developer and manager focused on quality and productivity, it dissects the key agile ideas, assessing their pluses and minuses. The result is an analysis of the agile approach not as an outlier or an exception but as a major step in the evolution of our ideas of software engineering.

Measuring and Managing Maintainability at Industrial Scale

Joost Visser

Joost Visser is Head of Research at the Software Improvement Group (SIG) in Amsterdam, The Netherlands. Joost also holds a part-time position as Professor of “Large-Scale Software Systems” at the Radboud University Nijmegen, The Netherlands.

Abstract: Maintainability is a core aspect of software quality and a key driver for successful software projects and products. Though the importance of maintainability is recognised almost universally by software engineers, industrial practice shows that maintainability is still rarely managed in a rigorous way. In this presentation, I will relate some of the practical lessons and more theoretical insights accumulated over a decade of helping a wide range of organisations to manage software quality and maintainability. In particular, I will cover the initial development of a practical model for rating maintainability, the use of this model in assessment and certification of software products, the annual recalibration and evolution of the model, a scalable infrastructure for continuous monitoring of maintainability, common pitfalls when using maintainability measurements for project management, and some of the insights gained from probably the largest industrial software analysis benchmark database worldwide.

On to Code Reviews – Lessons Learned at Microsoft

Michaela Greiler

Michaela Greiler works as a software engineer and researcher at Microsoft in the TSE (Tools for Software Engineers) team, in close collaboration with the Empirical Software Engineering group at Microsoft Research (MSR). Michaela focuses on data-driven software engineering – that is, improving software development practices and processes by mining and analyzing engineering process data. She received a PhD in Software Engineering from the Delft University of Technology for her research on test suite comprehension, and holds a Master’s and Bachelor’s degree in Computer Science.

Abstract: Four eyes see more than two. Following this well-known principle, code reviews form part of the backbone of Microsoft’s quality culture. Microsoft is not alone in betting on code reviews: over the past decade, both open-source and commercial software projects have adopted code review as a quality control mechanism. Yet even though code reviews have many benefits, our experience and prior research show that developers spend a large amount of time and effort performing them. At Microsoft, we therefore constantly seek to improve our understanding of the practice of code reviewing. We do so by analyzing millions of code reviews produced by our engineers, and we complement this data by observing, interviewing and surveying groups of software engineers who participate in code reviewing. In this talk, I will give an overview of Microsoft’s code review practices, explain what we learned about why and how they are performed, which benefits, challenges, do’s and don’ts come with them, and which open questions remain.