Web 2.0 in software development

I gave an expert talk on 14 April 2010 at the Digitaalinen Suomi seminar in Jyväskylä on the topic in the title. Here is a 45-minute slidecast recording of my talk. I cover the themes both from the perspective of building web 2.0 applications and from that of using web 2.0 services in software development in general. Some technical matters are included as well, such as cloud architecture and NoSQL databases. It was fun to speak about a purely technical topic for a change; the inner nerd needs feeding every now and then. Oh, and I can be booked to speak at other events as well.

Failures of the scientific method and the waterfall method (again)

I originally discussed the waterfall method in a previous blog post. Pranav just made a comment that I feel needs some further discussion:

I disagree with the following statement: “This is how science (unfortunately) often works – researchers just cite something, because everyone else does so as well, and don’t really read the publications that they refer to. So eventually an often cited claim becomes “fact”.”

I would like to say that this is generally not how science works. Claims(predictions) made by any scientific theory is verified by multiple laboratories/people over a period of time and they are always falsifiable.

The problem with software development is that it is not based on a scientific theory yet (I am not talking about computer science here, only software development). It has been said many times that it is more of an art than science.

Problems with science

Pranav, thanks for your comment. Of course you are right that science should normally work by verifying findings, and falsifying those that do not hold up to the evidence. However, my experience as a researcher has shown that this is unfortunately often not what happens. I'll present a few examples.

First, the convention of citing another article. If you follow the rules strictly, you should only cite someone when you are referring to their empirical findings, or to the conclusions they draw from those findings. Nothing else. Nothing. Unfortunately, most scientific articles do not follow these rules, but liberally make citations like "X used a similar research design (X, 2006)". Wrong! While X may very well have done so, you're not supposed to cite that. You only cite empirical results and the conclusions drawn from them. But this is what happens. Because citation practice has degenerated across all fields, it is quite difficult to judge the quality of a citation: is the claim that the citation backs based on empirical evidence, or is it just fluff? It has become too easy to build "castles in the sky": researchers cite each other and build up a common understanding, but with no empirical evidence behind what everyone is citing.

This is what has happened to the Waterfall model. Researchers across the whole field have cited Royce (1970) because that's what everyone else has done. And the citation isn't valid: there is no empirical evidence to back the claim that a linear process works. In fact there is not even a fluffy "I think so" claim to that effect; Royce considers the idea unworkable. This hasn't stopped hundreds of researchers from citing him. I consider this an excellent example of the failure of the scientific method. The method in theory is still solid, but the degenerate practices have made it prone to failure.

In my field of educational technology I see this only too often. Cloud castles are built as researchers cite each other in a merry-go-round, with no one realizing that whoever started the idea did not in fact have empirical evidence for it, just a fluffy notion. And shooting down these established cloud castles is not easy, because the whole scientific publishing industry is also skewed:

  • You usually can't get an article published unless you cite the editors and reviewers of the journal, and support their findings. Therefore controversial results are hard to publish in the very journal where they would be most useful.
  • Researchers often do not publish zero-findings. That is, when they look for a certain pattern and fail to find it, they just scrap the whole thing and move on. But the scientific method actually needs these zero-findings too, because there are many fluffy theories that occasionally get statistically significant empirical support (at a 95% confidence level, 5% of studies will find a connection even when there is none; see the sketch right after this list), yet cannot easily be proved wrong. The numerous studies that fail to find a connection would show that the theory isn't very strong. But these almost never get published.
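To make the 5% figure concrete, here is a small Python simulation (a sketch of my own, with made-up parameters, not data from any real study). It runs thousands of "studies" of an effect that does not exist and counts how many reach significance anyway:

```python
import numpy as np

rng = np.random.default_rng(42)
n_studies = 10_000   # independent "labs" testing the same nonexistent effect
n = 30               # participants per group

false_positives = 0
for _ in range(n_studies):
    # Two groups drawn from the SAME distribution: the true effect is zero.
    a = rng.normal(0.0, 1.0, n)
    b = rng.normal(0.0, 1.0, n)
    # Welch's t statistic for the difference of the group means.
    t = (a.mean() - b.mean()) / np.sqrt(a.var(ddof=1) / n + b.var(ddof=1) / n)
    # |t| > 2.0 roughly corresponds to p < 0.05 at these sample sizes.
    if abs(t) > 2.0:
        false_positives += 1

print(f"{false_positives / n_studies:.1%} of studies 'found' the nonexistent effect")
# Prints roughly 5%. If only those studies get published and the ~95%
# zero-findings are shelved, the literature shows a "confirmed" effect.
```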

And let's make it clear that this is not a problem of "soft science" alone. Take statistics, for example. Those who do statistics will know Cronbach's alpha as a reliability estimator. It is the most commonly used estimator, and has been in wide use for decades. Unfortunately, it does not work for multidimensional data, which most data actually is. It is still used for that regardless, because "that's what everybody is using". Here's the article in which Professor Tarkkonen proves that Cronbach's alpha is invalid for multidimensional data, and in fact for most data (pdf). You'll notice that it has not been published in a peer-reviewed journal. I'm told it was submitted years ago to a journal, and the editor replied something to this effect:

"Look, your article is sound and I cannot find any fault in it, but we can't publish this. I mean, we've been using Cronbach's alpha for decades. Publishing this would make us all look silly."
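To see concretely what goes wrong, here is a toy Python sketch (my own illustration, using the standard alpha formula on simulated data): a six-item "scale" that actually measures two completely independent traits still earns a conventionally "acceptable" alpha of about 0.7.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
n = 1000
# Two INDEPENDENT latent traits: the data is genuinely two-dimensional.
trait_a = rng.normal(size=n)
trait_b = rng.normal(size=n)

# Items 1-3 measure trait A, items 4-6 measure trait B, plus noise.
noise = rng.normal(scale=0.7, size=(n, 6))
items = np.column_stack([trait_a] * 3 + [trait_b] * 3) + noise

print(f"alpha = {cronbach_alpha(items):.2f}")
# Prints roughly 0.7, the conventional "acceptable reliability" threshold,
# even though the scale mixes two traits that have nothing to do with each
# other. Alpha alone cannot reveal the multidimensionality.
```

So a single alpha value can look fine while the instrument measures two different things at once, which is exactly the multidimensional case that Tarkkonen's article addresses.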

Waterfall and software engineering

OK, back to the waterfall method and software engineering. I like Pranav's remark that making software is more of an art than a science. And I agree. Creating new software is not like producing another car off the assembly line; it's like designing a new car. Creating copies in bits is easy and cheap, since the engineering of computers is good enough for that. But making software is more of an art, and a design process.

In fact I have a problem with the term "software engineering", because software isn't engineered; it's envisioned, explored, prototyped, tried out, iterated, redone, and designed. Researchers and managers have been trying to make software engineering work for several decades now, and what are the outcomes? Can we build software as reliably as we can build a new bridge? No. Then again, if bridge builders were given requirements like "use rubber as the construction material" for each new bridge, with the requirements, tools, and crew changing every time, building bridges would not be so predictable either.

But this doesn't mean that software development can't be studied scientifically. If we can gather empirical data (such as source-code metrics and budget or schedule overruns) and find patterns in it, then there is a science. There is science in the arts as well: compatible colour combinations, the golden ratio, and other rules (of thumb) are based on experience, and by now many of them are backed by proper studies, not just folklore.
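As a toy illustration of the kind of empirical data gathering I mean (a sketch of my own, not taken from any study): even a crude source-code metric can be collected mechanically, release after release, and then set against schedule and budget data to look for patterns.

```python
from pathlib import Path

def crude_metrics(root: str) -> dict:
    """Count files, lines, and function definitions in a Python codebase.
    Deliberately crude; real studies use much richer metrics."""
    files = list(Path(root).rglob("*.py"))
    lines = functions = 0
    for f in files:
        for line in f.read_text(errors="ignore").splitlines():
            lines += 1
            if line.lstrip().startswith("def "):
                functions += 1
    return {"files": len(files), "lines": lines, "functions": functions}

print(crude_metrics("."))   # e.g. {'files': 12, 'lines': 3400, 'functions': 180}
```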

So software development, too, can be studied scientifically, even though much of it is unpredictable and challenging. My understanding is that most studies published in the last decade quite uniformly find that practices such as iterative life-cycle models, lean management, intuitive expert estimation instead of automated model estimation, and plain common sense correlate well with successful software development projects. So there are some rules of thumb with real empirical backing.

Even the waterfall method has empirical studies showing that it is a really horrible way to make software, except in some extremely specific circumstances, which almost never coincide in a single project. If they did, it would be the most boring project in the world, and I would not want any part in it. Remember, NASA flew to the moon in the 1960s with iteratively developed software and hardware. And I'd bet the guidance software for the Patriot missiles (which never hit anything in the 1990s) was based on a linear model.

The agile way of starting a company

I just listened to Greg Gianforte's presentation on bootstrapping a company. His message is that the way of starting a company taught in all business schools is not the right way. Of the hundreds of thousands of startups founded every year, only one percent follow the business-school dogma of writing a business plan, finding investors, and only then starting to develop a product.

Greg's message is also available in his book Bootstrapping Your Business: Start and Grow a Successful Company with Almost No Money. The basic message is to start by contacting potential customers and asking them whether they would be interested in the product or service, and if not, what would make them interested. After a good number of these inquiries you'll have a very good idea of what you should develop to get interested customers. And you'll have it a lot sooner than with the traditional method, where the first contact with potential customers can come 1-2 years after you start drafting the business plan. At that point it is much harder to accommodate customer requests you hadn't thought of. With bootstrapping you can respond to every request, because you have no product yet.

This whole approach seems quite reasonable, and it has a good analogy in agile software development. The method taught in software engineering schools is to start with requirements gathering, followed by analysis and design, then implementation, then testing, and finally delivery. The agile approach is to start with a minimal set of requirements, write tests first, then implement the code that passes them, do analysis and design as necessary, refactor as you go, deliver every 2-4 weeks, and gather more requirements after each delivery. The benefits are pretty much the same: you have customer contact with every release, not just at the end of the project; you have a working product after 1-2 iterations; and you can respond to customer requests more easily.
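In code, "tests first" looks something like this minimal Python sketch (the feature and its numbers are hypothetical): the test exists and fails before the feature does, and the implementation is written only to make it pass.

```python
import unittest

# Written SECOND: the implementation only exists to satisfy the tests
# below. In real work it grows through refactoring, iteration by
# iteration, delivery by delivery.
def shipping_cost(weight_kg: float) -> float:
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 4.90 if weight_kg <= 1.0 else 4.90 + 2.0 * (weight_kg - 1.0)

# Written FIRST: each test comes straight from a customer requirement.
class TestShippingCost(unittest.TestCase):
    def test_small_parcel_has_flat_rate(self):
        self.assertEqual(shipping_cost(0.5), 4.90)

    def test_heavier_parcels_cost_more(self):
        self.assertGreater(shipping_cost(3.0), shipping_cost(1.0))

    def test_invalid_weight_is_rejected(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1.0)

if __name__ == "__main__":
    unittest.main()
```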

The power of the PHP community and the demise of Microsoft

When I read Moishe Lettvin's blog entry about the design of the Windows Vista shutdown mechanism (moblog: The Windows Shutdown crapfest), it immediately connected with the podcast from Open Source Conversations on the business benefits of PHP. The main benefit that several companies gained by switching to PHP was reduced time-to-market. The main reason is not that the language is simple, but that the PHP community has become so huge and vibrant that any new feature (be it barcode reading, Excel sheet processing, PDF generation, etc.) is more often than not already available as a PHP module. So instead of implementing new features from scratch, the work is more along the lines of combining modules into an application.

Now, any beginning PHP coder out there should take heart: the difference between a beginning PHP programmer and an expert PHP programmer is knowledge of what's available. There are countless modules out there, so for task X: which options exist, which is the best one, and is it good enough to use as such, should we use it and improve it, or do we need to build from scratch? These are questions that a beginning PHP programmer cannot answer. So knowledge of the vocabulary of the language (libraries and modules), rather than its grammar (the syntax), is the deciding factor.
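The same vocabulary-over-grammar point applies in any ecosystem. Here is the idea as a Python analogy (the post is about PHP, but the principle transfers): faced with a hypothetical "export our report as JSON" request, a beginner starts writing a parser, while an experienced developer knows two existing modules already do the work.

```python
import csv
import io
import json

# Stand-in for a real data source (hypothetical report data).
raw = "name,visits\nalice,42\nbob,7\n"

rows = list(csv.DictReader(io.StringIO(raw)))   # existing module #1: parsing
print(json.dumps(rows, indent=2))               # existing module #2: output
```

Knowing that these modules exist, and that they are good enough to use as such, is exactly the expert knowledge described above.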

But back to the main topic. Enterprise-level companies are starting to use PHP because

  • there is enterprise-level support available, and
  • the vast number of freely available modules decreases time-to-market.

Let's focus on time-to-market. If you listen to the podcast, you'll hear that time-to-market is a mission-critical factor for many companies. If their customers need a new feature in the services they provide and they cannot deliver it quickly, the customers will find another service provider. So getting that new feature out in days, instead of weeks, is of the utmost importance.

Now enter Microsoft. What made Microsoft big was that the QDOS-based DOS that Bill Gates maneuvered into becoming the base OS of the IBM PC was a disruptive innovation: it wasn't as good as the competition, but it was good enough. The strength of Microsoft back in 1980 was that it was small, maneuverable, and fast, unlike the bulky, slow IBM.

What's the situation now? It took them five years to deliver a new version of Internet Explorer. What happened in the meantime? They lost their customer base. The market share of IE dropped from 95% to 60%. Why? Because they weren't able to provide the new features that the competition (Firefox and Opera) was able to roll out, and customers started switching to something else.

It has taken them too many years to get the next release of Windows out. Vista is now coming, but it's too little, too late. Its main new feature seems to be doubled hardware requirements. The "new UI innovations" were done by Apple in Mac OS X years ago. The glossy new graphical excellence has already been done on Linux with its 3D-accelerated window managers. The new revolutionary file system was dropped from the release, while the Linux community has developed several. The new security features still lag behind what is available for Linux.

And what's the situation now? On the server side Linux is extending its dominance. On the client side Linux is gaining slowly but surely. Many national governments are switching from Windows to Linux, from MS Office to OpenOffice.org, from IE to Firefox. Why? Because the open alternatives are better, thanks to pricing, licensing, openness, and, let's not forget, faster time-to-market. The open alternatives release new versions regularly, usually several times a year, versus Microsoft's 3-5-year release cycle. The open alternatives release patches in days, not months. The open alternatives release critical security fixes in a matter of hours, not every 2-4 weeks.

Agility, maneuverability, reaction time to customer demands, and willingness to react are key factors in keeping a customer base. Microsoft is delivering none of these and is losing badly. Based on the blog post I mentioned at the beginning, the reason is also very clear: Microsoft has become the bloated, bureaucratic, clumsy, slow monster that it originally vanquished by being lean, agile, and fast back in the 1980s.

What's left? Protection of their crumbling monopoly. As Lawrence Lessig has been telling me, it is cost-efficient for a monopoly to spend all of its capital minus $1 on protecting that monopoly. And apparently this is what Microsoft is planning. Lessig reported several years ago that at Stanford Law School the "suck effect" is the sound they hear as Microsoft hires every patent lawyer the school graduates. The stage is set for a huge patent war between Microsoft and the open source community.

Distributed design and development using agile methods and Trac (XP2006 presentation)

I’m hosting a session at the XP2006 conference and here’s the material for that session.

Also, the title should probably say Dispersed, not Distributed, since that's what we're doing (not just having teams in separate locations, but having individual team members in separate locations).

Managing an agile software team in 4 countries

We're now two months into CALIBRATE, a huge software development project funded by European Schoolnet. It has close to 20 partners in different EU countries, and our team coordinates one of the work packages, in which we have 30 months to build a jazzy web portal for creating, browsing, and using learning material across the EU. We're calling it the "Toolbox" for now.

From the software development point of view, the most interesting aspect of this project is that we're attempting to do agile development (using a modification of Scrum and XP) with a team of 8 developers who reside in four countries (Estonia, Finland, Hungary, and Norway). One of the most important principles in agile is to keep the team together. Then again, splitting the team apart is an issue that needs addressing in a waterfall process, too.

So the problem is that there are a few people in each country, and no single location has enough people to form a working development team. Somehow we need to create a team that is present in all four locations simultaneously. Having attended Agile Finland's seminar in Helsinki, I was convinced that one of the key make-or-break issues in this project will be communication and team dynamics.

A good book to read was Organizational Patterns of Agile Software Development by James O. Coplien and Neil B. Harrison, which is basically a collection of best practices (or patterns) for software development, from the point of view of the people doing the work.

A plan was then set: geographically distributed work can work, as long as no one is left out of the loop. This means that no important decisions should be made by people in one location alone, and no information should be kept where others cannot access it. Everything should be done as openly as possible. After a quick review of the available software, we decided to go with Trac, an open-source development platform that integrates a wiki, a ticket system, and version control into one browsable, interlinked web interface. You can take a look at our project's Trac site at http://goedel.uiah.fi/projects/calibrate/. The burndown chart and time tracking are our additions to plain Trac, allowing some measure of time estimation (the burndown chart is a Scrum practice).

When browsing our Trac, you'll notice that everything is done completely in the open. All meeting minutes are there; all discussions about concepts and features are there. Every patch committed into the version control system (Subversion, by the way) is connected to a defect or enhancement ticket, which in turn is connected to a user story ticket, so the reason for everything we do can be traced. Our team also has a mailing list, to which Trac automatically sends all ticket changes and Subversion sends all commit diffs.
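The commit-ticket linking rides on the commit messages themselves: Trac's post-commit hook scans each message for references like "closes #123" or "refs #456" and updates the tickets accordingly. Here is a simplified Python sketch of that parsing idea (my own illustration, not Trac's actual code, which handles more command variants):

```python
import re

# Recognize ticket commands in commit messages, in the spirit of
# Trac's post-commit hook.
TICKET_RE = re.compile(r"\b(closes|fixes|refs)\s+#(\d+)", re.IGNORECASE)

def ticket_actions(commit_message: str) -> list[tuple[str, int]]:
    """Return (action, ticket_id) pairs found in a commit message."""
    return [(verb.lower(), int(num))
            for verb, num in TICKET_RE.findall(commit_message)]

msg = "Validate login redirect URL, closes #42, refs #17"
print(ticket_actions(msg))   # [('closes', 42), ('refs', 17)]
```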

And yes, we did start with "Face to Face Before Working Remotely" by inviting everybody to Helsinki for a week of intensive workshops, sauna, and swimming in the sea (in November). I'll post another entry discussing the organizational patterns we use in more detail.

Don't draw diagrams of wrong practices – or: Why people still believe in the Waterfall model

The Waterfall model was first described by Winston W. Royce in 1970. He wrote a scientific article containing his personal views on software development. In the first half of the article he discusses a process that he calls "grandiose". He even drew a figure of the model, and another one showing why it doesn't work (because the requirements always change). This model is the waterfall, and he used it as an example of a process that simply does not work. In the latter half of the article he describes an iterative process that he deems much better.

OK, so why do people still advocate the waterfall? If you look at the scientific articles on software engineering that discuss the waterfall, they all cite Royce's article. In other words, they say something like "the waterfall is a proven method (Royce, 1970)". So they base their claims on an article that actually says the opposite: that the model does not work.

This is how science (unfortunately) often works – researchers just cite something, because everyone else does so as well, and don’t really read the publications that they refer to. So eventually an often cited claim becomes “fact”.

Agile methods and the linear method were both already in use in the 1950s. The scientific mis-citing boosted the waterfall's popularity, but by the 1980s it was being phased out, because people had found out (the hard way) that it does not work.

But then things change: a busy engineer in the US defense establishment is asked to come up with a standard for military-grade software projects. He doesn't know what a good process would be, and he's told: "Hey, Royce already came up with the correct method, the waterfall. Use it." So the waterfall becomes the US military standard DoD-2167.

A bit later the NATO countries notice that they too need a software engineering standard, and they ask the US for theirs: no need to reinvent the wheel, right? And thus the waterfall is again used everywhere, because if the military uses it, it must be good. Right?

Here in Finland we had already dropped the waterfall in the 1980s, but after DoD-2167 was shipped over the Atlantic like a weapon of mass destruction, it became the standard again.

What have we learned here? The agile (iterative, incremental) methods are not new; they're as old as the waterfall. There is no scientific evidence that the waterfall works; on the contrary, most projects run that way fail. There is, however, massive evidence that agile methods improve productivity, quality, and success rates.

So what should we really have learned? People remember images. And people often read the beginning of an article, but not the end.

So: don't draw figures or diagrams of wrong models, because people will remember them. In the worst case, that can cause hundreds of thousands of failed projects and billions of euros and dollars wasted for nothing.

Update (February 2010): Commenters have been asking for references to back up my claims, so I have added links to the most central documents referred to in this blog post.