Why Run a Technology Pilot?

Steven Walling of ReadWriteWeb polled the site’s readers on the perceived value of technology pilots. A comment on the article suggested the generally small size of pilots makes them irrelevant to large organizations:

Interaction among workers in a tiny subset of your organization isn’t a fair test.

However, of the 84 responses to Steven’s poll, 30% said pilots are essential and 53% said it depends on the particular software being introduced. Only 17% said pilots are not valuable to the overall adoption process.

Pilots work well because they are culturally and functionally useful. Most organizations are not going to test a new technology on their entire workforce, so a pilot with a subset of employees is the next best option. One of the most important benefits of running a pilot is that it gives you an opportunity to discover and fix technical issues before a tool is made available to thousands or tens of thousands of people. Fixing those problems while they’re small, and while you’re working with an empathetic pilot group, makes it much easier to refine the system for a smooth rollout and adoption.

That’s the part about pilots that’s inarguable in my experience. If you find and fix a problem with the assistance of that compact group of helpful people, you’ll be in much better shape than if you have 10,000 angry people beating down your door while you try to fix a major problem that has brought down the whole system. Political debates about the importance of pilots can go one way or the other, but technical problems can derail even the best-laid plans. Ann All wrote a great article exploring why her organization’s wiki hasn’t taken off, and she cited technical problems as a major factor. A pilot would have helped catch and fix those problems early.

What’s critically important to the usefulness of a pilot is how you choose its participants. If you only involve hard-core early adopters, then of course the results won’t be representative. If, however, you build a pilot that draws people from several key areas of the organization (HR, product/project groups, technical writers, customer support/service, etc.), and those people understand that they’re getting the chance to set up their uses before everyone else in exchange for their help and patience in finding and fixing technical problems, you’ll end up with a group that solidly supports the project once it’s opened up for large-scale use. D. Keith Casey Jr. wrote that pilots give you relevant examples of use within the organization, which help present the new tool as less of an unknown, and therefore less of a major, threatening change:

By deploying a tool and training on it within a single group/project/department, you’re working with a small group and can work with them to show/convince them of the value. If it’s successful and you roll it out further, you have a sample of real live data *and* potentially a few champions whose lives/work were made better. Now instead of you trying to teach everyone, you have other users doing it.

The use cases developed by your pilot group are an important part of the larger-scale rollout that follows the pilot, because they give the rest of the organization a set of tangible starting points. Some teams will want to mimic the examples generated by your pilot, and others will say “hey, my case is different.” Give the former as much guidance and access to the structural and process details of the use cases they want to mimic as you can, and let the latter develop uses that meet their particular needs. Ultimately, running a pilot gives people an opportunity to actively engage with the tool and make it a productive part of their daily work. Clare Flanagan of CSC offered an assessment of the value of pilots based on her own experience running one:

As a program lead for our internal E2.0 pilot I can say with confidence that pilots are important culturally and functionally.

Pilots allow you to test the waters. For us, we had tremendous experience with collaboration tools in the past. We knew the shortcomings of these tools. Our pilot had several critical success factors to prove out (security, features, adoption, business terms, would it scale, could it perform, and so on). So that was all the functional part.

But, and more importantly, when we’re talking about E2.0 tools – or social collaboration – there’s some ‘cultural’ sell as well. Pilots represent a ‘smaller’ chunk of work or investment. They are easier to sell to management. And if they don’t work, you adjust.

If they do work, pilot results are HUGELY important leverage for the next stage – the full business case. What better way to make a full business case than by having experience, performance metrics, success stories and user anecdotes to back up your assertions?
