The “observer effect” applied to digital transformation

A different take on GDS’s Performance Platform

The “observer effect” states that the very act of observing something changes it. Developing tools to measure the performance of a digital transformation – such as the GDS Performance Platform – is a key step in the transformation journey itself, as it can accelerate the process and steer it towards positive outcomes.

The act of measuring is change itself

In science, the term “observer effect” refers to changes that the act of observation will make on a phenomenon being observed. This is often the result of instruments that, by necessity, alter the state of what they measure in some manner. A commonplace example is checking the pressure in a car’s tyre: this is difficult to do without letting out some of the air, thus changing the pressure.

The GDS’s Performance Platform

The Government Digital Service (GDS) Performance Platform started as a simple dashboard displaying web traffic data on gov.uk. It has since become a key tool that gives departments the ability to monitor the performance of their digital services in real time, aggregating data from a range of sources including web analytics, survey and finance data.

The Digital by Default Service Standard – a set of criteria all government services must meet – now mandates four key performance indicators (KPIs): cost per transaction, user satisfaction, completion rate and digital take-up. These KPIs can be used to measure the success of a service and to support decision-making and the planning of improvements.
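To make the four KPIs concrete, here is a minimal sketch of how they might be computed from hypothetical service data. This is purely illustrative – the function name, inputs and figures are invented for this post and are not how the Performance Platform itself is implemented.

```python
# Illustrative sketch (hypothetical, not the Performance Platform's actual code):
# computing the four Digital by Default KPIs from invented service counts/costs.

def service_kpis(total_cost, transactions_started, transactions_completed,
                 digital_transactions, satisfied_responses, survey_responses):
    """Return the four mandated KPIs as simple ratios."""
    return {
        # total running cost divided by completed transactions
        "cost_per_transaction": total_cost / transactions_completed,
        # share of survey respondents who reported being satisfied
        "user_satisfaction": satisfied_responses / survey_responses,
        # share of started transactions that were completed
        "completion_rate": transactions_completed / transactions_started,
        # share of completed transactions done through the digital channel
        "digital_take_up": digital_transactions / transactions_completed,
    }

# Invented example figures for a hypothetical service
kpis = service_kpis(total_cost=50_000, transactions_started=12_000,
                    transactions_completed=10_000, digital_transactions=8_500,
                    satisfied_responses=900, survey_responses=1_000)
print(kpis)
```

Even a back-of-the-envelope calculation like this makes the trade-offs visible: lowering cost per transaction while completion rate falls is not an improvement.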

As with the tyre pressure, the very act of measuring those indicators influences and accelerates the transformation process, focusing departments’ attention on delivering efficiency and quality of service to citizens. This is a key enabler of any transformation journey, and it will be interesting to see how far the Performance Platform goes in the coming years.

(Note: although this example is specific to the public sector, the above applies just as readily to private organisations – this will be the subject of another blog post.)

Where next? The difference between performance and evaluation

Performance measurement and evaluation are complementary activities. Evaluation gives meaning to performance measurement and performance measurement gives empirical rigour (evidence) to evaluation.

Performance measurements do not question the objectives themselves and, therefore, stop short of any final judgement as to whether the programme or activity was good or bad – only whether it was successful (or not) within the narrow confines of its mandate.

The current debate on Gov2.0/Government as a Platform is precisely about the purpose of governments in the 21st century, with two schools of thought arguing that it’s either the profitable thing to do or, well, the right thing to do.

Although there is not yet an agreed approach to evaluating the impact Government as a Platform will have on wider society, tools such as the Performance Platform can and will inform and support this discussion.

What do you think? Does this capture the distinction between programme evaluation and performance measurement – or is there a lot more to it? Is your organisation measuring the performance of its transformation? Leave a reply below, or contact me by email.

Continual service improvement – the clue’s in the name

Many organisations struggle to implement effective continual service improvement (CSI). Many purport to deliver CSI but are paying lip-service to the principle and missing the point. The clue is in the name.

Continual

CSI is not a once-a-year workshop that creates an actions list that sits in a dark recess on a shared drive for the next eleven months. It’s a consideration for every day. What isn’t working? What causes your team pain? You can even think of it from a selfish perspective – what bits of my job do I hate and why do I hate them? How can I improve them so I don’t hate them anymore?

Service

What you are providing is a service. It’s not a contract (though it is likely to be contractually bound). We hear more and more about customer experience yet we forget that we, as service professionals, are providing a service to our clients, not a list of activities or outputs. When considering CSI, ask yourself how your service feels to a customer and think about what you can do to make that experience better.

Improvement

Too often, people confuse change with improvement. Just changing something doesn’t make it better. When you are looking at ideas for CSI activity, make sure it is a measurable improvement. Can you articulate how it will make something better and measure the before and after so you’ll know if it had the desired effect?

It doesn’t have to be a tangible improvement like cost, speed or quality; intangible improvements that make a service feel better can be just as valuable, though you still need to measure the improvement (e.g. improved customer satisfaction scores). Either way, you must be able to define the improvement. If you can’t, then it probably isn’t an improvement. It’s just a change.
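The “measure the before and after” discipline above can be sketched as a simple record. This is a hypothetical structure invented for illustration – the class, field names and scores are not from any particular CSI tool – but it captures the rule that a change only counts as an improvement once the metric has actually been measured on both sides and moved the right way.

```python
# Hypothetical sketch: a CSI entry that refuses to call itself an
# "improvement" until the metric is measured before AND after the change.

from dataclasses import dataclass
from typing import Optional

@dataclass
class ImprovementRecord:
    name: str                      # what was changed
    metric: str                    # e.g. "customer satisfaction score"
    baseline: float                # measured before the change
    target: float                  # what "better" looks like
    after: Optional[float] = None  # measured after the change, if at all

    def is_improvement(self) -> bool:
        """True only if we measured afterwards and the metric improved."""
        return self.after is not None and self.after > self.baseline

# Invented example: a change with no "after" measurement is just a change
rec = ImprovementRecord("self-service password reset",
                        "customer satisfaction score",
                        baseline=3.4, target=4.0)
print(rec.is_improvement())  # no after-measurement yet, so not an improvement

rec.after = 3.9  # measured once the change has bedded in
print(rec.is_improvement())  # the metric moved up, so now it counts
```

Forcing every CSI log entry to carry a baseline and an after-measurement is one way to keep changes from masquerading as improvements.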

So, keeping the name in mind, why not dust down that CSI process and tear up that year-old CSI log?

Start afresh and enjoy the opportunity to be truly creative.