Industry standards for measuring websites

So what’s best practice for measuring the impact of websites – especially public sector sites?

I’ve recently been doing some research about useful published standards for measuring the effectiveness of websites, particularly those in the public sector that don’t have commercial drivers.

What gets published

Many public sector organisations, in the spirit of transparency, publish data about the use of their sites. But they tend to be things that are relatively easily measured, rather than being derived from organisational aims.

The United States federal government’s Digital Analytics Program (DAP) offers advanced, easy web analytics to federal agencies. It provides dashboards for the easily reportable:

A screenshot of the analytics USA.gov dashboard, showing users on government sites, vertical bar chart of visits and horizontal bar charts of top pages visited, devices, browsers and operating systems

Analytics USA.gov dashboard

To be fair, the site also provides Digital Metrics Guidance and Best Practices:

Digital metrics are critical for measuring, analyzing, and reporting on the effectiveness of your Web, mobile, social media, and other digital channels. Every agency should have a metrics strategy to measure performance, customer satisfaction, and engagement, and use the data to make continuous improvements to serve its customers.

Again, it’s a list of what to measure rather than a rationale for why, but it goes into some detail on web metrics and customer satisfaction and is worth a read.

The French Republic’s corporate site also publishes similar, easily reportable metrics – traffic, most popular themes, top pages. The UK government’s GOV.UK site again publishes similar metrics derived from digital analytics, but extends this to the volume of feedback, the most commented pages, page load time and availability.

Excerpt from the GOV.UK dashboard, showing distribution of device use (desktop, mobile and tablet); volume of feedback comments and most commented pages.

GOV.UK site dashboard

KPIs rooted in business objectives/user needs

Whilst some of these published metrics – such as page load time and availability – are clearly important KPIs, how many actually help product owners prioritise improvements, or demonstrate benefits realisation and return on investment to stakeholders?

Some might be useful proxies (volumes of feedback) or useful to monitor for design decisions (distribution of sessions across different types of device and browser). Others are really not much more than vanity metrics.

So how do we go about defining meaningful KPIs?

A hierarchy of measures and KPIs

Stacey Barr, writing in the KPI Library, suggests a four-level Hierarchy of measures and KPIs:

Level 1: success & sustainability KPIs These are the results that are implied by your vision, mission and ultimate outcomes for your stakeholders (i.e. customers, shareholders/owners, partners, communities, employees). Examples of success & sustainability metrics might include profit, market value of your business and customer loyalty.

Level 2: strategic KPIs The measures that monitor your whole-business strategic objectives or goals are the next level in the measure hierarchy. These measures track the results implied by your business’s current strategic direction. They basically describe what the organisation is going to be like in the next 2 to 5 years. Examples of strategic measures might include return on investments, market share, revenue and customer churn.

Level 3: tactical (or process output) KPIs Tactical objectives or goals are derived from your core, end-to-end processes. It is these processes that have the significant impact on the business’s ability to achieve its success & sustainability results, and its strategic results. Examples of tactical measures might include product development cycle time, new leads, product sales, customer satisfaction (with specific products or services), lost time injuries, on-time delivery to customers.

Level 4: operational KPIs The results implied by operational objectives or goals or specific activities are monitored by operational performance measures. They usually track the root causes of tactical performance results. They are the drivers of whole-process results and are where resources are allocated to improve process performance and ultimately improve organisational success and sustainability. Examples of operational measures might include sales conversion rate, rework, near-miss safety incidents, inventory turn.
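To make the bottom of this hierarchy concrete, here’s a minimal Python sketch of how an operational KPI such as sales conversion rate might be derived from raw analytics data. The session records and field names are invented for illustration, not drawn from any particular analytics tool:

```python
# Hypothetical session records, as might be exported from a web
# analytics tool; the field names here are invented for illustration.
sessions = [
    {"visitor_id": "a", "completed_goal": True},
    {"visitor_id": "b", "completed_goal": False},
    {"visitor_id": "c", "completed_goal": False},
    {"visitor_id": "d", "completed_goal": True},
]

def conversion_rate(sessions):
    """Operational KPI (level 4): share of sessions completing the goal."""
    completed = sum(1 for s in sessions if s["completed_goal"])
    return completed / len(sessions)

print(f"Conversion rate: {conversion_rate(sessions):.0%}")  # Conversion rate: 50%
```

The point is not the arithmetic, which is trivial, but the traceability: an operational number like this only earns its place on a dashboard if it can be linked upwards to a tactical or strategic result.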

A focus on website KPIs and metrics

With websites and other digital products, there is the opportunity to collect a great deal of data once digital analytics is implemented. But, as we’ve seen above, it’s easy to be seduced by the default metrics – to measure and report on these rather than on what’s important to your organisation.

Fortunately, there is some excellent guidance available – here are some approaches I find especially useful.

KPIs from the top

Fitting into the hierarchical model, Jim Sterne suggests some digital analytics ‘KPIs from the top’:

  • Raise Revenue
  • Lower Costs
  • Improve Customer Satisfaction
  • Introduce a New Competency

‘Alternatively; make more, spend less, make people happier, get it done more efficiently… …Next? Well, we’re in Marketing so the answer is a classic customer lifecycle:’

  • Raise Awareness
  • Improve Affinity
  • Inspire Interaction
  • Generate Sales
  • Drive Endorsements

‘Everything else is a proxy’:

  • Email opens
  • Display ad clicks
  • Brand search
  • Non-branded search
  • Visits
  • Depth and duration of visit
  • and (dare I say it?) Engagement (Note – I’d argue that public sector sites often have marketing roles too).

Measurement plans

Drilling down to more detail, it’s best practice to develop a measurement plan.

Julian Erbsloeh has shared Fresh Egg’s process for creating a measurement plan, which is very accessible and takes you from working out KPIs through to implementing the tracking you need and deciding on reporting.

  • Step 1 – Define your objectives and key performance indicators (KPIs)
  • Step 2 – Consider data segmentation requirements and set targets
  • Step 3 – Create an implementation plan
  • Step 4 – Define the format and frequency for reporting
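The steps above can be sketched as data: a measurement plan is, at heart, a mapping from objectives to KPIs, targets, segments and reporting decisions. Here’s a hedged Python illustration – every objective, KPI name and figure is invented, and a real plan would of course live in a document or spreadsheet rather than code:

```python
# A measurement plan expressed as data, following the steps above:
# objectives and KPIs (step 1), segments and targets (step 2), and
# reporting decisions (step 4). All names and figures are invented.
measurement_plan = {
    "objective": "Help users complete the application online",
    "kpis": [
        {"name": "online completion rate", "target": 0.80},
        {"name": "user satisfaction score", "target": 0.85},
    ],
    "segments": ["device type", "new vs returning visitors"],
    "reporting": {"format": "dashboard", "frequency": "weekly"},
}

def kpis_below_target(plan, observed):
    """Flag KPIs whose observed value falls short of its target."""
    return [kpi["name"] for kpi in plan["kpis"]
            if observed.get(kpi["name"], 0.0) < kpi["target"]]

observed = {"online completion rate": 0.72, "user satisfaction score": 0.90}
print(kpis_below_target(measurement_plan, observed))  # ['online completion rate']
```

Writing the plan down in a structured form like this makes step 3 – the implementation plan – much easier, because each KPI declares exactly what needs tracking.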

Andrew Kucheriavy, founder of Intechnic, reminds us to set SMART website goals to reach business objectives: Specific, Measurable, Attainable, Relevant and Timely.

Excerpt from a table showing two of the criteria – Specific and Measurable – with corresponding objectives.
SMART website goals

And in What to Measure and How to Measure It, Kucheriavy also provides some useful example metrics:

Excerpt from a table showing website goals alongside examples of what to measure and how to measure it.
Sample metrics

I particularly like these examples as they demonstrate a segmentation of KPIs for different stakeholders or parts of the organisation, and the recognition that many metrics will be derived from sources other than digital analytics – for example, surveys or financial data.

Further examples include Signal Inc’s Creating a Measurement Plan, which outlines five steps:

  • Step 1: Defining Your Objectives
  • Step 2: Goals and KPIs
  • Step 3: Measurement
  • Step 4: Segments
  • Step 5: Implementation

Analytics Demystified’s 3-legged Stool Of Effective Analytics: Plan, Measure, Analyze makes the powerful distinction between analysis and reporting:

“There is a fundamental flaw in any approach to using data that attempts to bundle scheduled reporting with analysis. It forces efforts to find ‘actionable insights’ in a context where there may very well be none. And, it perpetuates an assumption that it’s simply a matter of pointing an analyst at data and waiting for him/her to find insights and make recommendations.
…Using data to effectively inform decisions is a collaborative effort. It needs to start early (planning), it needs to have clear, concise performance measurement (KPI-driven dashboards), and it needs to have flexibility to drive the timing and approach of analyses that deliver meaningful results.”

Content marketing

The Content Marketing Institute (CMI) defines content marketing as:

“… a strategic marketing approach focused on creating and distributing valuable, relevant, and consistent content to attract and retain a clearly defined audience — and, ultimately, to drive profitable customer action.”

Although content marketing has different aims and goals compared to publishing, I think public sector website content objectives (if they are properly identified) are often aligned with a content marketing perspective. Not profitable customer action, perhaps, but user action nonetheless.

In A Field Guide to the 4 Types of Content Marketing Metrics, the CMI identifies four types of metrics:

  1. Consumption metrics
  2. Sharing metrics
  3. Lead generation metrics
  4. Sales metrics

But it again stresses the need to tie metrics back to business objectives.
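As a quick illustration of the CMI taxonomy, here’s a small Python sketch mapping example measures to the four metric types. The example measures are my own hypothetical choices for a public sector site, not CMI’s:

```python
# CMI's four types of content marketing metrics, mapped to hypothetical
# example measures a public sector site might track.
METRIC_TYPES = {
    "consumption": ["page views", "unique visitors", "downloads"],
    "sharing": ["social shares", "email forwards", "inbound links"],
    "lead generation": ["form completions", "newsletter sign-ups"],
    "sales": ["online transactions", "assisted offline outcomes"],
}

def classify(metric):
    """Return the metric type a given measure belongs to, if known."""
    for metric_type, examples in METRIC_TYPES.items():
        if metric in examples:
            return metric_type
    return None

print(classify("downloads"))  # consumption
```

Classifying your measures this way is a useful audit: if everything you report sits in the consumption bucket, you’re probably measuring what’s easy rather than what ties back to objectives.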

Back to the public sector

Although, as we’ve seen, public sector websites tend not to publish much beyond the easily reportable, there’s increasing guidance about how to approach performance measurement.

The UK Government Digital Service’s Service Manual has a section on Measuring success, including How to set performance metrics for your service.

Analysts in other governments – from USA.gov and GobiernoUSA.gov to New Zealand – are also blogging about their work in developing relevant and actionable measures.

In conclusion

There isn’t a magic list of things to measure, and that’s as it should be – because working out what to measure should be a team sport, involving team members and stakeholders, and grounded in organisational or team objectives. There is, however, lots of guidance on how to work out what you want to measure and how to go about measuring it.

An analytical response to ‘Facing things as they are’

Will Myddleton recently posted about Facing things as they are — as a fundamental of user research and an approach to life. As an analyst and as a meditator who works (sometimes successfully) to appreciate the moment, I really valued what Will has written. I said I’d try to respond from an analyst’s perspective.

As analysts working in multi-disciplinary teams, we have a number of tasks:

  1. Gather and analyse existing evidence
  2. Identify a testable hypothesis
  3. Make sure the data we need is collected and is appropriate
  4. Do the analysis
  5. Tell an actionable story

So this is a very parallel, indeed intermingled, journey to that described by Will.

User research is about starting by facing things as they are.

And so is analysis. It’s why it’s so important to gather existing evidence about the issue you’re working on, using, for example, digital analytics from legacy products, call centre data, text analysis, search analytics. With this data, you can build up a picture of the current landscape.

When the quantitative data and the user research data are looked at together, we can get a richer understanding of ‘how things are’. Can we see findings from user research replicated at scale in the analytics? Can some behaviour that we see in the analytics be explained by the user research?

Then the team will be in a better place to determine what to work on, and importantly, to develop a hypothesis about how that work will make things better.

So the work gets done. But did it have the intended effect?

Again we need to face things — do the user research and analytics data tell us that the work we’ve done has made the impact it intended?

Will continues:

Facing things as they are often leads to a dawning realisation that the things we are working on are the wrong things.

Yes, the data may tell us that – or something less dramatic: ‘the thing we worked on didn’t have the impact we expected’. Of course, we may also have had a positive impact.

But again we’re challenged to face things as they are — to use the evidence we have and the learning we’ve gained to re-prioritise, change direction or build on our success.