Metrics, Strategy and Getting More Satisfaction

Over the past few months I’ve been attempting to isolate metrics that best inform the context models I use for content strategy projects.

I’ve made this attempt because clients have been asking our community to quantify the value content strategy delivers against their annual spend. It’s also one of the things I hear new content strategists (both independents and agency folks) asking about. We all know our work is important, but justifying it to clients is how we continue to show relevance.

[Figure: Measurement and optimization cycle for contextually relevant content strategy]

Not surprisingly, there is no measurement plan that will prove beyond doubt that content strategy is responsible for a site’s or business’s success. But having implemented a few different measurement plans, I think it’s safe to say that content strategists can lean on at least four types of metrics to accurately demonstrate the fruits of their labor. They are:

1. Measures of user perception (satisfaction)
2. Task completion (user defined)
3. Measures of key business objectives (traditional metrics measurement)
4. Post-visit behavior

The combination and examination of these four data sources is not only valuable; it’s crucial to the optimization of a sound communication strategy.
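To make that combination concrete, here’s a minimal sketch in Python of what lining the four data sources up quarter by quarter might look like. The numbers, metric names and scales are all invented for illustration; the point is the side-by-side read, not the specific tooling.

```python
# A minimal sketch (hypothetical numbers, not real client data) of pulling the
# four metric types into one quarterly view so shifts can be read side by side.
from statistics import correlation  # requires Python 3.10+

quarters = ["Q1", "Q2", "Q3", "Q4"]

# 1. Perception: mean satisfaction from post-visit surveys (1-10 scale)
satisfaction = [6.8, 7.1, 7.6, 8.0]
# 2. Task completion: share of respondents who say they finished their task
task_completion = [0.61, 0.64, 0.71, 0.74]
# 3. Business objective: e.g. conversion rate from your analytics package
conversion = [0.021, 0.022, 0.026, 0.028]
# 4. Post-visit behavior: share of surveyed visitors who returned within 30 days
return_rate = [0.18, 0.19, 0.24, 0.27]

for q, s, t, c, r in zip(quarters, satisfaction, task_completion, conversion, return_rate):
    print(f"{q}: satisfaction={s:.1f}  task completion={t:.0%}  conversion={c:.1%}  returned={r:.0%}")

# A rough check on whether perception moves with the business KPI.
print("satisfaction vs. conversion:", round(correlation(satisfaction, conversion), 2))
```

Even a simple table like this makes it easier to talk about whether perception and the bottom line are moving together, rather than reporting each in isolation.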

We need what I typically refer to as “perception measures” to show the true value of content strategy because success isn’t as simple as providing insights into what drives improvement in business behaviors. As I’ve noted several times in past posts, content strategy has to do heavier lifting by adding contextual relevance to business goals, which means informing the creation of content that helps improve the bottom line AND satisfy user tasks. Task completion becomes crucial because it correlates directly with something even more valuable than quarterly gains: loyalty.

Beyond The Bottom Line

Most companies have well-established business success metrics for their websites and measure them consistently, but few measure the quality of the site experience as a separate and distinct concept. That’s a mistake, because it’s the qualitative measures that let you make a more informed decision about whether your content strategy is providing any real return on investment.

Without dedicated perception metrics, it’s nearly impossible to determine whether an experience actually got better, or whether changes in a content strategy influenced the site’s impact on business performance.

Perception metrics should reveal which aspects of the experience customers aren’t happy with and what prevents visitors from accomplishing their tasks. While they don’t give us direct insight into the exact issues and solutions, perception metrics shed light on areas where content strategists are likely to provide value, and they let us postulate how this data correlates with shifts in our KPIs.

The Value of Perception

Site perception (the satisfaction score) in its most basic form is a user’s critique of the overall quality of the site’s content. So it follows that a clear understanding of perception will provide us with the best qualitative data needed to adjust our site/content/brand to be more contextually relevant for users.

It’s crucial to keep these metrics separate from KPI metrics, because while business metrics help us understand current market conditions and how to optimize for current demand, it’s perception that gives us a window into long-term brand health, loyalty and consideration. As someone who always tries to tie his work to the bottom line, I enjoy seeing lift in KPIs, but I tend to concern myself more with the qualitative measures. It’s the latter that ultimately keep businesses, nonprofits, etc. in business.

What Should Be Measured?

There are a variety of tools out there that can assist in gauging perception of and satisfaction with a website. Intercept surveys, panels and user interviews are the most common, but whether you’re using a vendor-based solution or a do-it-yourself approach, a good content strategist or analyst must pose survey questions that capture:

Satisfaction as it relates to the overall site experience
Overall site experience satisfaction might seem like the no-brainer metric when measuring perception, but I’m consistently shocked to sit in discussions with clients and learn that they do no post-visit surveying and have no idea how a site is performing beyond the old standbys. Time on site doesn’t necessarily equal engagement; I’d argue that nine times out of ten it equates to confusion. Understanding a user’s general feeling about the site, its navigation and how they’re left “feeling” after they’ve experienced it is huge.

Satisfaction as it relates to task completion
It’s a little-known fact that people use websites to do stuff and complete some kind of task. This might seem like a novel concept, but it seems to escape a lot of designers, content creators and content managers that users arrive at our sites with questions that need answers, causes that need effects and darkness that needs light. Is your site giving them all the information they need to leave satisfied? Are things organized in a logical fashion? Are labels correct? Do users expect to find content that is missing? Is there too much ‘window dressing’ preventing the completion of tasks?

Satisfaction as it relates to content quality
Why don’t more researchers ask if people think the content is shit? Marketers especially (and I’m speaking as a marketer, remember) are terrible culprits here. It can’t be the creative. I’m a copywriter! I’ve got an Effie! Who the Effie cares? Is the content written using the user’s common phrases and language, or your client’s? Does the content leave the user with more questions, or does it provide a clear path to deeper engagement where applicable?
The quality and task completion measures should always be joined in your reporting documents, because more often than not, a quality problem will cause problems with task completion and, ultimately, overall perception.

Satisfaction as it relates to “other” factors
I typically dislike “other” categories, but it best sums up what we’d like to understand about a user’s post-visit experience. It’s in surveying these behaviors that we can better understand what users do with your content AFTER they visit your site and why they turn to another source of content to complete their tasks. This is especially useful for e-commerce, higher education or nonprofits. If they added donations or completed parts of an application, did they return at a later time? Did they find another experience that was more satisfying? If so, why?
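If you’re rolling your own survey rather than using a vendor tool, here’s a rough sketch of turning raw answers into the four buckets above. The question IDs, the category mapping and the 1-5 scale are all hypothetical; swap in whatever your survey instrument actually exports.

```python
# A rough sketch of rolling raw survey answers up into the four satisfaction
# buckets discussed above. Question IDs, category mapping and the 1-5 scale
# are hypothetical placeholders.
from collections import defaultdict
from statistics import mean

QUESTION_CATEGORY = {
    "q1_overall_feeling": "overall experience",
    "q2_navigation": "overall experience",
    "q3_finished_task": "task completion",
    "q4_found_answers": "task completion",
    "q5_language_clarity": "content quality",
    "q6_next_step_clear": "content quality",
    "q7_needed_other_site": "other / post-visit",
}

# Each dict is one respondent's answers on a 1-5 scale (made-up sample data).
responses = [
    {"q1_overall_feeling": 4, "q3_finished_task": 5, "q5_language_clarity": 3, "q7_needed_other_site": 2},
    {"q1_overall_feeling": 2, "q2_navigation": 2, "q4_found_answers": 3, "q6_next_step_clear": 2},
]

# Group every answer under its satisfaction category, then average per category.
scores = defaultdict(list)
for respondent in responses:
    for question, answer in respondent.items():
        scores[QUESTION_CATEGORY[question]].append(answer)

for category, answers in scores.items():
    print(f"{category}: {mean(answers):.2f} (n={len(answers)})")
```

Reporting the category averages side by side (with the response counts) keeps the quality, task completion and overall perception numbers joined, as recommended above.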

Keep It Simple, Then Evolve

Basically, it boils down to the questions you ask your users. I’ve long said content strategy needs to channel its inner anthropologist to better understand the people we serve. Taking quarterly stock of site satisfaction and perception is just one way we can all start to better understand our users’ unique needs and tasks. It doesn’t take a lot to get started. Simple surveys that take less than five minutes to complete are the most appropriate way to get an early read on user satisfaction.

Once a baseline is established, kick your governance and optimization plan into high gear and measure, measure, measure some more. Getting more satisfaction is about more than swapping out the creative. Our field is becoming increasingly scientific, and understanding these basic user perception metrics is the first step in developing stronger use cases for our content strategies.
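As one small sketch of what that ongoing measurement could look like, here’s a comparison of a quarter’s category scores against an established baseline, with invented scores and an arbitrary alert threshold:

```python
# A small sketch of the "measure, measure, measure" step: compare the latest
# quarter's category scores against the baseline and flag anything that slipped.
# The scores and the threshold are invented for illustration.
BASELINE = {"overall experience": 7.2, "task completion": 7.0, "content quality": 6.5}
LATEST   = {"overall experience": 7.6, "task completion": 6.4, "content quality": 6.6}
ALERT_DROP = 0.3  # flag drops bigger than this (same 1-10 scale as the scores)

for category, baseline_score in BASELINE.items():
    delta = LATEST[category] - baseline_score
    flag = "  <-- investigate" if delta < -ALERT_DROP else ""
    print(f"{category}: {baseline_score:.1f} -> {LATEST[category]:.1f} ({delta:+.1f}){flag}")
```

However you run it, the habit matters more than the tooling: a standing baseline plus a recurring check is what turns perception data into a governance signal.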

Wanna talk about it or start sharing some testing methodologies? Comments are below … let’s start the conversation.