Metrics, Strategy and Getting More Satisfaction

Over the past few months I’ve been attempting to isolate metrics that best inform the context models I use for content strategy projects.

I’ve made this attempt because clients have been asking our community to quantify the value that content strategy is bringing to their annual spend. It’s also one of the things I hear new content strategists (both the independents and agency folks) asking about. We all know our work is important, but justifying it to clients is how we continue to show relevance.

Measurement and Optimization Cycle for Contextually Relevant Content Strategy

Not surprisingly, there is no measurement plan that will prove without doubt that content strategy is responsible for a site or business’ success, but based on the implementation of a few different measurement plans, I think it’s safe to say that content strategists can lean on at least four types of metrics to accurately demonstrate the fruits of their labor. They are:

1. Measures of user perception (satisfaction)
2. Task completion (user defined)
3. Measures of key business objectives (traditional metrics measurement)
4. Post visit behavior

The combination and examination of these four data sources is not only valuable; it’s crucial to the optimization of a sound communication strategy.

We need what I typically refer to as “perception measures” to show the true value of content strategy because success isn’t as simple as providing insights into what drives improvement in business behaviors. As I’ve noted several times in past posts, content strategy has to do heavier lifting by adding contextual relevance to business goals, which equates to informing the creation of content that helps improve the bottom line AND satisfy user tasks. Task completion becomes crucial because it directly correlates to something even more valuable than quarterly gains: loyalty.

Beyond The Bottom Line

Most companies have well-established business success metrics for their websites and measure them consistently, but few measure the quality of the site experience as a separate and distinct concept. That’s a mistake, because it’s in the qualitative measures where you can make a more informed decision on whether your content strategy is providing any real return on investment.

Without dedicated perception metrics, it’s nearly impossible to determine whether an experience actually got better or how changes in a content strategy influenced the site’s impact on business performance.

Perception metrics should reveal which aspects of the experience customers aren’t happy with and what prevents visitors from accomplishing their tasks. While we don’t get direct insight into what the exact issues and solutions might be, perception metrics shed light on areas where content strategists are likely to provide value, and we can also postulate how this data correlates with shifts in our KPIs.

The Value of Perception

Site perception (satisfaction score) in its most basic form is a user’s critique of the overall quality of the site’s content. So it follows that a clear understanding of perception will provide us with the best qualitative data needed to adjust our site/content/brand to be more contextually relevant for users.

These metrics become crucial to separate from KPI metrics, because while business metrics help us to understand current market conditions and how to optimize for the current demand, it’s perception that gives us a window into long-term brand health, loyalty and consideration. As someone who always attempts to tie his work to the bottom line, I always enjoy seeing lift in KPIs, but tend to concern myself more with the qualitative measures. It’s the latter that ultimately keeps businesses, nonprofits, etc. in business.

What Should Be Measured?

There are a variety of tools out there that can assist in gauging user perception of and satisfaction with a website. Intercept surveys, panels and user interviews are most common, but whether you’re using a vendor-based solution or a do-it-yourself approach, a good content strategist or analyst must pose the right survey questions that gather:

Satisfaction as it relates to the overall site experience
Gathering overall site experience satisfaction might seem like the no-brainer when measuring perception, but I’m consistently shocked to participate in discussions with clients and learn that they’re doing no post-visit surveying, or have no idea how a site is performing beyond the old standbys. Time on site doesn’t necessarily equal engagement, and I’d argue that nine times out of ten it equates to confusion. Understanding a user’s general feeling about the site, its navigation and how they’re left “feeling” after they’ve experienced it is huge.

Satisfaction as it relates to task completion
It’s a little-known fact that people use websites to do stuff and complete some kind of task. This might seem like a novel concept, but it seems to escape a lot of designers, content creators and content managers that users are arriving at our sites with questions that need answers, causes that need effects and darkness that needs light. Is your site giving them all the information they need to leave feeling satisfied? Are things organized in a logical fashion? Are labels correct? Do users expect to find content that is missing? Is there too much ‘window dressing’ preventing the completion of tasks?

Satisfaction as it relates to content quality
Why don’t more researchers ask if people think the content is shit? Marketers especially (and I’m speaking as a marketer, remember) are terrible culprits here. It can’t be the creative. I’m a copywriter! I’ve got an Effie! Who the Effie Cares? Is the content written using the user’s common phrases and language, or your client’s? Does the content leave the user with more questions, or does it provide a clear path to deeper engagement where applicable?
The quality and task completion measures should always be joined in your reporting documents, because more often than not, if you have a quality problem, it will cause problems with task completion and ultimately, overall perception.

Satisfaction as it relates to “other” factors
I typically dislike “other” categories, but it really is the best way to sum up what we’d like to understand about a user’s post-visit experience. It’s in surveying these behaviors that we can better understand what users do with your content AFTER they visit your site and what their intent is in using another source of content to complete their tasks. This is especially useful for e-commerce, higher education or non-profits. If they added donations or completed parts of an application, did they return at a later time? Did they find another experience that was more satisfying? If so, why?

Keep It Simple, Then Evolve

Basically, it boils down to the questions you ask your users. I’ve long said content strategy needs to channel its inner anthropologist to better understand our users. Taking quarterly stock of site satisfaction and perception is just one way we can all start to better understand our users’ unique needs and tasks. It doesn’t take a lot to get started. Simple surveys that take less than five minutes to complete are the most appropriate ways to get an early read on user satisfaction.

Once a baseline is established, kick your governance and optimization plan into high gear and measure, measure, measure some more. Getting more satisfaction is more than swapping out the creative. Our field is becoming increasingly scientific and understanding these basic user perception metrics is the first step in developing stronger use cases for our content strategies.

Wanna talk about it or start sharing some testing methodologies? Comments are below … let’s start the conversation.

Context in Content Strategy: Ambient Data

Context in Content Strategy: Ambient Data is the final part of a series of posts discussing the need to account for context in the practice of content strategy. Did you miss the first four?


Though ambient factors sometimes fall into the information we gather and analyze when preparing for Personal-Behavioral Context, they call for some special attention when planning content.

Why do they call for this specialized treatment? One need not look too far past the photo above for the answer. Laptop user? Sitting up. iPad user? Leaning back.

A user’s posture matters to us, partly, in that it provides insight into her cognitive capacity for learning, appetite for content and preferences on the way the information is presented and designed. And while we have no way of always knowing exactly what posture a user may or may not be in, we can start to make some assumptions on how to present information based on the user interface options available on the device of her choosing.

For example, the leaned back user might have a higher threshold and appetite for elegant information design with a mix of media. That means the content should play nicely with the pinches, pulls and taps that come with a tablet device. Conversely, someone sitting up at a desktop PC or laptop might find relative linking and taxonomic structures to be more valuable because they’re alert, upright and (possibly) research oriented. I’d argue that we can even make the assertion that the user’s personal context and cognitive capacity would be different if the MacBook user were on a PC (but that’s a whole other post and gets super granular). Mobile users are an entirely different breed. They’re leaned forward.

Now, all this concern with lean forward, lean back and upright does not necessarily mean content strategists have to become Jane Goodall and study users the way she studies the Chimpanzees of Gombe. We’d never get a site live if we attempted to be that deep or narrowcasted. Still, posture and the way we learn differently on the various devices that access the web do call for our attention and thought … because the same content won’t be interpreted or learned in the same way across different devices. Our context changes because the things we use and our own physical actions change with each new gizmo.

If we’re calling for different content, it requires a different template or stylesheet for display, and that’s entirely possible thanks to ever-evolving code. This type of design (Responsive Web Design) is currently an area of great interest and a lot of debate, so it’s worth looking into if you’re not familiar with it. I won’t dig into all of the different ways content should or can be presented by different devices here. Special Note: Responsive Web Design can also be used to try to force context onto a site that was initially meant for viewing at a specific resolution. Without the additional work of a content strategy specific to the channel it’s attempting to fit, it’ll be pretty much useless. My only wish is that you start to pay closer attention to different devices when you’re creating your content strategies.
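
If it helps to make that concrete, here’s a minimal TypeScript sketch of how a front end might ask the browser which posture it’s probably serving, using the standard matchMedia API. The breakpoints, mode names and queries are my own assumptions for the sake of illustration, not a prescription from any framework.

```typescript
// Minimal sketch: infer a probable "posture" from screen size and pointer type.
// Breakpoints and mode names are illustrative assumptions only.
type PresentationMode = 'lean-forward' | 'lean-back' | 'upright';

function detectPresentationMode(): PresentationMode {
  // Small screens: likely a phone held close; short, task-focused content.
  if (window.matchMedia('(max-width: 480px)').matches) {
    return 'lean-forward';
  }
  // Mid-size touch screens: likely a tablet; richer media, bigger tap targets.
  if (window.matchMedia('(max-width: 1024px) and (pointer: coarse)').matches) {
    return 'lean-back';
  }
  // Everything else: desktop or laptop; deeper linking and longer copy.
  return 'upright';
}
```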

That said, device type is just one of the many ambient factors a content strategist can focus on.

Defining “Ambient”

“Ambient” by definition, is an adjective meaning:
1. of the surrounding area or environment or surrounding on all sides.
2. completely surrounding; encompassing: the ambient air.

The traditional definition starts to scratch the itch, but for content strategy purposes, we’ll refer to “ambient data” as any factor of a user’s surrounding environment that could influence their understanding of our content. Some of those factors are included in the absolutely non-comprehensive list below, and a rough code sketch of the browser-detectable ones follows it:

• Time
• Connection type
• Geo-Location
• Browser
• Access Device (desktop, laptop, mobile, tablet, etc.)
• Weather Conditions
• Language Settings
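
To ground the list a little, here’s a rough TypeScript sketch of the subset of these signals a browser will hand over without any special tooling. The structure and names are my own assumptions for illustration; weather and precise geo-location need outside services or permission prompts and are only noted in comments.

```typescript
// Rough sketch of the ambient signals available directly in the browser.
// The interface and field names are assumptions, not a standard.
interface AmbientData {
  localHour: number;     // time of access, by the user's local clock
  language: string;      // language settings
  online: boolean;       // crude stand-in for connection availability
  viewportWidth: number; // proxy for access device class
  userAgent: string;     // browser and device hints
}

function collectAmbientData(): AmbientData {
  return {
    localHour: new Date().getHours(),
    language: navigator.language,
    online: navigator.onLine,
    viewportWidth: window.innerWidth,
    userAgent: navigator.userAgent,
    // Geo-location would come from navigator.geolocation.getCurrentPosition(),
    // which is asynchronous and permission-gated, so it isn't collected here.
    // Weather would require a third-party service keyed off that location.
  };
}
```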

Publishers have already started to play with elements of time. In November of 2009 Esquire published an augmented reality issue. One of their regular features, Funny Joke Told By A Beautiful Woman, featured three different jokes all told by Gillian Jacobs. One was in the physical print edition while the second and third were accessed through the augmented reality application. The third joke could only be accessed after midnight (when it became a ‘dirty’ joke told by a beautiful woman). You can learn more about it here.

It’s a bit of a novelty in this application, but it’s definitely something to think about if you’re selling a product or providing a service that might require different content depending on the time of access. Similarly, weather could be a key factor that influences the way you present content. Imagine a nursery’s content shifting with the seasons or growing periods, or reconfiguring around snow removal services during the winter months. Special mobile templates could allow for access to what’s most important to their customers based on events like … snowpocalypse.
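
If you’d like to see that nursery idea in code, here’s a tiny TypeScript sketch in which the month of access selects a content variant. Everything in it, the variant names included, is a hypothetical stand-in for whatever templates or content fragments the actual CMS supports.

```typescript
// Illustrative only: pick a seasonal content variant by month of access.
// Variant names and month ranges are assumptions, not real templates.
type SeasonalVariant = 'planting' | 'growing' | 'harvest' | 'snow-removal';

function seasonalVariant(date: Date = new Date()): SeasonalVariant {
  const month = date.getMonth(); // 0 = January
  if (month >= 2 && month <= 4) return 'planting';  // March through May
  if (month >= 5 && month <= 7) return 'growing';   // June through August
  if (month >= 8 && month <= 9) return 'harvest';   // September and October
  return 'snow-removal';                            // November through February
}
```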

Because ambient factors are a small piece of the bigger contextual puzzle, I won’t dive any deeper into all of the factors. Sometimes none of the factors will fit a content strategy project, while other projects may call for several. Point being, starting to think about ambient factors will only become more important as demands for more content come rolling in.

As the wranglers of content and the advocates for its brilliance, content strategists are responsible for understanding which ambient factors make the most sense for their projects. Not every client will be able to fund or be ready to cope with accounting for all of these factors when all they thought they needed was a site redesign. Start by offering a client small bites. Hold their hands while they take baby steps. Dip one toe in the water at a time and prove it through analytics and engagement.

Closing Argument For Context

Since this is the last post in this series, I wanted to take a moment to review context. Though all of the charts, big words and seemingly endless calls for you to research your users may seem somewhat taxing, they’re incredibly important. Above all, content strategy is about making content better. Better content achieves business goals and user goals, and has substance above everything else. Context is what gives you that substance. It’s not a silver bullet, but it is the secret sauce that makes content engaging (or influential) enough to make your users give a damn.

Thanks for hanging in there with me through these long posts and for the notes, comments and tweets that have kept me going and helped me to breathe some much needed life and energy back into this dormant Web presence. For the first time in a long time, I’m really enjoying writing again and all of you have a lot to do with that. Cheers, and thanks for reading the series.

Photo by Alui0000 and used under Creative Commons License.

Context in Content Strategy: Situational-Behavioral Context

Context in Content Strategy: Situational-Behavioral Context is the fourth in a series of five blog posts discussing the need to account for context in the practice of content strategy. Did you miss the first three?


When we fuse user behaviors with a situation for content, we have the basis for contextual content strategy, or Situational-Behavioral Context. And when we have Situational-Behavioral Context, we can plug the data from all of our hard work into content scenarios.

Content scenarios can help a content strategist and UX pro in a lot of ways. They can be the basis for content filtering, should the CMS or Cascading Style Sheets be sophisticated enough to handle the requests. They can help to define editorial guidelines for writers producing content for specific user personas facing unique situations, and can serve as a guidepost for governing content as it nears the end of its lifecycle (i.e., does the content scenario still apply to our audience and its needs? What do we need to do to make it audience-appropriate?).

At their most basic level, content scenarios are a lot like the “If-Then” type of programming you’ll most likely find in Java. For example, IF we learn that our users are faced with a certain scenario and that they have a given set of needs, THEN the content scenarios should deliver only the content that corresponds to true factors. The scenario also provides a secondary path when an “if” clause evaluates to false. Perhaps the users who match the true factors get a highly customized experience, while those who don’t get a more generic experience.
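
Here’s how that IF/THEN framing might look as a short TypeScript sketch (the analogy above names Java, but any language with a conditional works). The types, field names and the notion of content IDs are assumptions I’m using for illustration; they aren’t part of any actual CMS.

```typescript
// Minimal sketch of a content scenario as an IF/THEN rule: when a user's
// known situation satisfies the "if" clause, serve the tailored content;
// otherwise fall back to the generic experience. All names are illustrative.
interface UserSituation {
  persona: string;  // the persona the scenario was written for
  tasks: string[];  // the tasks we believe this user is trying to complete
}

interface ContentScenario {
  appliesTo: (situation: UserSituation) => boolean;  // the IF clause
  tailoredContentIds: string[];                       // the THEN branch
  fallbackContentIds: string[];                       // the generic path
}

function selectContent(scenario: ContentScenario, situation: UserSituation): string[] {
  return scenario.appliesTo(situation)
    ? scenario.tailoredContentIds
    : scenario.fallbackContentIds;
}
```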

So, you’re probably wondering what these templates look like and how we deploy them. Let’s dig right in, shall we?

Below, you’ll find a rough version of what I provide to clients when we’re working through builds that require a lot of content to suit different users. As you can see, our old friend Kyle Fisher is back and we’ve framed up some of the things he needs to find that elusive Drake Motors SUV.

At the top of the template, we’ve narrated Kyle’s story by reviewing one of the two situations we created for him last post. In doing so, we’ve brought up a few of the individual NEEDS that applied to his SITUATION that we outlined in previous discussions on Personal-Behavioral Context and Personal-Situational Context.

Stating Kyle’s situation directly at the top of the template gives the context for the content requirements that follow.

Something that deserves special mention is the Goal/Success Metric area directly beneath the content scenario. Since client goals are important and are included alongside goals for the site, the editorial tone of our content and the way it’s framed up is partially derived from this area. I always find it incredibly valuable to restate site goals across the majority of my documentation and in any deliverable that goes to the client. It keeps writers aware of the needs of the site and reminds the client that we always have success metrics on the brain as we develop and curate content for the site.

In this scenario, we want Kyle to request a vehicle quote, build his own version of an SUV or schedule a test drive at a local dealer, reminding us to include relevant paths and entry points to these areas and maintain a sales-oriented tone whenever appropriate while continuing to address the unique needs of his situation.

Below the goal statement, we find relevant navigation tabs or screens listed across the top of the table and the content types down the side. For this particular execution I’ve only listed two types of content, main and supporting, but it’s entirely possible there could be additional types depending on the size, scope and situation requiring it.

In each cell, we articulate the specific content needed for each page, which should follow directly from the Personal-Situational Context exploration we’ve completed. The scenario documents are expandable and may include entirely different fields depending on the device the content will be accessed on. For example, the fields would be VERY different for mobile or gaming browsers.
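
For anyone who prefers to see the template’s shape as data rather than as a table, here’s one possible TypeScript sketch. It isn’t the deliverable itself, just my assumption of how the rows, columns and cells described above could be captured in a structured, machine-readable form.

```typescript
// Hypothetical structure for the scenario template described above:
// screens across the top, content types down the side, and a content
// requirement for each pairing. Field names are assumptions.
interface ContentScenarioTemplate {
  situation: string;          // the narrated situation at the top
  goalSuccessMetric: string;  // e.g. "request a vehicle quote"
  screens: string[];          // navigation tabs or screens (the columns)
  contentTypes: string[];     // e.g. ["main", "supporting"] (the rows)
  // cells[contentType][screen] holds the content requirement for that cell
  cells: Record<string, Record<string, string>>;
}
```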

All that said, these templates are only as useful as you make them. They’re brilliant for A/B Testing, refining content or for creating new work that’s executing against the same or similar situations, but won’t be useful if your site designers don’t account for very specific content needs. This is why the communication between builders and strategists is so crucial. If the work can be done upfront, situational-behavioral context can be designed into the architecture of the site, allowing for highly custom content delivery.

Content scenarios should be modified based on evolving user habits. They should be held up against the success metrics and analytics dashboard to evaluate whether they’re still relevant. They should be modified or added to if you find that content that wasn’t originally in the scenario begins contributing to the success metrics or goals. They can also be incredibly helpful when you pair them with Responsive Web Design and factor in other ambient factors (the final post in this crazy series).

Bottom line? They’re helpful little buggers. We’ll get into deploying them for ambient situations next week, which leaves me with one housekeeping note and plea to content strategists everywhere. Seriously folks, if you haven’t heard about the work being done in responsive web design, it’s going to blow the lid off all things digital – I promise you. In 2010, we started to get a look at the true power of HTML 5 and CSS3 and how Responsive Web Design can deliver truly custom experiences that vary by screen.

I urge all content strategists to do their homework on this stuff. There are some really smart folks writing about the evolving digital space right now, and it makes perfect sense for content strategy to help pave the way, so long as we’re thinking of contextually relevant strategies that vary by device and situation. The screens are many and people will look at content in VERY different ways on each of them. I contend it’s our job to help sort out the content needs that will undoubtedly be left behind. Let’s wrap this thing up next week!