Unified Customer Experience: Developing The Case For A Winning Product Strategy

January 7, 2020

Few people realize that although we pay for blankets, we are, in fact, buying warmth. That’s a relevant analogy to extrapolate over the customer journey in general. Buyers and users are not the same, we know. But why are innovation and product development traditionally limited to what customers pay for, without enough time spent understanding what they want to buy?
Questions surrounding ‘how to sell’ clearly outnumber ‘what does the customer want’ in most PD discussions (by a ratio of 5.39:1, if you’re going by an ordinary query search). And while no business can afford a portfolio of dissatisfied customers, for players in the technology space, the risk is magnified many times over.

As global output shifts from manufacturing to services, and from tangible goods to high-yielding purchasable experiences, profitability can be correlated with how frequently a business revisits its buyers’ journeys and maps the unified customer experience against them. Research supports this. Now, if UCE works so well and offers such a wide range of benefits, why aren’t all businesses adopting it?

A common obstacle companies report is system integration. How does one successfully migrate from the legacy system without disrupting customer experience along the way? The solution has less to do with technology and logistics, and more with culture.

The resistance to unifying customer experience comes mostly from teams that have worked either in isolation or in competition. Coming together to provide an identically high level of satisfaction will not be easy. This has been witnessed especially in retail outfits undergoing digitization. In many cases, branch/outlet employees were instructed to help their customers migrate to online solutions. But reports of poor onboarding experiences and low conversion soon followed. The online solution was not the problem. The problem was how threatened the employees felt by the wave of automation. To complicate things further, service levels underwent a temporary peak, possibly to further discourage automation.

The second challenge businesses face is data management. Collecting diagnostic or prescriptive data on the customer journey is not the problem. Storing and correctly processing it is. This is particularly problematic if clients use different credentials across different platforms, or if the data cleansing process before transformation has not been thorough enough. Even something as trivial as using initials in a name, or abbreviating it, can produce inaccurate and highly fragmented output. The more apathetic an organization’s approach to customer experience data, the more it will be burdened with processing, storing and securing largely useless datasets—resulting in unnecessary infrastructure costs. For many organizations, the costs alone of bringing their systems under one experience umbrella are daunting enough.
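To see how trivial inconsistencies fragment customer data, consider a minimal sketch in Python. The records and field names are hypothetical; the point is that keying profiles on raw fields multiplies one customer into several, while a simple normalization pass before loading collapses them.

```python
# Hypothetical raw records for the same customer, captured on different platforms.
raw_records = [
    {"name": "John Smith",  "email": "J.Smith@example.com"},
    {"name": "Smith, John", "email": "j.smith@example.com"},
    {"name": "J. Smith",    "email": "j.smith@example.com"},  # initials fragment the record
]

def normalized_email(record):
    """A crude normalization pass: trim whitespace and lowercase the email."""
    return record["email"].strip().lower()

# Naive keying on raw (name, email) treats all three rows as distinct customers:
naive_keys = {(r["name"], r["email"]) for r in raw_records}
print(len(naive_keys))        # 3 "customers"

# Keying on a normalized identifier collapses them into one profile:
normalized_keys = {normalized_email(r) for r in raw_records}
print(len(normalized_keys))   # 1 customer
```

Real-world entity resolution is far more involved (fuzzy matching, survivorship rules), but even this crude pass illustrates why skipping cleansing before transformation leaves an organization storing and securing largely duplicated, useless datasets.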

Enabling Unified Customer Experience

Measuring Right Vs Measuring The Right Thing: Let’s start by asking a question: Who is responsible for a unified customer experience in your organization? (Correct answer: everyone.) Second question: How do you ascertain whether the experience is as standardized as it is supposed to be? In other words, how do you ensure that experience excellence does not have an unhealthy dependence on a specific product, market forces, an exceptionally well-performing employee, or catchy promotions? Does everyone interacting with your products (tangibly, digitally, or both) derive an identical level of satisfaction?

In the services industry, these questions can be harder to answer. Experiences are highly customized (and therefore subjective). They are perishable. They cannot be replicated. How then, does one start with the process of creating a unified customer experience?

One way is to ask the right questions: Internally, we measure performance by Key Performance Indicators (KPIs). Flip that over when addressing external stakeholders. Start by asking: What would definitely make us fail? What would ensure the customer switches over to a competitor? What is the lowest level of satisfaction we can commit to, and still retain our customers?

These are uncomfortable questions. But their answers will tell you what really matters to the customer. Build on these, and you’ve got the CSFs for your product. Critical Success Factors (CSFs) differ from performance indicators in their sense of urgency. A KPI measures performance on a continuum; CSFs are binary, which helps clear the grey area in experience design. They establish thresholds against which products can be measured and enhanced.
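The continuum-versus-binary distinction can be made concrete with a small sketch. The metric names and thresholds below are illustrative assumptions, not industry figures: each CSF is a pass/fail check derived from the "what would make us fail?" questions above.

```python
# KPIs: measured on a continuum (illustrative values).
kpi = {"avg_response_minutes": 42, "csat_score": 3.8}

# CSFs: binary thresholds derived from "what would definitely make us fail?"
# The cutoffs here are hypothetical.
csf_thresholds = {
    "avg_response_minutes": lambda v: v <= 60,  # slower than an hour loses the customer
    "csat_score":           lambda v: v >= 3.5, # below this, customers switch
}

csf_results = {metric: check(kpi[metric]) for metric, check in csf_thresholds.items()}
print(csf_results)                # each CSF is simply True or False
print(all(csf_results.values()))  # the experience passes only if every CSF holds
```

A KPI dashboard invites debate about whether 3.8 is "good enough"; the CSF view removes that grey area, because a single failing threshold flags the product for enhancement.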

Start Small: When it comes to customer experience, consistency—rather than volume—is key.

This is demonstrated in the following example of a product design experience amalgamated from several banks: “Bank XYZ” realizes it needs to increase its consumer deposits by 200% in one year. The bank decides that there are many ways to increase the number of depositors in its territory:
• Open more branches
• Offer mobile banking
• Introduce an app
• Convert its social channels into an extension of its customer service center—posting notifications and public documents, answering queries and so on.
• Add a chatbot and contact form to its website
• Advertise the new banking channels

Confident that it will meet its target of increased deposits this way, the bank implements all the methods above. In other words, at least 7 new touchpoints between the customer and the bank have been added in one year. But was the target achieved?

New branches (aided by advertising) increased footfall and increased the bank’s overall consumer portfolio. But with the tradeoff between liability products and big-ticket customers seeking overdrafts, the deposits target was not met.

Mobile banking increased banking activity but did not lead to a material gain in transactions: dominated by bill payments (which customers were already making via ATMs), the mobile banking solution added to maintenance costs and did not bring in any new customers.

An annual subscription fee discouraged existing customers from downloading the app. Even those who downloaded the app rarely logged in during the year. The bank did, however, collect many data points about customer location, phone activity, contacts, etc.

The social channels did not serve as an extension of the customer service center as expected. On the contrary, poor community management, slow (or no) responses, and poor complaint management resulted in high follower attrition and trolling activity. Decision makers had clearly not factored in customers’ reluctance to share personal account details on social media. Nor had they mapped their existing and prospective customers’ levels of social engagement.

The example above highlights two facts: Unified customer experience does not depend on densely populated touchpoints. Its success depends on existing, recognized touchpoints that deliver consistently, few as they may be. Start small and probe. Build where necessary.

Second: Actual insights into the buyer’s journey come from qualitative assessments. Quantitative data highlights and validates qualitative findings.

Prioritize Monitoring: Existing approaches to product development tend to emphasize the visible, ‘in motion’ parts of the PD process without paying enough attention to the underlying, typically intangible aspects of the buyer journey. They will, for instance, have enough data for a first-level analysis, but not enough to support the subsequent second-level analysis that is doubtless needed for effective product development.

The first-level analysis is exploratory: Which touchpoints worked better than others? By how much? What was the frequency and uniqueness of engagement?

Second-level analyses are descriptive. They probe into specific aspects to gain a better understanding of the buyer journey as a whole: for instance, after receiving figures on the most aggressively targeted touchpoints, a product manager may ask whether traditional and virtual touchpoints are designed too differently to create a unified recognition in the user’s perception. Product teams may then also explore whether the web version of their product is cannibalizing the app, or vice versa.

Being equipped with data on how your customer interacts with one or several touchpoints is important. (And tech products generally do well here.) The problem arises when teams make assumptions about the reasons behind the interaction. Arriving at the right conclusions about a unified experience requires treating data monitoring as an integral part of product management. Yet, as PD delivery teams will admit, data monitoring is usually consigned to crisis mode—to be done only when there’s a problem that developers can’t solve and operations can’t explain.

Sounds like self-sabotage? It is.

Without regularly monitoring customer-touchpoint interaction, product teams fail to keep the right variables on their radar. They end up investing in forms of innovation that may seem exciting from a technical (and investor) POV but mean little to the end-user.

And such cases are all too common in the technology domain: How many of us have been thrilled by the introductory ‘MVP’ pilot of an app, impressed by its simplicity and user-friendliness, only to be disappointed by a series of clunky, unnecessary functionalities in the next version—that slow down our experience, and make the app difficult to navigate? Don’t blame the developer. That’s a design problem.

Embedding unified experience principles into product design serves another purpose: Businesses can practice delayed differentiation without upsetting their product’s core offering. In other words, unified customer experience allows you to build your product on your own terms, on your own resources—not as a reaction to competitor activity. In the case of some consumer goods in particular, everything—from packaging and pricing to promotional practices—seems to be a ‘bounce off’ between manufacturers. The buyer has to choose from a near-identical set of offerings, whereas the user struggles to see what is different between these supposedly competing products.

What problem was the product solving again?
Delayed differentiation offers innovators the rationale with which they can stand their ground in the face of short-lived, ineffective market fads. In other words, they can build on their intrinsic product strengths instead of struggling to underplay the weaknesses of their products (in comparison to competitors). This is only possible, of course, if they’re regularly tapping into customer experience to see what really matters to the user, based on actual user experience.
