Customer Experience – It’s the full journey that matters most

We all know that individual touchpoints with customers matter, but it is the complete, end-to-end experience that really makes the difference – and that is precisely what many companies fail to focus on. The Harvard Business Review covers this in their white paper, “The Truth About Customer Experience” by Alex Rawson, Ewan Duncan and Conor Jones.

The benefits of improving customer journeys include enhanced customer and employee satisfaction, reduced churn, increased revenues, lowered costs, and improved collaboration across the company.

We have found this to be true in working with our insurance clients – an effective omni-channel customer engagement solution streamlines interaction over time and across touchpoints because each customer scripts their own journey.

Read the full white paper

The Customer-Centric Insurance Company: Who is the Customer?

When we talk about Insurance companies moving to a “Customer-Centric” model, we often initiate a dialog around the question: “Who is the Customer?”

The point was driven home for me a few years ago in a conversation with my friend and industry veteran Bill Jenkins, co-founder of Agile Insurance Analytics and a former CIO of several insurance companies. Bill mentioned that for many insurance carriers who write business through Independent Agents, the Agent, not the Policyholder, is the Customer, and the entire business revolves around making it easy for Independent Agents to do business with the carrier. At these carriers, ask any executive in the C-Suite “Who is the Customer?” and you’ll get a unanimous answer: “the Independent Agent.”

When we talk about Customer-Centric, however, we encourage a definition of “Customer” that is not strictly literal. In fact, we recommend looking at any person who “touches” your business, internal or external, and viewing them as if they were a Customer. That means expanding the definition of Customer to include, for instance: Independent Agents, Brokers, Producers, Claims Adjustors, Third-party Administrators, Appraisers, Customer Service Representatives, Underwriters, etc. At Return on Intelligence we use the term Actors for this expanded list of Customers. Actors is not a perfect term, but it helps to broaden the thinking beyond the traditional notion of Customer.

When businesses expand their definition of Customer, creating this broad list of Actors, the discussion starts to move toward a) what their expectation and experience of your business is, b) how you measure it, and c) how you improve their experience over time. Leading-edge technologies facilitate the interactions with these multiple Actors, and the result is that everyone who interacts with your business feels a level of personalization and operational efficiency; over time, their perception of your business as Customer-focused increases. This leads to increased business, longer retention and lower acquisition costs. And that is why “the Customer-Centric Insurance Company” is just “good business”.

The Customer-Centric Insurance Company: Rip & Replace?

Every insurance company has existing technologies that support the business functions of today, such as distribution, customer acquisition and servicing. These technologies may include websites, portals, mobile applications and web-based business applications. When moving towards a Customer-centric model, is it necessary to “rip & replace” existing technology, or is there an alternative approach to implement a solution?

In the majority of cases that we see, we strive to leverage existing infrastructure and to “surround” the existing technology with a platform, and incrementally migrate or retire systems and technologies that don’t fit into the “go forward” technology roadmap. Why? Because our objective is to minimize the disruption to the business, leverage existing technology expenditures, provide a clear migration path to a new Customer-centric architecture while minimizing implementation risk. Platforms are more flexible than custom development or specialized portal solutions because they provide the depth and breadth of functionality to execute a migration to a new Customer-centric environment without demanding that an insurance company “rip & replace” their existing investment.

The Customer-Centric Insurance Company: Build versus Buy

As more and more insurance companies move from a Product-centric to a Customer-centric strategic focus, we’re often asked: is it better to build (i.e., custom develop front office solutions) or to buy (i.e., purchase a best-of-breed platform)?

We typically recommend buying a best-of-breed platform because we see the benefits first hand: solutions tend to be more robust, can be deployed faster, and are easier to maintain and scale.

Conversely, we have seen many insurance companies who develop their own applications end up with IT nightmares: data is inconsistent across applications; portals and point solutions proliferate and the user experience suffers; IT cannot keep up with the demands of the business users; and maintenance costs explode.

In our view, the choice is simple: for leading edge Customer-centric solutions, the choice is to Buy, not Build.

14 Insurance Industry Predictions

Insurance Networking News

Jonathan Kalman, President of Return on Intelligence’s Insurance Solutions Business Unit, offered the following predictions, takeaways and advice at the IASA 2014 Educational Conference and Business Show in Indianapolis last week.

  1. Customer centricity will be the dominant strategy among insurers, and new business processes will need to be designed, implemented and adapted.
  2. Customer expectations will grow exponentially higher, so design from the customer’s perspective and for flexibility.
  3. Risk management methods will shift from operational to financial and transactional, so you’ll need data at your fingertips.
  4. Market volatility will continue unabated, creating pressure on earnings as well as investment opportunities, so insurers will be under a larger microscope.
  5. Fraudulent behavior will evolve and become more sophisticated, affecting all insurance products.
  6. The pace of industry consolidation and M&A will accelerate, so the ability to consolidate systems and processes quickly will need to become a core competency.
  7. Insurers will use analytics at the center of their business strategies, and those that do will outperform their competitors.
  8. Digital channels will dominate all customer acquisition and servicing.
  9. The window of enjoying enhanced margins on a given insurance product will shorten as new technology emerges.
  10. The pace of disintermediation will accelerate (for example, Google is buying a consolidator) so technology and processes must improve.
  11. New entrants will be emboldened as barriers to entry drop, so watch non-traditional companies trying to come into your space.
  12. Niche markets will continue to offer specialized products – you must be able to compete with products not yet invented.
  13. The “Internet of Things” will open doors to new product offerings, so watch Silicon Valley’s investments carefully to stay ahead of risk management.
  14. Insurers will need top talent, so acquire what you can through acquisitions and other familiar sources.

Read original article.

Integrating Types of Data for Customer Centric Applications

It is not news to anyone that information and media have exploded in the last decade, largely as an effect of new technological capabilities and the Internet.  Yet, consistent with past technology trends, most Application Architectures (i.e., how applications are designed and structured) have treated each information type – structured and unstructured; numbers, text, image, audio and video – as an independent concept, with applications that don’t fundamentally integrate the information types.

There are three basic design approaches that yield the information integration needed for the modern Customer Centric application:

  • Front End integration,
  • Back End integration, and
  • Mid-Tier integration.

Modern Customer Centric applications are increasingly called upon to apply situationally optimized combinations of these to bridge the myriad types of information in creating successful systems.

For an example, envision an application for a mobile service professional that makes it possible to look into the Customer database to see what product versions a Customer owns (Structured), linked to the schematics of those devices and videos of repair procedures (Media), referencing posts from other service professionals about problems and resolutions for that device, perhaps keyed to specific elements on the schematic (Unstructured).
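As a purely illustrative sketch of the mid-tier style of integration (all names, stores and data below are hypothetical, not drawn from any specific product), a service layer might join the structured customer database, the media repository and the unstructured forum posts by shared keys:

```python
# Illustrative mid-tier integration: a service layer queries each
# independent store and links the results by shared keys (customer id,
# device id) into one integrated view. All names and data are hypothetical.

STRUCTURED_DB = {  # stands in for the relational customer/product database
    "cust-001": {"name": "Acme Corp", "devices": ["pump-X2"]},
}

MEDIA_STORE = {  # stands in for a digital-asset/content repository
    "pump-X2": {"schematic": "schematics/pump-X2.pdf",
                "repair_videos": ["videos/pump-X2-seal.mp4"]},
}

FORUM_POSTS = [  # stands in for unstructured service-professional posts
    {"device": "pump-X2", "element": "seal-assembly",
     "text": "Seal leaks after 2 years; replace gasket first."},
]

def integrated_view(customer_id):
    """Assemble structured, media and unstructured data for one customer."""
    customer = STRUCTURED_DB[customer_id]
    view = {"customer": customer["name"], "devices": []}
    for device in customer["devices"]:
        view["devices"].append({
            "device": device,
            "media": MEDIA_STORE.get(device, {}),
            "posts": [p for p in FORUM_POSTS if p["device"] == device],
        })
    return view

view = integrated_view("cust-001")
print(view["devices"][0]["posts"][0]["element"])  # seal-assembly
```

The point of the sketch is that none of the three stores changes; the mid-tier service owns the linkage, which is what makes the integrated experience possible without a rip-and-replace of the underlying systems.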

Various data types

The power of integrating the information types radically improves the usefulness and usability of that application, potentially improving customer service and lowering costs.

Creating the new application architectures that embrace integrated information types requires new approaches to information design.  Traditional data analysis, while well honed for structured data environments, is not fully sufficient because of its limitations in describing unstructured information.  Fortunately, the rapidly emerging practice of Semantic Web analysis and modeling appears to be a methodology that encompasses all the information types and facilitates discovery of the linkages across them.  When skilled practitioners, assisted by the rapidly emerging set of tools available, perform Semantic Web analysis across the functional space and its existing information artifacts, the analysis can be used seamlessly within an agile development methodology, yielding benefits to the application design without adding significantly to cost or schedule.
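To make the style of modeling concrete, here is a minimal, purely illustrative sketch of the Semantic Web idea: every information type is described uniformly as subject–predicate–object triples, so links across types become first-class, queryable data. (A real project would use an RDF toolkit and triple store; every identifier below is hypothetical.)

```python
# Minimal triple-based model: structured facts, media references and
# unstructured items are all expressed as (subject, predicate, object)
# statements, so cross-type links can be discovered by pattern matching.
# Purely illustrative; identifiers and predicates are invented for the sketch.

triples = [
    ("pump-X2", "type", "Product"),                    # structured fact
    ("pump-X2", "hasSchematic", "schematics/x2.pdf"),  # link to media
    ("post-17", "type", "ForumPost"),                  # unstructured item
    ("post-17", "discusses", "pump-X2"),               # cross-type link
    ("post-17", "refersToElement", "seal-assembly"),
]

def query(subject=None, predicate=None, obj=None):
    """Return all triples matching the pattern (None acts as a wildcard)."""
    return [t for t in triples
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Discover everything that discusses pump-X2, regardless of information type:
posts_about_pump = query(predicate="discusses", obj="pump-X2")
```

Because the model never privileges one information type over another, adding a new type (say, sensor telemetry) is just more triples, not a schema redesign – which is the property that makes the approach attractive for integrated Customer Centric applications.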

The implementation-neutral information design is only the starting point.  The technologies that support the information types have developed independently – the new application architecture needs to rationalize and embrace these technologies where appropriate. While application architectures have commonly used each of these technologies independently, to deliver the full business benefits enabled by integrating the information types, Architects will need to design to the strengths of each within the context of a shared application architecture, selecting the design approach that fits the business situation (see Sidebar).

There is no single right answer to the question of which design approach is best.  Each approach depends heavily on available skill sets and the desired business outcome.  Weighing the need for integrated information against the potential cost in resources and schedule, along with the state of current information assets and the desire to refresh them, will lead to the best choice for a given situation.  Furthermore, the rapidly changing technology landscape requires building applications that can absorb change in the future.

Despite the potential costs and uncertainty, creating Application Architectures that integrate the information types is the only path to delivering the business benefits of the Customer Centric Applications.

 Andrew Weiss
November 11, 2013

Andrew Weiss is a research and consulting fellow of the Return on Intelligence Research Institute.  He has served as Head of Technology R&D at Fannie Mae, as Chief Architect and COO of two software firms, and as SVP IT Strategy at Bank of America.

Finding Value in the Internet of Everything

The first known reference to the Internet of Things dates back to Kevin Ashton in the late 1990s.  He was speaking at the time about early RFID tagging of goods throughout a distribution chain; the metaphor was descriptive and compelling, but did not generate an immediate, broad market reaction.

In recent years, as the Internet has become fundamental to daily life and the number of connected devices has exploded, more and more attention has been paid to the idea.  A very recent addition to the many written analyses of the phenomenon and its potential was published by The Economist as “The Internet of Things Business Index.” According to the report:

  1. Most companies are exploring the IoE (Internet of Everything)
  2. Two in five members of the C-Suite are talking about it at least once a month
  3. Investment in the IoE remains mixed

The Economist says that there is a quiet revolution underway but that many important unknowns remain.  Companies are preparing for the future IoE with research, by filling their knowledge gaps, and by working with governments and trade associations on the definition and adoption of the standards that will be needed to enable real leverage.  The Economist believes the IoE to be “an ecosystem play,” by which they mean networks of companies creating new industries, new economics, and new value definitions. The “productization” of these networks and what they do is the biggest economic opportunity. But they don’t say how to get there.

In talking with our clients, we find that most companies are discussing the IoE but without consensus on what it is or what to do about it.  We have a suggestion: companies should apply “Design Thinking” methods to systematically find the best targets and to increase their chances of achieving disruptive success. Design Thinking describes the orchestration of a group of well-known techniques (see below), with some adjustments, to find those opportunities that are truly transformational and successful.

It is purposefully different from asking mobile phone users in 2003 if they want a camera, a GPS and a sound system in their phone, or asking the casual coffee drinker in 1983 (years before Starbucks made it routine) if they thought paying $4.00, eighteen times per month, for a cup of joe was an attractive idea.

Instead of asking how the IoE can transform our world, we should take a series of business challenges, identify how our customers actually experience them, develop an array of ways to transform the customers’ experience for the better (with the IoE in mind), select the best ones, and build models and low-fidelity prototypes with customers, refining and extending as we go.

This approach works for the IoE, for Big Data, for Social Business: instead of starting with the technology, start with the business challenges that are not being solved with traditional analyses and solutions; start with the “mysteries.” See how the underlying, visceral customer needs can be better served and how IoE might be part of it.  Visualize and prototype as you go.  Iterate, iterate, iterate.

Doug Brockway
November 6, 2013

The Return on Intelligence Free Library

Welcome to our blog-site.  Perhaps you’ve already read a post like Jim Anderson’s The Case for Feature Driven Development.  Or perhaps this post is the impetus for your first visit? Regardless, you should know that the site includes more than blog posts.  Introducing the Return on Intelligence Free Library – it’s open 24×7.

You are invited to visit and contribute content, comments and questions as often as you like. There are currently 200 links in the library.  It has doubled since it opened in July.  The materials are from a wide array of sources covering topics from Lean, Innovation and Disruption, through Security, Wearables and Smart Machines, to each element of I-SMAC. If you want to read something a bit off the beaten path, but interesting and useful nonetheless, read (and watch) Leadership Lessons from the Dancing Guy.


If you have material that is relevant to technology, its transformative potential or impact, we’d like to have it. You can simply put a link or a set of links into a comment at the bottom of this post or any post.  Or, the most desirable method is to use the link at the top of the library shown below:

Free Library

Please make use of this and similar links to add to the shared resource.  The resource will be richer the more contributions we get from more people, each with their unique view into what may be useful.  Whether you contribute or not please use the Return on Intelligence Free Library.  Whether you start an investigation there, or end there on the off chance there’s something you missed, it will be available.

Doug Brockway
Partner, Management Consulting/Research Institute
Return on Intelligence, Inc.

PS – and do use the comments fields to let us know what you think.

The Case for Feature Driven Development

Despite the best efforts of architects, engineers, planners and CIOs, there continues to be an uncomfortably high rate of systems development flops and failures.  According to recent research by Oxford University and McKinsey, 87% of IT projects with an investment of more than $15 million fail and 23% of IT projects run more than 80% over budget[1].

As recent data on the Insurance industry from Forrester shows, there is little consensus on what to do.  Almost half of the respondents either have no defined approach to systems development or are using waterfall, which was forward thinking in the 1970s.  Encouragingly, half the industry is trying to control the scope of efforts, keeping failures scope-boxed and time-boxed through some version of Agile or iterative development. There are many versions of either.

Forrester bar chart

In the Insurance industry the challenge tends to be implementing a functionally rich “core systems” solution managing the relationship from policy issuance through claims; similar in scope to an LOS (loan origination system) in mortgage or an ERP in manufacturing and distribution. In an attempt to be responsive and modern, nearly all of the insurance solution providers will state that their implementation methodology is based on Agile.  Some are more “pure” Agile than others.  Regardless of their orthodoxy, the challenges include the following:

  • Failure to recognize the business users’ commitment levels required for true agile development.  “Product Owner” means something critical.
  • Missing requirements due to not understanding the entire insurance value chain. An example is defining a unique, strategic distribution channel but not understanding the need and role of CRM in it.
  • Calling what occurs with core systems “development” is itself a challenge. Clients are licensing commercially available solutions that have significant functionality. The task is configuration, feature selection, and enhancement with add-ons.  It is not “greenfield” development.
  • Cost – These implementations are expensive.  It is not unusual for regional players to spend $6m to implement a claims module.  Mortgage originators spend similar amounts on POS and LOS solutions.  As a share of revenue or equity these are substantive efforts.
  • Due to the perspectives of the teams doing the work and the lack of proper business participation, requirements are not aligned to the business, and projects all too often fail.

For these reasons we find it imperative to work from an Agile Feature-Driven Development (“AFDD”) approach. Key elements include:

  • Stringent alignment of business processes to system requirements.

You need a process-driven approach to requirements identification and prioritization.  Inherent to this are dynamic, reusable business process models that drive the identification of services, appropriate flexibility and reusability, alignment with enterprise business process management systems, the identification of tangible benefits, and requirement prioritization.

  • Focus on the features of the solution – the prioritized requirements are then assessed against the base features of the solution.  Gap analysis and development dependencies are identified and estimated.
  • Testing of features is driven by the business processes and requirements.

We believe there are 5 main phases of Feature Driven Development; the middle three are the most classically “Agile,” while the first and last are traditional in their structure. In our view, Feature Driven Development is based on starting with a clear understanding of the business requirements and features for the new solution. The process begins with requirements from a business perspective and develops an executable roadmap and governance for a successful implementation.  Whether your business direction is described in products and markets, Critical Success Factors, or Strategic Vectors and Do-Wells, that direction defines and informs the things the business needs to have and do.  Those things drive the roadmap.  This is not an “Agile” process; it has a timeline of its own[2].

SAFe process

In Elaboration and Design those business requirements and feature definitions are extended and verified.  Missing business requirements and features, user experience descriptions, system integrations and potential data conversion routines are discovered and documented. This phase takes a good idea and creates the definition of a “whole product.”  The scope of what must be done is clear enough from the Inception Planning that the Elaboration can be properly “sized” and structured. This is an iterative, “agile,” sprint-driven process, time-boxed, with “product owners” approving the work to be done and the results of each sprint.

Configuration and Construction involves a series of “traditional” Agile sprints, coordinated around the defined features – the “release train” – to deliver the defined business objective.  Since the construction is Feature-based, so is the Testing and Acceptance of the solution (a feature or the full solution), conducted in a controlled and predictable environment.  This includes user acceptance testing and performance testing.  Once this phase is complete, the system, or a feature of the system, is ready for deployment.

This is an “agile” approach.  It expects that the initial design may mature during the development process.  It also expects that, despite the best efforts of all parties, some of what is built will differ from the intended design.  For these reasons we recommend conducting a product pilot test in a controlled production environment.  During this test, resources are assembled to quickly resolve any business or technical issue raised during the pilot period.  Once this has been completed, the system is ready for deployment.

For Core Systems efforts we favor Feature Driven Development because it minimizes unconstructive “greenfield” thinking in a known space while maximizing the inventive, iterative differentiation of deployed feature/function that Agile emphasizes. The resulting implementations are more focused on delivering solution features that are aligned to business concepts rather than counting the number of story points that have been delivered. Business people can better understand the project status and react accordingly.

Jim Anderson
(610) 247-8092

[1] The Art of Project Portfolio Management by Meskendahl, Jonas, Kock, and Gemunden

[2] There are methods, like Dean Leffingwell’s SAFe, which attempt to use Kanban methods to add Agility in this phase as well

Analytics: The Next Step on the Road to the Smart Grid

Smart Grids are among the top priorities for electric utilities and the communities they serve around the world.  There is a lot of activity.  In the US alone, a 2013 Department of Energy report revealed that investment in smart grid projects has resulted in almost $7 billion in total economic output, benefiting a wide variety of industrial sectors and creating 47,000 jobs. On the other hand, at a recent MIT seminar on the Smart Grid, the panelists were enthusiastic about the long run but skeptical or worried about the short run, especially in the consumer sphere.

The panelists from NSTAR, Schneider Electric, Peregrine and FirstFuel said that 22% of the load on the grid is consumer load. Smart Grid design and capability goals include the ability to measure, control and bill at the circuit level inside a home or business, but the Smart Grid may have only a small economic impact there. According to the panel, the best estimates of consumer savings from the Smart Grid are $100/year.

In the realm of small and medium buildings there are substantive potential benefits, but, the panelists say, not enough to justify an Energy Manager on staff to make the changes and do the engineering, monitoring, and implementation. There isn’t enough benefit for owners to focus on it.  If the building and all its tenants don’t act, the Smart Grid benefits will be hard to capture.

Given these challenges, what is the next step for the Smart Grid?  Some of the answer can be found in a recent article on how good ideas spread by Atul Gawande, a surgeon, writer, and public-health researcher.  In the article he compares the lightning-fast spread of the invention of ether-based anesthesia with the long, slow adoption of clean operating rooms, washed hands, fresh gowns and Listerine.  In brief, anesthesia solved a problem that doctors and hospitals had: screaming, thrashing patients and emotionally draining surgical procedures.  Doctors wanted a change.  With antiseptics and cleanliness, the dangers were unseen by the doctors, the remedies involved a lot of procedural changes, and they solved a problem only the patients had: survival. For antiseptics, the change came more than 30 years after the invention, only when German doctors took it upon themselves to treat surgery as science. Science demanded precision and cleanliness: white gowns, masks, antiseptics, fresh gloves, clean rooms. After a long dormancy the now-obvious idea “went viral.”

Consumer-level investments and benefits from the Smart Grid don’t appear to be ready yet.  Regulators, providers and distributors of power are looking for the returns.  Looking to consumer solutions is a bit like starting the computer revolution in the late 1950s with personal computers. It didn’t and couldn’t happen that way.

In business, IT has gone through multiple eras in the way it transforms then supports an enterprise – think mainframes then client server then the internet and now mobility, big data, the cloud: I-SMAC.  Within each era, as with anything else in life, the first systems built are those with big payoffs.  For the Smart Grid this is in the industrial, the corporate, and the municipal, state and federal forms of consumption.

When we talk to utility companies, the current Smart Grid focus relates to data.  Just as in other industries like pharmaceuticals, the grids, transformers, meters and controllers already deployed are producing more data than companies can deal with, and it will get worse.  Newer equipment is being installed, or existing equipment outfitted with more and better sensors.  Data can be captured in smaller and smaller time increments, isolated to smaller and smaller grid footprints.  All of the analysis done produces more metadata and opportunities to learn yet more.  As one client puts it, utilities are not struggling with connectivity [to devices] as much as they are struggling with analysis of device-borne data.

In addition to the volume of data, there are myriad data analysis techniques that can be applied. Common predictive modeling techniques include classification trees and linear and logistic regression, which leverage underlying statistical distributions to estimate future outcomes. Newer, more CPU-intensive techniques, such as advances in neural networks, mimic the way a biological nervous system, such as the brain, processes information.  Which to use, when and why?  Utility executives say they have only started using a very few of the many techniques at their disposal.
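To show what one of these common techniques looks like in practice, here is a small, self-contained sketch of logistic regression fit by gradient descent. The scenario (estimating the probability a transformer overloads from normalized temperature and hour-of-day) and the training data are entirely synthetic, invented for illustration; a real utility would use a statistics library and measured sensor data.

```python
# Illustrative logistic regression via batch gradient descent.
# Scenario and data are synthetic: features are [normalized temperature,
# normalized hour-of-day], label 1 means the transformer overloaded.
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(X, y, lr=0.1, epochs=2000):
    """Fit weights (last slot is the bias) by gradient descent on log-loss."""
    w = [0.0] * (len(X[0]) + 1)
    for _ in range(epochs):
        grads = [0.0] * len(w)
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi + [1.0])))
            for j, xj in enumerate(xi + [1.0]):
                grads[j] += (p - yi) * xj   # gradient of log-loss
        w = [wj - lr * g / len(X) for wj, g in zip(w, grads)]
    return w

def predict(w, x):
    """Estimated probability of overload for feature vector x."""
    return sigmoid(sum(wj * xj for wj, xj in zip(w, x + [1.0])))

# Synthetic training set: hot afternoons overload, cool mornings don't.
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.3], [0.1, 0.2], [0.7, 0.7], [0.3, 0.1]]
y = [1, 1, 0, 0, 1, 0]
w = train(X, y)
print(predict(w, [0.85, 0.85]))  # high overload risk (> 0.5)
```

The same data could just as easily feed a classification tree; the value for a utility comes less from any single technique than from systematically trying several and measuring which predicts device behavior best.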

A small delay, or speeding up, of an energy buy can greatly change the profitability of that trade.  Small adjustments in voltage delivered, by time of day, can greatly change the economics of delivery and, if done properly, without materially affecting use.  Knowing which of these and other actions to take, precisely and specifically when, requires a significant expansion of analytic activities by utilities.  But it is well worth it.  Expect much more of this long before your electricity provider asks permission to alter the fan speed on your refrigerator.
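To give a feel for the voltage-adjustment economics, here is a back-of-the-envelope sketch of conservation voltage reduction. Every figure below is hypothetical, chosen only to show the arithmetic; a real analysis would use measured load data and a utility's own empirically derived CVR factor.

```python
# Back-of-the-envelope conservation voltage reduction (CVR) arithmetic.
# All inputs are hypothetical, for illustration only.

annual_energy_mwh = 500_000      # hypothetical feeder-group annual delivery
voltage_reduction_pct = 2.0      # modest reduction within service limits
cvr_factor = 0.8                 # assumed % energy saved per % voltage drop
price_per_mwh = 50.0             # hypothetical wholesale energy price

energy_saved_mwh = annual_energy_mwh * (voltage_reduction_pct / 100) * cvr_factor
savings = energy_saved_mwh * price_per_mwh
print(f"{energy_saved_mwh:,.0f} MWh saved ≈ ${savings:,.0f}/year")
# → 8,000 MWh saved ≈ $400,000/year
```

Even with these invented numbers, the point survives: a 2% adjustment, applied across a large enough footprint at the right times of day, is a material sum – and finding the right footprint and the right times is precisely the analytics problem.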