Integrating Types of Data for Customer Centric Applications

It is not news to anyone that information and media have exploded in the last decade, largely as an effect of new technological capabilities and the Internet.  Consistent with past technology trends, however, most Application Architectures (i.e., how applications are designed and structured) have treated each information type (structured and unstructured; numbers, text, images, audio and video) as an independent concept, producing applications that do not fundamentally integrate the information types.

There are three basic design approaches that yield the information integration needed for the modern Customer Centric application:

  • Front End integration,
  • Back End integration, and
  • Mid-Tier integration

Modern Customer Centric applications are increasingly called upon to apply situationally optimized combinations of these to bridge the myriad types of information in creating successful systems.

For example, envision an application for a mobile service professional that looks into the Customer database to see what product versions a Customer owns (Structured), links to the schematics of those devices and videos of repair procedures (Media), and references posts from other service professionals about problems and resolutions for that device, perhaps keyed to specific elements on the schematic (Unstructured).

[Figure: various data types]

The power of integrating the information types radically improves the usefulness and usability of that application, potentially improving customer service and lowering costs.

Creating new application architectures that embrace integrated information types requires new approaches to information design.  Traditional data analysis, while well honed for Structured data environments, is not fully sufficient because of its limitations in describing unstructured information.  Fortunately, the emerging practice of Semantic Web analysis and modeling appears to encompass all the information types and to facilitate discovery of the linkages across them.  When skilled practitioners, assisted by the rapidly growing set of available tools, perform Semantic Web analysis across the functional space and the existing information artifacts for that space, the results can be used seamlessly within an agile development methodology, improving the application design without adding significantly to cost or schedule.
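To make the idea concrete, here is a minimal sketch, in Python with the open-source rdflib package, of how a semantic model can link the three information types from the field-service example above. All URIs, properties and identifiers are hypothetical, invented for illustration; they are not drawn from any particular ontology.

```python
# A minimal sketch (hypothetical URIs and properties throughout) of a semantic model
# that links structured product data, repair media, and an unstructured forum post.
# Requires the open-source rdflib package: pip install rdflib
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import RDF

EX = Namespace("http://example.com/service#")   # invented vocabulary for illustration

device    = URIRef("http://example.com/device/PUMP-300")               # structured record
schematic = URIRef("http://example.com/media/pump-300-schematic.pdf")  # media asset
post      = URIRef("http://example.com/forum/post/8841")               # unstructured post

g = Graph()
g.add((device, RDF.type, EX.Device))
g.add((device, EX.ownedByCustomer, Literal("C-12345")))
g.add((device, EX.hasSchematic, schematic))
g.add((post, EX.discussesDevice, device))
g.add((post, EX.referencesElement, Literal("valve assembly V-2")))

# A single query now spans all three information types for one device.
q = """
PREFIX ex: <http://example.com/service#>
SELECT ?schematic ?post WHERE {
    ?device ex:hasSchematic   ?schematic .
    ?post   ex:discussesDevice ?device .
}
"""
for row in g.query(q):
    print(row.schematic, row.post)
```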

The implementation-neutral information design is only the starting point.  The technologies that support the information types have developed independently, and the new application architecture needs to rationalize and embrace these technologies where appropriate. While application architectures have commonly used each of these technologies independently, delivering the full business benefit of integrating the information types requires Architects to design for the strengths of each within a shared application architecture, selecting the design approach that fits the business situation (see Sidebar).

There is no single right answer to the question of which design approach is best.  Each depends heavily on the skill sets available and the desired business outcome.  Weighing the need for integrated information against the potential cost in resources and schedule, along with the state of current information assets and the appetite to refresh them, will lead to the best choice for a given situation.  Furthermore, the rapidly changing technology landscape requires building applications that can absorb change in the future.

Despite the potential costs and uncertainty, creating Application Architectures that integrate the information types is the only path to delivering the business benefits of the Customer Centric Applications.

 Andrew Weiss
andrew.weiss@returnonintelligence.com
November 11, 2013

Andrew Weiss is a research and consulting fellow of the Return on Intelligence Research Institute.  He has served as Head of Technology R&D at Fannie Mae, as Chief Architect and COO of two software firms, and as SVP of IT Strategy at Bank of America.


Finding Value in the Internet of Everything

The first known reference to the Internet of Things dates back to Kevin Ashton in the late 1990s.  He was speaking, at that time, about early RFID tagging of goods throughout a distribution chain; the metaphor was descriptive and compelling but did not generate an immediate, broad market reaction.

In recent years, as the Internet has become a fundamental part of life and the number of connected devices has exploded, more and more attention has been paid to the idea.  A recent addition to the many written analyses of the phenomenon and its potential was published by The Economist as “The Internet of Things Business Index.” According to the report:

  1. Most companies are exploring the IoE (Internet of Everything)
  2. Two in five members of the C-Suite are talking about it at least once a month
  3. Investment in the IoE remains mixed

The Economist says that there is a quiet revolution underway but that many important unknowns remain.  Companies are preparing for the future IoE with research, by filling their knowledge gaps, and by working with governments and trade associations on the definition and adoption of the standards that will be needed to enable real leverage.  The Economist believes the IoE to be “an ecosystem play,” by which they mean networks of companies creating new industries, new economics, and new value definitions. The “productization” of these networks and what they do is the biggest economic opportunity. But they don’t say how to get there.

In talking with our clients, we find that most companies are discussing the IoE but without consensus on what it is or what to do about it.  We have a suggestion.  Companies should apply “Design Thinking” methods to systematically find the best targets and increase their chances of achieving disruptive success. Design Thinking describes the orchestration of a group of well-known techniques (see below), with some adjustments, to find those opportunities that are truly transformational and successful.

[Figure: Design Thinking]

It is purposefully different from asking mobile phone users in 2003 if they want a camera, a GPS and a sound system in their phone, or asking the casual coffee drinker in 1983 (the year Starbucks began) if they thought paying $4.00, eighteen times per month, for a cuppa’ joe was an attractive idea.

Instead of asking how the IoE can transform our world, we should take a series of business challenges, identify how our customers actually experience them, develop an array of ways to transform the customers’ experience for the better (with the IoE in mind), select the best ones, and build models and low-fidelity prototypes with customers, refining and extending as we go.

This approach works for the IoE, for Big Data, for Social Business: instead of starting with the technology, start with the business challenges that are not being solved with traditional analyses and solutions; start with the “mysteries.” See how the underlying, visceral customer needs can be better served and how IoE might be part of it.  Visualize and prototype as you go.  Iterate, iterate, iterate.

Doug Brockway
doug.brockway@returnonintelligence.com
November 6, 2013


The Case for Feature Driven Development

Despite the best efforts of architects, engineers, planners and CIOs, there continues to be an uncomfortably high rate of systems development flops and failures.  According to recent research by Oxford University and McKinsey, 87% of IT projects with an investment of more than $15 million fail, and 23% of IT projects run more than 80% over budget[1].

As recent data on the Insurance Industry from Forrester shows, there is little consensus on what to do.  Almost half of the respondents either have no defined approach to systems development or are using waterfall, which was forward thinking in the 1970s.  Encouragingly, half the industry is trying to control the scope of efforts, keeping failures scope-boxed and time-boxed through some version of Agile or iterative development. There are many versions of either.

[Figure: Forrester bar chart]

In the Insurance industry the challenge tends to be implementing a functionally rich “core systems” solution managing the relationship from policy issuance through claims, similar in scope to an LOS in mortgage or an ERP in manufacturing and distribution. In an attempt to be responsive and modern, nearly all of the insurance solution providers will state that their implementation methodology is based on Agile.  Some are more “pure” Agile than others.  Regardless of their orthodoxy, the challenges include the following:

  • Failure to recognize the business users’ commitment levels required for true agile development.  “Product Owner” means something critical.
  • Missing requirements due to not understanding the entire insurance value chain. An example is defining a unique, strategic distribution channel but not understanding the need for, and role of, CRM in it.
  • Calling what occurs with core systems “development” is itself a challenge. Clients are licensing commercially available solutions that already have significant functionality. The task is configuration, feature selection, and enhancement with add-ons.  It is not a “green field.”
  • Cost – these implementations are expensive.  It is not unusual for regional players to spend $6m to implement a claims module.  Mortgage originators spend similar amounts on POS and LOS solutions.  As a share of revenue or equity these are substantive efforts.
  • Due to the perspectives of the teams doing the work and the lack of proper business participation, the requirements are not aligned to the business, and projects all too often fail.

For these reasons we find it imperative to work from an Agile Feature-Driven Development “AFDD” Approach. Key elements include:

  • Stringent alignment of business processes to system requirements.

You need a process-driven approach to requirements identification and prioritization.  Inherent to this are dynamic, reusable business process models that drive the identification of services, appropriate flexibility and reusability, alignment with enterprise business process management systems, and the identification of tangible benefits and requirement prioritization.

  • Focus on the features of the solution – the prioritized requirements are then assessed against the base features of the solution.  Gap analysis and development dependencies are identified and estimated.
  • Testing of features is driven by the business processes / requirements.

We believe there are five main phases of Feature Driven Development; the middle three are most classically “Agile” while the first and last phases are traditional in their structure. In our view, Feature Driven Development is based on starting with a clear understanding of the business requirements and features for the new solution. The process begins with requirements from a business perspective and develops an executable roadmap and governance for a successful implementation.  Whether your business direction is described in products and markets, Critical Success Factors, or Strategic Vectors and Do-Wells, that direction defines and informs the things the business needs to have and do.  Those things drive the roadmap.  This is not an “Agile” process; it has a timeline of its own[2].

[Figure: SAFe process]

In Elaboration and Design those business requirements and feature definitions are extended and verified.  Missing business requirements and features, user experience descriptions, system integrations and potential data conversion routines are discovered and documented. This phase takes a good idea and creates the definition of a “whole product.”  The scope of what must be done is clear enough from Inception Planning that the Elaboration can be properly sized and structured. This is an iterative, “agile,” sprint-driven process, time-boxed, with “product owners” approving the work to be done and the results from each sprint.

Configuration and Construction involves a series of “traditional” Agile sprints, coordinated around the defined features, the “release train,” to deliver the defined business objective.  Since the construction is feature-based, so are the Testing and Acceptance of the solution (a feature or the full solution), in a controlled and predictable environment.  This includes user acceptance testing and performance testing.  Once this phase is complete the system, or that feature of the system, is ready for deployment.

This is an “agile” approach.  It expects that the initial design may mature during the development process.  It also expects that, despite the best efforts of all parties, some of what is built will differ from the intended design.  For these reasons we recommend conducting a product pilot test in a controlled production environment.  During this test, resources are assembled to quickly resolve any business or technical issue raised during the pilot period.  Once this has been completed the system is ready for deployment.

For Core Systems efforts we favor Feature Driven Development because it minimizes unconstructive “green fields” thinking in a known space while maximizing the inventive, iterative differentiation of deployed feature/function that Agile emphasizes. The resulting implementations are more focused on delivering solution features that are aligned to business concepts rather than on counting the number of story points that have been delivered. Business people can better understand the project status and can react accordingly.

Jim Anderson
james.anderson@returnonintelligence.com
(610) 247-8092


[1] The Art of Project Portfolio Management by Meskendahl, Jonas, Kock, and Gemunden

[2] There are methods, like Dean Leffingwell’s SAFe, which attempt to use Kanban methods to add Agility in this phase as well


Analytics: The Next Step on the Road to the Smart Grid

Smart Grids are among the top priorities for electric utilities and the communities they serve around the world.  There is a lot of activity.  In the US alone, a recent Department of Energy report revealed that investment in smart grid projects has resulted in almost $7 billion in total economic output, benefiting a wide variety of industrial sectors and creating 47,000 jobs. On the other hand, at a recent MIT seminar on the Smart Grid the panelists were enthusiastic for the long run but skeptical or worried in the short run, especially in the consumer sphere.

The panelists from NSTAR, Schneider Electric, Peregrine and FirstFuel said that 22% of the load on the grid is consumer load. Smart Grid design and capability goals include the ability to measure, control and bill at the circuit level inside a home or business, but the Smart Grid may have small economic impact there. According to the panel, the best estimates of consumer savings from the Smart Grid are $100 per year.

In the realm of small and medium buildings there are substantive potential benefits, but the panelists say not enough to justify an Energy Manager on staff to make the changes and do the engineering, the monitoring, and the implementation. There isn’t enough benefit for them to focus on it.  If the building and all the tenants don’t act, the Smart Grid benefits will be hard to capture.

Given these challenges, what is the next step for the Smart Grid?  Some of the answer can be found in a recent article on how good ideas spread by Atul Gawande, a surgeon, writer, and public-health researcher.  In this article he compares the lightning-fast spread of the invention of ether-based anesthesia with the long, slow adoption of clean operating rooms, washed hands, fresh gowns and Listerine.  In brief, anesthesia solved a problem that doctors and hospitals had: screaming and thrashing patients and emotionally draining surgical procedures.  Doctors wanted a change.  With antiseptics and cleanliness, the dangers were unseen by the doctors, involved a lot of procedural changes, and solved a problem only the patients had: survival. For antiseptics the change came, more than 30 years after the invention, only when German doctors took it upon themselves to treat surgery as science. Science needed precision and cleanliness, which meant white gowns, masks, antiseptics, fresh gloves and clean rooms. After a long dormancy the now-obvious idea “went viral.”

Consumer level investments and benefits from the Smart Grid don’t appear to be ready, yet.  Regulators and providers and distributors of power are looking for the returns.  Looking to consumer solutions is a bit like starting the computer revolution in the late 1950’s with personal computers. It didn’t and couldn’t happen that way.

In business, IT has gone through multiple eras in the way it transforms then supports an enterprise – think mainframes then client server then the internet and now mobility, big data, the cloud: I-SMAC.  Within each era, as with anything else in life, the first systems built are those with big payoffs.  For the Smart Grid this is in the industrial, the corporate, and the municipal, state and federal forms of consumption.

When we talk to utility companies the current focus area related to the Smart Grid relates to data.  Just as in other industries like pharmaceuticals, the grids, transformers, meters and controllers already deployed are producing more data than companies can deal with, and it will get worse.  Newer equipment is being installed or existing equipment is being outfitted with more and better sensors.  Data can be captured in smaller and smaller time increments, isolated to smaller and smaller grid footprints.  All of the analysis done produces more metadata and the opportunity to learn yet more.  As one client says, utilities are not struggling with connectivity [to devices] as much as they are struggling with analysis of device-borne data.

In addition to the volume of data, there are myriad data analysis techniques that can be applied. Common predictive modeling techniques include classification trees and linear and logistic regression, which leverage underlying statistical distributions to estimate future outcomes. Newer, more CPU-intensive techniques, such as advances in neural networks, mimic the way a biological nervous system, such as the brain, processes information.  Which to use, when and why?  Utility executives say they have only started using a very few of the many techniques at their disposal.
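As a hedged illustration of the first two techniques named above, the sketch below fits a classification tree and a logistic regression to synthetic sensor readings. The feature names and the overload rule that generates the labels are invented for the example, not drawn from any utility’s data.

```python
# Illustrative only: synthetic data, hypothetical features. Compares a classification
# tree with logistic regression for predicting a transformer overload condition.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([
    rng.normal(70, 15, n),      # ambient temperature reading
    rng.normal(0.6, 0.2, n),    # average load factor over the prior hour
    rng.integers(0, 24, n),     # hour of day
])
# Synthetic target: overload more likely when temperature and load are both high.
y = ((X[:, 0] > 85) & (X[:, 1] > 0.7)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

tree = DecisionTreeClassifier(max_depth=4).fit(X_train, y_train)
logit = LogisticRegression(max_iter=1000).fit(X_train, y_train)

print("tree accuracy: ", tree.score(X_test, y_test))
print("logit accuracy:", logit.score(X_test, y_test))
```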

A small delay, or speeding up, of an energy buy, can greatly change the profitability of that trade.  Small adjustments in voltage delivered, by time of day, can greatly change the economics of delivery and, if done properly, without materially affecting use.  Knowing which of these and other actions to take, precisely and specifically when, requires significant expansion of analytic activities by utilities.  But it is well worth it.  Expect much more of this long before your electricity provider asks permission to alter the fan speed on your refrigerator.


A Model for Transition to IoE in Manufacturing

In a recent interview, executives from Robert Bosch GmbH and McKinsey discussed the Internet of Everything (IoE) and its impact on manufacturing.  They described significant changes to the production process and to the management of supply chains from this “fourth industrial revolution.”  The IoE allows for the interconnection of factories within and across regions and the exposure or “display” of the status of each component of each product for each customer via each distribution method.  Sensors in machines and in components will be able to keep universally in synch about what has to be done, what has been done and how well it was done.

A global decentralization of production control is now possible. Creating this reality will require new forms of intercompany and interdisciplinary collaboration.  The buyer, seller and distributor will all be involved in product design, engineering, and logistics.

Today, physical flows, financial flows and information flows are distinct in manufacturing.  The IoE vision has them increasingly fusing together.  This transformation to what GE calls the Industrial Internet raises a set of questions: In this future, how will orders be placed, and with whom?  Who or what verifies the accuracy of an order or a deliverable across a network of suppliers, manufacturers and distributors that is formed in an instant, down to the level of a single order?

In this coming future state, information will be available in real time, via the cloud, to all concerned parties.  The decisions to be made based on this information will be subtle, situation-sensitive, and so voluminous and time-dependent that people won’t be making them; algorithms running in machine-to-machine (M2M) systems will.  On first consideration this all seems overwhelmingly complicated.  We’ll need a model, an example to build from, for how to make the transition.  It turns out we have one.

The recent changes to Wall Street’s trading cycles are a real example that provides a roadmap for the manufacturing transition.  In that world the number of days allowed to settle a trade, the “settlement cycle,” has undergone major transitions. The most notable, from 5 days to 3 days, so-called T+5 to T+3, occurred in 1995. That change required almost every firm in the US to make changes to its processing flows and systems.  Since the move to T+3, various exchanges have made further improvements toward T+1.  The table below shows some of the major changes, the before and after, that were accomplished:

[Table: settlement cycle changes from T+5 toward T+1]

T+1, even if never mandated, can be viewed as an example of industry opportunity through dislocation. At some level, IoE capabilities can enable dramatic cycle time gains by unlinking end-to-end dependencies (e.g. I no longer need to “affirm” trades based upon evaluating “confirm trade” messages). Some entities/roles will become more independent, some more dependent. Some may disappear if they no longer add value.

The parallels for manufacturing in an Internet of Everything world are clear (though some elements used in trading may not be used here or at the same level of emphasis).  Cross-industry governance will be needed on the format and import of transactions, acceptable technical modes of sending and receiving the messages, management of the quality and timing of the messages both in content and technically, and how to handle disputes.

Douglas Brockway
doug.brockway@returnonintelligence.com

Ira Feinberg
ira.feinberg@returnonintelligence.com

July 16, 2015


You Are HERE

Now what?

The Internet of Everything has recently joined Big Data Analytics, Social and Mobile technologies and the Cloud as subjects that one can bring up in a general business or social situation and be reasonably sure people will know what it is or quickly understand it.  What is also becoming generally understood is that these elements are connected.  We call them I-SMAC.  They feed on each other and the combinations are creating new businesses and “disrupting” old ones.

That there is a new opportunity, or, if you’re of a different mind-set, a new threat, raises the question among business leaders, “where are we and what should we be doing?”  There’s a framework that dates back to the days before “Enterprise IT” was called Enterprise IT that can help.  First laid out in a Harvard Business Review article in the mid-70’s, the “Stages Theory” proposes four “growth processes” that managers can use to track the evolution of IT in support of business.

The “Demand Side” includes the Using Community (their use, participation and understanding of technology) and the “Applications Portfolio,” now including both applications and services, which makes up the functional, process and analytic capabilities that an organization (or market) does or could use.

[Figure: Growth Processes]

On the “Supply Side” are the Resources brought to bear:  technologies, personnel inside and, now, outside the organization, and other elements like facilities and supplies, along with Management Practices which range from strategy and governance through development and support to daily operation and break/fix.

On a cross-industry basis the Applications Portfolio for I-SMAC is still at an early stage.  In some companies and industries, like retail bookselling or personal photography, it has passed the early experimentation stage and a full ramp-up in capability is underway.  In no case are these portfolios “mature” Stage IV portfolios. Over recent months we have seen a subtle but clear shift in awareness of I-SMAC opportunities.  Still, the Using Community tends to be either unaware, artificially enthusiastic, or doubtful and combative.  This is consistent with the early-stage nature of the portfolios: lots of promise but not yet enough history to show unquestioned benefit.

For the most part the Resources being brought to bear are new and rapidly changing.  In most cases the preferred vendor or technology for a given task has a very short half-life, or no implicit standard has yet emerged.  The staff, in-house or in service providers, are skilled in what they are working on but, as the technologies around them change kaleidoscopically, have to spend large amounts of time keeping up.  Management Practices are currently updates-with-Band-Aids of what went before.  The best way to build I-SMAC systems and to manage them at scale is not yet proven.

[Figure: You Are HERE stages]

What should you do in your case?  First, set a baseline that reflects your industry or market overall and shows the position of your company.  However detailed and analytic you wish or need to make it, the baseline should cover the state of each of the Growth Processes.  You will typically find that they are at a similar stage, but not identical.  Spend more think time if one growth process is at Stage III and another at Stage I; such mismatches are trouble.  Do a compare-and-contrast analysis between your status and an industry synopsis.  Decide whether you are ahead or behind and what you should do about it.
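A minimal sketch of what that baseline might look like in practice follows; the stage values are illustrative placeholders, and the field names simply mirror the four growth processes described above.

```python
# Illustrative only: record the stage (I-IV, stored as 1-4) of each growth process
# for your company and for an industry synopsis, then look for trouble signs.
from dataclasses import dataclass

@dataclass
class GrowthProcessBaseline:
    applications_portfolio: int
    using_community: int
    resources: int
    management_practices: int

company  = GrowthProcessBaseline(applications_portfolio=2, using_community=1,
                                 resources=2, management_practices=1)
industry = GrowthProcessBaseline(applications_portfolio=2, using_community=2,
                                 resources=2, management_practices=2)

# Mismatches within your own company (e.g. one process at Stage III, another at
# Stage I) deserve extra think time.
stages = vars(company).values()
if max(stages) - min(stages) >= 2:
    print("Warning: your growth processes are badly out of step with each other.")

# Compare-and-contrast against the industry synopsis.
for process, ours in vars(company).items():
    theirs = getattr(industry, process)
    print(f"{process:24s} company: Stage {ours}   industry: Stage {theirs}")
```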

Second, with a “light touch,” explore the I-SMAC efforts underway within your company today. This basic inventory, by the way, is a Stage II practice. You are building organizational awareness of how you are trying to take advantage of, or face down a threat from, I-SMAC.  You need to know what these efforts are, but as they are almost certainly early-Stage efforts you need to avoid the urge to pull the plug just because you can’t yet see the mature market value. Make sure you’re in the game. If you are not at least trying to make use of some combination of Internet of Everything generated Data, via Mobile platforms, leveraging Social technology via the Cloud, you are exposed to competitors and new entrants who will.

Douglas Brockway
July 15, 2013
doug.brockway@returnonintelligence.com

 


Understanding the Dynamics of IT Spending in an I-SMAC World

We recently read another analysis of the purported inefficiency and waste in IT spending; this time the authors were aghast that 80-85% of IT spending is used, in Gartner Group terms, to “Keep the Lights On.”  They described this spending as wasteful maintenance and “troubling” ongoing enhancements.

One could argue that maintenance and break-fix is “keeping the lights on,” but making continual adjustments in application function to align with market and business need is a competitive imperative.  It’s not waste.

In many IT organizations the total IT budget is very tightly managed, often fixed and rarely rising more than a percentage point or three.  If a company keeps spending money on new systems it quickly comes to a crossroads: either the amount of new systems development must be cut, or the amount of maintenance, enhancement and operations must be cut, or the enhancement and operations must become much more efficient.

Beyond the radical uptick in systems capabilities, a key reason companies are pursuing I-SMAC-based solutions is that they can radically change the dynamics of the IT Activity-based Funding Model.  On the one hand, I-SMAC tends to result in building systems faster, which means a shorter time to the ongoing enhancement and operations costs.  On the other hand, the social, mobile and cloud costs are delivered as services with ongoing unit cost reductions.  The analytic costs mirror traditional enhancement costs, but the returns are worth it.

Just as when we transitioned from the mainframe/mini era to client server, then to the internet and now to I-SMAC, the scope of what can be developed for a dollar invested has taken a significant leap ahead.  The unit and gross costs of maintaining and enhancing each unit, or dollar, of developed function have also gone down.  These economics mean that everything will change.  When I-SMAC matures we’ll be able to look back and see that significant amounts of spending still go to “keeping the lights on.” But there will be so many more lights that it will clearly be money well spent.
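The dynamic behind that claim can be sketched with a toy, back-of-the-envelope model. All of the numbers below are invented for illustration, not benchmarks.

```python
# A hedged, back-of-the-envelope model (all figures illustrative) of the funding
# dynamic described above: a fixed annual IT budget, where each unit of function
# built adds a recurring "keep the lights on" carrying cost in later years.
def project(budget, build_cost, carry_rate, years):
    """carry_rate: annual carrying cost as a fraction of the original build cost."""
    installed = 0.0           # cumulative units of function in production
    for year in range(1, years + 1):
        carry = installed * build_cost * carry_rate
        new_build = max(budget - carry, 0.0) / build_cost
        installed += new_build
        print(f"year {year}: keep-the-lights-on share = {carry / budget:5.1%}, "
              f"new units built = {new_build:6.1f}")

print("Traditional economics:")
project(budget=100.0, build_cost=1.00, carry_rate=0.20, years=5)
print("\nI-SMAC economics (cheaper to build and cheaper to run each unit):")
project(budget=100.0, build_cost=0.60, carry_rate=0.12, years=5)
```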


Telematics Data – Changing The Insurance Underwriting and Actuarial Environment

Telematics, and specifically the usage-based data it generates, significantly improves the ability to rate and price automobile insurance by adding a deeper level of granularity to the data commonly used today.

Companies at the forefront of using telematics data are beginning to understand the value of its many indicators as they relate to policyholder driving behavior, and how that behavior, positive or negative, directly affects overall policy administration cost.

This advantage, though, also comes with a possible disadvantage: higher volumes of data being added to already burdened processing resources. A single vehicle generates approximately 2.6 MB of data per week.  If 50,000 auto policies are on the books, accumulating that data adds up to roughly 6.8 TB per year.
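For the record, the volume arithmetic works out as follows; 2.6 MB per vehicle per week and 50,000 policies are the figures quoted above, and the decimal terabyte conversion is our assumption.

```python
# The data-volume arithmetic above, spelled out.
mb_per_vehicle_per_week = 2.6
vehicles = 50_000
weeks_per_year = 52

total_mb = mb_per_vehicle_per_week * vehicles * weeks_per_year
total_tb = total_mb / 1_000_000          # 1 TB = 1,000,000 MB (decimal units)
print(f"{total_tb:.1f} TB per year")     # ~6.8 TB
```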

[Figure: pay-how-you-drive data]

Given that the use of telematics data from automobiles is on the rise in insurance companies, to be followed by telematics data generated from wireless sensors in personal and commercial use, a solution for processing huge volumes of data quickly is needed.

Most likely that solution is SAP HANA-based: it processes the data and analytics together in main memory and gives underwriters and actuaries a technological advantage in their business, namely real-time rating and pricing, something that does not exist with traditional methods.
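As a hedged sketch of what usage-based rating could look like at the logic level, the example below aggregates telematics events into a per-policy rating factor. The event schema, weights and policy identifiers are invented for illustration and say nothing about SAP HANA specifics.

```python
# Illustrative only: hypothetical telematics event schema and rating weights.
import pandas as pd

events = pd.DataFrame({
    "policy_id":   [101, 101, 101, 202, 202],
    "miles":       [12.0, 30.5, 8.2, 45.0, 5.5],
    "night_miles": [0.0, 10.0, 0.0, 20.0, 0.0],
    "hard_brakes": [0, 3, 1, 6, 0],
})

per_policy = events.groupby("policy_id").sum()

# Illustrative rating factor: baseline 1.0, loaded for night driving and hard braking.
per_policy["rating_factor"] = (
    1.0
    + 0.25 * (per_policy["night_miles"] / per_policy["miles"])
    + 0.02 * per_policy["hard_brakes"]
)
print(per_policy[["miles", "hard_brakes", "rating_factor"]])
```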

Jim Janavich


The Big Data Challenge – What’s Your Point?

Recently I attended a seminar at MIT’s Enterprise Forum on data and analytics, Big Data, in the pharmaceutical and health care industries, and I learned a thing or two.  The panel included an investor and leaders in research and IT from Pfizer, AstraZeneca and a joint effort between Harvard University and MIT called The Broad Institute.

The moderator described a number of success stories related to the harvesting of previously unknown or unmanaged data.  Still, there are many technical, human, and organizational challenges to widespread success. Unwisely, in his view, many companies, instead of participating, are sitting on the sidelines waiting for the clear path to be sorted out. In a bit of hyperbole he said, “If you don’t like change, you’re going to really hate being irrelevant.”

Everyone agreed that we are in the early days of big data and related analytics.  Whatever we think of the volume, velocity and variety of the data we’re dealing with, our knowledge regarding what it is and what to do is in its infancy.  That said, in the past few years of trying the panelists have learned to distinguish between the technical issues of data volume and velocity and the human capital issue of data variety.  They believe that the large, constantly changing data universe will be increasingly manageable as our technologies try and catch up.

An Industry Note

Within the Pharmaceutical and Healthcare spheres per se there are challenges involving the creation of knowledge and data about drugs or genomes and the willingness of others to pay for access to that knowledge, which comes at a cost. This is especially true with genomic testing. Normally one pays each time one runs a blood test or a CAT scan; each test is different and the analysis relates to that test. With genomes, how do we create policy around data where you test once and the data are used and viewed many times by others?

The variety of data is something that must be dealt with by people. It comes in different forms (one example: structured v unstructured) from different sources, some from the analysis you just invented, and the uses and potentials are constantly changing. The panelists believe that our ability to understand, examine and use the variety of data is limited mostly by human skills, insight, experience and knowledge.

There was an extended discussion about the volume of data that got me thinking.  Everyone agrees that we have more data available than we know what to do with, and each time we do an analysis we create more data.  Data volumes are already beyond our ability to store, manage, query, and analyze them, and they are growing faster than our abilities grow.

For all practical purposes this means that data volumes are infinite.  Whatever our skill and technology scope the volume of data exceeds it today and will do so for some time.  We have to keep trying to catch up but understanding and analyzing all of our data will never be a productive goal.

The strategic differentiation in analytics will come from what my colleague Allan Frank describes as “answering outcome-based questions.” In the context of the panel’s observations, the skills and insights needed to address big data may well include technical data scientists and writers of algorithms and more. But, success will certainly hinge on the ability to distill what business outcomes you want, why, and what you need to know in order to service those outcomes. Our friend Bruce Rogow puts it perhaps more emphatically.  He associates success with “defining your purpose.”

If you want strategic success in the area of big data and analytics, we recommend applying some familiar frameworks to this space:

  1. Whether you’re responsible for a small business unit or for an enterprise, understand your business vision.  If it is already prepared, get a copy.  Break it down into the strategic vectors and “do-wells” or, if you prefer, your critical success factors, and describe the business capabilities needed to succeed and the technology ecosystem, in this case the data and analytical ecosystem, needed to support them.
  2. Start organizing and iterating through the 6-12 week cycles that the scaled agile world calls “release trains.”  Have a subset of the business narratives and the related segment of the ecosystem taken to the next level, designed and built.  At the end of each cycle you have a working environment that examines real data and produces real results.
  3. Determine what about the effort was successful and what needed help, more data, more analysis, or a better defined business purpose.  Define another analytical release train. Do it again.

Doug Brockway
doug.brockway@returnonintelligence.com


Consumerization and BYOD – Transformation Catalysts

By Doug Brockway and Ilja Vinogradov

The consumerization of IT, which includes the use of third-party cloud services and applications such as cloud storage and social media, and Bring Your Own Device (BYOD) are together driving an irreversible trend in the way businesses and their staffs produce and consume information.  The impact goes far beyond satisfying the desires of individuals to use their own devices and not be hassled about it.  In acceding to that trend, business also faces the need to change the way the information and transactions in corporate systems are consumed. This leads to transformations in applications portfolios, in business processes and in business results.

The first steps in BYOD came with Wintel notebooks, then Macs were added, and now mobile devices, tablets and smart phones.  As-is, the information displayed by corporate systems is not consumable by mobile devices.  Different screen sizes require different UI layouts.  The point-and-click interactions upon which “industrial” systems rely are confounded by the touch interaction of a tablet: your iPad has no right click, and it’s hard to double click on your Android.

But, while most companies now spend considerable time and energy on UI and UX for a multi-vendor mobile device world, there is a deeper issue, a deeper opportunity, at play.  The core systems that run our corporations and our institutions are “Functional Systems” or, as Clay Shirky has called them, “Web School” systems, where scalability, generality, and completeness were the key virtues. They use “enterprise design practices”: from the back end to the UX, the designs pack in all the function one might need to cover all the situations one might encounter across a homogeneous set of “users.”  Web School systems are “closed” systems.  Their function is designed for consumption only in a pre-defined manner using an application UI. These web-enabled apps are designed to provide maximum functionality with a minimal number of screens, for the most part to reduce development cost per user.

Increasingly we are finding that breaking this paradigm by combining “situational design” front-end systems with cleanly implemented core systems creates the optimal solutions.  This means designing mobile apps that are optimized to let increasingly targeted groups accomplish particular tasks as quickly as possible.  These apps sacrifice some functionality found in Web School systems in return for targeted relevance (economies of scope). These systems are “open” in that their function can be consumed not only by different humans but by other applications as well. Think of localization not just for nationalities and languages: the engineering data needed in the field is different from that needed in the lab, and the timeliness of CRM data and the sales reporting needs of an SME channel are different from those of selling to large corporations.

In this world good design keeps the mobile part of the technology ecosystem as simple as possible from an implementation point of view.  The complexity is pushed to the back end, to middleware and to so-called “smart process apps.” This is where the different transactions are created, the different views of data.  A useful analog is the concept of “software agents”: business process components that respond to individualized environments, i.e. software that enables decisions of real and tangible value.
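A minimal sketch of that division of labor follows, using FastAPI purely for illustration; the endpoint, helper functions and data are hypothetical stand-ins for calls into the core systems and middleware described above.

```python
# A minimal sketch (FastAPI chosen for illustration; endpoint and helper names are
# hypothetical) of a "situational" service: a narrow, task-specific API that composes
# core-system data for one field task, keeping the mobile client thin.
from fastapi import FastAPI

app = FastAPI()

def lookup_installed_products(customer_id: str) -> list[dict]:
    # Placeholder for a call into the structured core system of record.
    return [{"product": "PUMP-300", "version": "v2"}]

def lookup_repair_media(product: str) -> list[str]:
    # Placeholder for a call into the media repository.
    return [f"https://media.example.com/{product}/schematic.pdf"]

@app.get("/field-visit/{customer_id}")
def field_visit_briefing(customer_id: str):
    """Everything a service professional needs for one visit, in one call."""
    products = lookup_installed_products(customer_id)
    return {
        "customer_id": customer_id,
        "products": products,
        "media": {p["product"]: lookup_repair_media(p["product"]) for p in products},
    }
```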

Because users are on the move and business needs are in constant flux, they need capabilities to be developed quickly, customized to the immediate need, and at a low enough cost to have a very short payback period.  These systems may be in use for some time, but the economics allow them to be treated as throw-away solutions.  Marc Andreessen argued in “Why Software Is Eating the World” that the costs of building such targeted-use systems have dropped and will continue to drop precipitously.  This also allows design and development to emphasize time to market, especially time to materially and positively affect employee productivity.

It is the customization, the lower costs of ownership, the continuous alignment to business need, the “enterprise agility” that makes strategically thoughtful actions to take advantage of consumerization and BYOD transformational.
