A Model for Transition to IoE in Manufacturing

In a recent interview, executives from Robert Bosch GmbH and McKinsey discussed the Internet of Everything (IoE) and its impact on manufacturing.  They described significant changes to the production process and to the management of supply chains from this “fourth industrial revolution.”  The IoE allows for the interconnection of factories within and across regions and the exposure or “display” of the status of each component of each product for each customer via each distribution method.  Sensors in machines and in components will keep universally in sync on what has to be done, what has been done, and how well it was done.

A global decentralization of production control is now possible. Creating this reality will require new forms of intercompany and interdisciplinary collaboration.  The buyer, seller and distributor will all be involved in product design, engineering, and logistics.

Today, physical flows, financial flows, and information flows are distinct in manufacturing.  The IoE vision has them increasingly fusing together.  This transformation to what GE calls the Industrial Internet raises a set of questions: In this future, how will orders be placed, and with whom?  Who or what verifies the accuracy of an order or a deliverable across a network of suppliers, manufacturers, and distributors that is formed, in an instant, down to the level of an individual order?

In this coming future state, information will be available in real time, via the cloud, to all concerned parties.  The decisions to be made based on this information will be subtle, situation-sensitive, and so voluminous and time-dependent that people won’t be making them. Algorithms running in machine-to-machine (M2M) systems will.  On first consideration this all seems overwhelmingly complicated.  We’ll need a model, an example to build from, for how to make the transition.  It turns out we have one.
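As a rough illustration of the kind of rule such an M2M system might run, here is a minimal Python sketch. The sensor fields, thresholds, and message shape are hypothetical, invented for this example rather than drawn from any particular platform.

```python
# Minimal sketch (hypothetical): a machine-to-machine rule that places a
# replenishment order when sensor-reported component stock runs low, and
# escalates to a human planner when quality looks off.
from dataclasses import dataclass
from typing import Optional

@dataclass
class SensorReading:
    component_id: str
    units_on_hand: int
    defect_rate: float  # fraction of recent units failing inspection

def decide_order(reading: SensorReading, reorder_point: int = 500,
                 max_defect_rate: float = 0.02) -> Optional[dict]:
    """Return an order (or escalation) message if stock is low, else None."""
    if reading.units_on_hand >= reorder_point:
        return None  # stock is adequate; no action needed
    if reading.defect_rate > max_defect_rate:
        # Quality problem: route to a human planner instead of reordering.
        return {"action": "escalate", "component": reading.component_id}
    return {"action": "order", "component": reading.component_id,
            "quantity": reorder_point - reading.units_on_hand}

# A sensor reports 120 units on hand with a 1% defect rate.
print(decide_order(SensorReading("bearing-7", 120, 0.01)))
# -> {'action': 'order', 'component': 'bearing-7', 'quantity': 380}
```

In practice the interesting decisions would involve many such readings across suppliers and plants, but the shape is the same: real-time data in, a transaction message out, with no person in the loop.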

The recent changes to Wall Street’s trading cycles are real examples that provide a roadmap for the manufacturing transition.  In that world the number of days allowed to settle a trade, the “settlement cycle,” has undergone major transitions. The most notable, from five days to three, so-called T+5 to T+3, occurred in 1995. That change required almost every firm in the US to make changes to its processing flows and systems.  Since the move to T+3, various exchanges have made further improvements toward T+1.  The table below shows some of the major changes, the before and after, that were accomplished:

[Table: settlement cycle changes, T+5 to T+1]

T+1, even if never mandated, can be viewed as an example of industry opportunity through dislocation. At some level, IoE capabilities can enable dramatic cycle time gains by unlinking end-to-end dependencies (e.g. I no longer need to “affirm” trades based upon evaluating “confirm trade” messages). Some entities/roles will become more independent, some more dependent. Some may disappear if they no longer add value.

The parallels for manufacturing in an Internet of Everything world are clear (though some elements used in trading may not be used here, or not with the same emphasis).  Cross-industry governance will be needed on the format and meaning of transactions, acceptable technical modes of sending and receiving the messages, management of the quality and timing of the messages, both in content and in technical delivery, and how to handle disputes.

Douglas Brockway
doug.brockway@returnonintelligence.com

Ira Feinberg
ira.feinberg@returnonintelligence.com

July 16, 2015


You Are HERE

Now what?

The Internet of Everything has recently joined Big Data Analytics, Social and Mobile technologies, and the Cloud as subjects that one can bring up in a general business or social situation and be reasonably sure people will know what they are or quickly understand them.  What is also becoming generally understood is that these elements are connected.  We call them I-SMAC.  They feed on each other, and the combinations are creating new businesses and “disrupting” old ones.

That there is a new opportunity, or, if you’re of a different mind-set, a new threat, raises the question among business leaders, “Where are we and what should we be doing?”  There’s a framework that dates back to the days before “Enterprise IT” was called Enterprise IT that can help.  First laid out in a Harvard Business Review article in the mid-1970s, the “Stages Theory” proposes four “growth processes” that managers can use to track the evolution of IT in support of business.

The “Demand Side” includes the Using Community, meaning their use, participation in, and understanding of technology, and the “Applications Portfolio,” now spanning both applications and services, which makes up the functional, process, and analytic capabilities that an organization (or market) does or could use.

[Figure: Growth Processes]

On the “Supply Side” are the Resources brought to bear: technologies; personnel inside and, now, outside the organization; and other elements like facilities and supplies, along with Management Practices, which range from strategy and governance through development and support to daily operation and break/fix.

On a cross-industry basis the Applications Portfolio for I-SMAC is still in an early stage.  In some companies and industries, like retail bookselling or personal photography, it has passed the early experimentation stage and a full ramp-up in capability is underway.  In no case are these portfolios “mature” Stage IV portfolios. Over recent months we have seen a subtle but clear shift in the awareness of I-SMAC opportunities.  Still, the Using Community tends to be either unaware, artificially enthusiastic, or doubtful and combative.  This is consistent with the early-stage nature of the portfolios: lots of promise, but not yet enough history to show unquestioned benefit.

For the most part the Resources being brought to bear are new and rapidly changing.  In most cases the preferred vendor or technology for a given task has a very short half-life, or no implicit standard has yet emerged.  The staff, in-house or in service providers, are skilled in what they are working on but, as the technologies around them are kaleidoscopically changing, have to spend large amounts of time keeping up.  Management Practices are currently updates-with-Band-Aids of what went before.  The best way to build I-SMAC systems and to manage them at scale is not yet proven.

[Figure: You Are HERE Stages]

What should you do in your case?  First, set a baseline that reflects your industry or market overall and shows the position of your company.  However detailed and analytic you wish or need to make it, the baseline should cover the state of each of the Growth Processes.  You will typically find that they are at a similar, but not identical, stage.  Spend more think time if one growth process is Stage III and another Stage I; such mismatches are trouble.  Do a compare-and-contrast analysis between your status and an industry synopsis, as in the sketch below, and decide whether you are ahead or behind and what you should do about it.
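A minimal sketch of that baseline in Python follows. The stage numbers and the two-stage "mismatch" flag are illustrative assumptions, not measurements; the point is simply to record each growth process for your company and for the industry and make the gaps visible.

```python
# Hypothetical baseline: the stage (1 = Stage I ... 4 = Stage IV) of each
# growth process for your company versus an industry synopsis.
COMPANY = {"using community": 1, "applications portfolio": 2,
           "resources": 2, "management practices": 1}
INDUSTRY = {"using community": 2, "applications portfolio": 2,
            "resources": 2, "management practices": 2}

def compare_baseline(company: dict, industry: dict) -> None:
    """Print where the company stands relative to the industry, per process."""
    for process, stage in company.items():
        gap = industry[process] - stage
        status = "behind" if gap > 0 else ("ahead of" if gap < 0 else "on par with")
        print(f"{process}: Stage {stage}, {status} the industry")
    # Flag the Stage III vs Stage I kind of mismatch called out above.
    if max(company.values()) - min(company.values()) >= 2:
        print("Warning: growth processes two or more stages apart -- trouble.")

compare_baseline(COMPANY, INDUSTRY)
```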

Second, with a “light touch,” explore the I-SMAC efforts underway within your company today. This basic inventory, by the way, is a Stage II practice. You are building organizational awareness of how you are trying to take advantage of, or face down a threat from, I-SMAC.  You need to know what these efforts are, but as they are almost certainly early-stage efforts you need to resist the urge to pull the plug because you can’t yet see the mature market value. Make sure you’re in the game. If you are not at least trying to make use of some combination of Internet of Everything-generated data via mobile platforms, leveraging social technology via the cloud, you are exposed to competitors and new entrants who will.

Douglas Brockway
July 15, 2013
doug.brockway@returnonintelligence.com


Telematics Data – Changing The Insurance Underwriting and Actuarial Environment

Telematics, and specifically the usage-based data it generates, significantly improves the ability to rate and price automobile insurance by adding a deeper level of granularity to the data commonly used today.

Companies at the forefront of using telematics data are beginning to understand the value of its many indicators as they relate to policyholder driving behavior, and how that behavior, positive or negative, directly affects overall policy administration cost.

This advantage, though, also comes with a possible disadvantage: higher volumes of data being added to already burdened processing resources. A single vehicle generates approximately 2.6 MB of data per week.  If 50,000 auto policies are on the books, accumulating that data comes to roughly 6.8 TB per year.
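A quick back-of-the-envelope check of that figure, using the numbers quoted above:

```python
# Storage estimate for telematics data across a book of auto policies.
MB_PER_VEHICLE_PER_WEEK = 2.6
POLICIES = 50_000
WEEKS_PER_YEAR = 52

total_mb = MB_PER_VEHICLE_PER_WEEK * WEEKS_PER_YEAR * POLICIES
print(f"{total_mb / 1_000_000:.2f} TB per year")  # ~6.76 TB, i.e. roughly 6.8 TB
```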

[Figure: Pay How You Drive Data]

Given that the use of telematics data from automobiles is on the rise in insurance companies, to be followed by telematics data generated from wireless sensors in personal and commercial use, a solution for processing huge volumes of data quickly is indicated.

Most likely that solution is SAP HANA-based: it processes the data and analytics together in main memory and gives underwriters and actuaries a technological advantage in their business, namely real-time rating and pricing, which doesn’t exist with traditional methods.
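To make the rating idea concrete, here is a minimal sketch of how a handful of telematics indicators could adjust a base premium. The indicator names, weights, and thresholds are invented for illustration; a real usage-based model would be actuarially derived and far richer.

```python
# Hypothetical usage-based rating: adjust a base premium from a few
# telematics indicators. Weights and thresholds are illustrative only.
def usage_based_premium(base_premium: float, miles_per_week: float,
                        hard_brakes_per_100mi: float,
                        night_driving_share: float) -> float:
    factor = 1.0
    factor += 0.10 if miles_per_week > 300 else -0.05       # exposure
    factor += 0.15 if hard_brakes_per_100mi > 5 else -0.05  # braking behavior
    factor += 0.10 if night_driving_share > 0.25 else 0.0   # time of day
    return round(base_premium * factor, 2)

# A low-mileage, smooth driver earns a discount; a riskier profile a surcharge.
print(usage_based_premium(1200.0, 150, 2, 0.10))   # 1080.0
print(usage_based_premium(1200.0, 350, 8, 0.30))   # 1620.0
```

With the data and the analytics held in memory together, an adjustment like this could be recomputed as new driving data arrives rather than at renewal time.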

Jim Janavich
