Analytics: The Next Step on the Road to the Smart Grid

Smart Grids are among the top priorities for electric utilities and the communities they serve around the world, and there is a lot of activity.  In the US alone, a recent Department of Energy report revealed that investment in smart grid projects has resulted in almost $7 billion in total economic output, benefiting a wide variety of industrial sectors and creating 47,000 jobs. On the other hand, at a recent seminar at MIT on the Smart Grid, the panelists were enthusiastic about the long run but skeptical, or worried, about the short run, especially in the consumer sphere.

The panelists from NSTAR, Schneider Electric, Peregrine and FirstFuel said that 22% of load on the grid is consumer load. Smart Grid design and capability goals include the ability to measure, control and bill at the circuit level inside a home or business, but the Smart Grid may have only a small economic impact there. According to the panel, the best estimates of consumer savings from the Smart Grid are about $100 per year.

In the realm of small and medium buildings there are substantive potential benefits, but the panelists say not enough to justify an Energy Manager on staff to make the changes, do the engineering, the monitoring, and the implementation. There isn’t enough benefit for owners and tenants to focus on it, and if the building and all the tenants don’t act, the Smart Grid benefits will be hard to capture.

Given these challenges, what is the next step for the Smart Grid?  Some of the answer can be found in a recent article on how good ideas spread by Atul Gawande, a surgeon, writer, and public-health researcher.  In the article he compares the lightning-fast spread of the invention of ether-based anesthesia with the long, slow adoption of clean operating rooms, washed hands, fresh gowns and Listerine.  In brief, anesthesia solved a problem that doctors and hospitals had: screaming, thrashing patients and emotionally draining surgical procedures.  Doctors wanted a change.  With antiseptics and cleanliness, the dangers were unseen by the doctors, the remedies involved a lot of procedural changes, and they solved a problem only the patients had: survival. For antiseptics the change came only when, more than 30 years after the invention, German doctors took it upon themselves to treat surgery as science. Science needed precision and cleanliness, which meant white gowns, masks, antiseptics, fresh gloves, and clean rooms. After a long dormancy the now-obvious idea “went viral.”

Consumer-level investments and benefits from the Smart Grid don’t appear to be ready yet.  Regulators, providers, and distributors of power are still looking for the returns.  Looking to consumer solutions is a bit like starting the computer revolution in the late 1950s with personal computers. It didn’t and couldn’t happen that way.

In business, IT has gone through multiple eras in the way it transforms and then supports an enterprise – think mainframes, then client/server, then the internet, and now mobility, big data, and the cloud: I-SMAC.  Within each era, as with anything else in life, the first systems built are those with big payoffs.  For the Smart Grid those payoffs lie in the industrial, corporate, and municipal, state and federal forms of consumption.

When we talk to utility companies, the current Smart Grid focus area is data.  Just as in other industries like pharmaceuticals, the grids, transformers, meters and controllers already deployed are producing more data than companies can deal with, and it will get worse.  Newer equipment is being installed, or existing equipment is being outfitted with more and better sensors.  Data can be captured in smaller and smaller time increments, isolated to smaller and smaller grid footprints.  All of the analysis done produces more metadata and opportunities to learn yet more.  As a client puts it, utilities are not struggling with connectivity [to devices] as much as they are struggling with analysis of device-borne data.

In addition to the volume of data, there are myriad data analysis techniques that can be applied. Common predictive modeling techniques include classification trees and linear and logistic regression, which leverage underlying statistical distributions to estimate future outcomes. Newer, more CPU-intensive techniques, such as advances in neural networks, can mimic the way a biological nervous system, such as the brain, processes information.  Which to use, when, and why?  Utility executives say they have only started using a few of the many techniques at their disposal.
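As a point of reference, here is a minimal sketch comparing two of the techniques named above, logistic regression and a classification tree, on synthetic meter-style data. The features, labels, and thresholds are hypothetical illustrations, not utility data.

```python
# A minimal sketch (not a production model) comparing two common predictive
# modeling techniques on synthetic data; feature names are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 5_000
X = np.column_stack([
    rng.normal(240, 5, n),    # average voltage delivered
    rng.normal(30, 10, n),    # transformer temperature (C)
    rng.integers(0, 24, n),   # hour of day
])
# Hypothetical label: did the circuit experience an overload event?
y = ((X[:, 1] > 40) & (X[:, 2] > 17)).astype(int)

for name, model in [("logistic regression", LogisticRegression(max_iter=1000)),
                    ("classification tree", DecisionTreeClassifier(max_depth=4))]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {score:.3f}")
```

Which model wins depends on the shape of the underlying relationships, which is exactly the "which to use, when and why" question utilities are only beginning to work through.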

A small delay, or speeding up, of an energy buy can greatly change the profitability of that trade.  Small adjustments in the voltage delivered, by time of day, can greatly change the economics of delivery without, if done properly, materially affecting use.  Knowing which of these and other actions to take, precisely and specifically when, requires a significant expansion of analytic activities by utilities.  But it is well worth it.  Expect much more of this long before your electricity provider asks permission to alter the fan speed on your refrigerator.
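To give a feel for the arithmetic behind the voltage example, here is a back-of-the-envelope sketch. The load, price, and conservation-voltage-reduction factor are assumptions chosen for illustration, not figures from the panel.

```python
# Back-of-the-envelope sketch of the voltage-adjustment economics.
# All inputs are illustrative assumptions.
daily_load_mwh = 10_000        # energy delivered per day (assumed)
price_per_mwh = 50.0           # wholesale price, $/MWh (assumed)
voltage_reduction_pct = 1.0    # off-peak voltage reduction (assumed)
cvr_factor = 0.8               # % energy saved per % voltage reduction (assumed)

energy_saved_mwh = daily_load_mwh * (voltage_reduction_pct / 100) * cvr_factor
print(f"Energy saved: {energy_saved_mwh:.0f} MWh/day "
      f"(~${energy_saved_mwh * price_per_mwh:,.0f}/day)")
```

Even with small assumed percentages, the savings compound across a large service territory, which is why the analytics are worth the investment.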


Telematics Data – Changing The Insurance Underwriting and Actuarial Environment

Telematics, and specifically the usage-based data it generates, significantly improves the ability to rate and price automobile insurance by adding a deeper level of granularity to the data commonly used today.

Companies at the forefront of using telematics data are beginning to understand the value of its many indicators as they relate to policyholder driving behavior, and how that behavior, positive or negative, directly affects overall policy administration cost.

This advantage, though, also comes with a possible disadvantage – higher volumes of data being added to already burdened processing resources. A single vehicle generates approximately 2.6 MB of data per week.  If 50,000 auto policies are on the books, accumulating that data for a year results in roughly 6.8 TB.
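The arithmetic behind that figure is straightforward; a quick sketch (using decimal units, 1 TB = 1,000,000 MB):

```python
# Quick check of the telematics data-volume arithmetic above.
mb_per_vehicle_per_week = 2.6
policies = 50_000
weeks_per_year = 52

total_mb = mb_per_vehicle_per_week * weeks_per_year * policies
print(f"{total_mb / 1_000_000:.1f} TB per year")   # ~6.8 TB
```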

Pay How You Drive Data

Given that the use of telematics data from automobiles is on the rise in insurance companies, to be followed by telematics data generated by wireless sensors in personal and commercial use, a solution for processing huge volumes of data quickly is needed.

Most likely that solution is SAP HANA based: it processes the data and the analytics together in main memory, giving underwriters and actuaries a technological advantage for their business – real-time rating and pricing, a capability that doesn’t exist with traditional methods.
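As a toy illustration of what usage-based, real-time rating means, independent of any particular platform, the sketch below adjusts a base premium from a few telematics-derived indicators. The indicators, weights, and base premium are hypothetical and not drawn from any product.

```python
# Toy illustration of usage-based rating; indicators, weights, and the base
# premium are hypothetical assumptions for illustration only.
def usage_based_premium(base_premium: float,
                        hard_brakes_per_100mi: float,
                        night_driving_pct: float,
                        avg_speed_over_limit_mph: float) -> float:
    """Adjust a base premium using telematics-derived driving indicators."""
    factor = 1.0
    factor += 0.02 * hard_brakes_per_100mi      # harsher braking -> higher risk
    factor += 0.005 * night_driving_pct         # more night miles -> higher risk
    factor += 0.01 * avg_speed_over_limit_mph   # speeding -> higher risk
    return round(base_premium * factor, 2)

print(usage_based_premium(1000.0, hard_brakes_per_100mi=3,
                          night_driving_pct=10, avg_speed_over_limit_mph=2))
```

The point of an in-memory platform is to run this kind of calculation, and far richer models, against the incoming telematics stream as policies are quoted, rather than in overnight batch.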

Jim Janavich


The Big Data Challenge – What’s Your Point?

Recently I attended a seminar at MIT’s Enterprise Forum on data and analytics, Big Data, in the pharmaceutical and health care industries, and I learned a thing or two.  The panel included an investor and leaders in research and IT from Pfizer, AstraZeneca and The Broad Institute, a joint effort between Harvard University and MIT.

The moderator described how there are a number of success stories that relate to the harvesting of previously unknown or unmanaged data.  Still, there are many technical, human, and organizational challenges to widespread success. Unwisely, in his view, many are sitting on the sidelines waiting for a clear path to be sorted out instead of participating. In a bit of hyperbole he said, “If you don’t like change, you’re going to really hate being irrelevant.”

Everyone agreed that we are in the early days of big data and related analytics.  Whatever we think of the volume, velocity and variety of the data we’re dealing with, our knowledge of what it is and what to do with it is in its infancy.  That said, in the past few years of trying, the panelists have learned to distinguish between the technical issues of data volume and velocity and the human capital issue of data variety.  They believe that the large, constantly changing data universe will become increasingly manageable as our technologies catch up.

An Industry Note

Within the pharmaceutical and healthcare spheres per se, there are challenges involving creating knowledge and data about drugs or genomes and the willingness of others to pay for access to that knowledge, which comes at a cost. This is especially true with genomic testing. Normally you pay each time you run a blood test or a CAT scan; each test is different and the analysis relates to that test. With genomes, how do we create policy around data where you test once and the data are used and viewed many times by others?

The variety of data is something that must be dealt with by people. It comes in different forms (for example, structured vs. unstructured) from different sources, some from the analysis you just invented, and the uses and potentials are constantly changing. The panelists believe that our ability to understand, examine and use the variety of data is limited mostly by human skills, insight, experience and knowledge.

There was an extended discussion about the volume of data that got me thinking.  All agreed that we have more data available than we know what to do with, and each time we do an analysis we create more data.  Data volumes are already beyond our ability to store, manage, query, and analyze, and they are growing faster than our abilities grow.

For all practical purposes this means that data volumes are infinite.  Whatever our skill and technology scope, the volume of data exceeds it today and will do so for some time.  We have to keep trying to catch up, but understanding and analyzing all of our data will never be a productive goal.

The strategic differentiation in analytics will come from what my colleague Allan Frank describes as “answering outcome-based questions.” In the context of the panel’s observations, the skills and insights needed to address big data may well include technical data scientists, writers of algorithms and more. But success will certainly hinge on the ability to distill what business outcomes you want, why, and what you need to know in order to serve those outcomes. Our friend Bruce Rogow puts it perhaps more emphatically: he associates success with “defining your purpose.”

If you want strategic success in the area of big data and analytics, we recommend some familiar frameworks applied to this space:

  1. Whether you’re responsible for a small business unit or for an enterprise, understand your business vision.  If it is already prepared, get a copy.  Break it down into the strategic vectors and “do-wells” or, if you prefer, your critical success factors, and describe the business capabilities needed to succeed and the technology ecosystem (in this case, the data and analytics ecosystem) needed to support them.
  2. Start organizing and iterating 6-12 week cycles that the scaled agile world calls “release trains.”  Have a subset of the business narratives and the related segment of the ecosystem taken to the next level, designed and built.  At the end of this cycle you have a working environment that examines real data and produces real results (a minimal sketch of such a cycle appears after this list).
  3. Determine what about the effort was successful and what needed help, more data, more analysis, or a better-defined business purpose.  Define another analytical release train.  Do it again.
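Purely as an illustration of the cycle in steps 2 and 3, one way to track a release train is as a simple data structure. The field names and example values below are hypothetical, not prescribed by this framework.

```python
# Illustrative structure for tracking analytical "release trains"; the field
# names and example values are hypothetical.
from dataclasses import dataclass, field

@dataclass
class ReleaseTrain:
    weeks: int                            # typically 6-12
    business_narratives: list[str]        # the subset taken to the next level
    data_sources: list[str]               # related segment of the data ecosystem
    outcomes: list[str] = field(default_factory=list)  # real results produced
    lessons: list[str] = field(default_factory=list)   # feeds the next train

train_1 = ReleaseTrain(
    weeks=8,
    business_narratives=["reduce customer churn"],
    data_sources=["billing history", "support call logs"],
)
# Run the cycle, record outcomes and lessons, then define train_2 from them.
```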

Doug Brockway
doug.brockway@returnonintelligence.com
