Category Archives: Technology

Kicking the Copier Won’t Fix Your Problem

GUEST POST from John Bessant

Have you ever felt the urge to kick the photocopier? Or worse? That time when you desperately needed to make sixty copies of a workshop handout five minutes before your session began. Or when you needed a single copy of your passport or driving license, the only way to prove your identity to the man behind the desk who was about to refuse your visa application. Or remember the awful day when you were struggling to print the boarding passes for your long-overdue holiday, and ended up paying way over the odds at the airport?

The copiers may change, the locations and contexts may differ, but underneath is one clear unifying thread: the machines are out to get you. Perhaps it's just random failure and you are simply the unlucky one who keeps getting caught. Or maybe it's more serious: perhaps they now come fitted with an urgency sensor which detects how badly you need the copy and adjusts the machine's behavior accordingly by refusing to perform.

Whatever the trigger, you can be sure it won't be a simple, easy-to-fix error like 'out of paper' which you just might be able to do something about. No, the kind of roadblock these fiendish devices are likely to hurl onto your path will be couched in arcane language displayed on the interface as 'Error code 3b76 — please consult technician'.

Given the number of photocopiers in the world, and the fact that we are still far from being a paperless society in spite of our digital aspirations, it's a little surprising that the law books don't actually contain a section on xeroxicide — the attempted or actual infliction of terminal damage on these machines.

Help is at hand. Because whilst we may still have the odd close and not very enjoyable encounter with these devices, the reality is that they are getting better all the time. Not only are they adding a bewildering range of functionality, so that you can do almost anything with them apart from cook your breakfast, but they are also becoming more reliable. And that is, in large measure, down to something called a community of practice: one of the most valuable resources we have in the innovation management toolkit.

The term was originally coined by Etienne Wenger and colleagues, who used it to describe "groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly." It's a simple enough idea, based on the principle that we learn some things better when we act together.

Shared learning helps, not least in those situations where knowledge is not necessarily explicit and easily there for the finding. It's a little like mining for precious metals; the really valuable stuff is often invisible inside clumps of otherwise useless rock. Tiny flecks on the surface might hint that something valuable is contained within, but it's going to take quite a lot of processing to extract it in shiny, pure form.

Knowledge is the same; it's often not within easy reach or in plain sight. Instead it's what Michael Polanyi called tacit, as opposed to explicit, knowledge. We sometimes can't even speak about it; we just know it because we do it.

Which brings us back to our photocopiers. And to the work of Julian Orr, who worked in the 1990s as a field service engineer in a large corporation specializing in office equipment. He was also an ethnographer, interested in understanding how communities of people interact, rather as an anthropologist might study lost tribes in the Amazon. Only his fieldwork was in California, down the road from Silicon Valley, and his subject was how work was organized.

He worked with the customer service teams, the roving field service engineers who criss-cross the country trying to fix the broken machine which you've just encountered with its 'Error code 3b76 — please consult technician' message. Assuming you haven't already disassembled the machine forcibly, they are the ones who patiently diagnose and repair it so that it once again behaves in a sweetly obedient and obliging fashion.

They do this through deploying their knowledge, some of which is contained in their manuals (or these days on the tablets they carry around). But that's only the explicit knowledge, the accumulation of what's known, the FAQs which represent the troubleshooting solutions the designers developed when creating the machines. Behind this is a much less well-defined set of knowledge which comes from encountering new problems in the field and working out solutions to them — innovating. Over time this tacit knowledge becomes explicit and shared, and eventually finds its way into an updated service manual or onto the new version of the training course.

Orr noticed that in the informal interactions of the team, the coming together and sharing of experiences, a great deal of knowledge was being exchanged. And, importantly, these conversations often led to new problems being aired and solved, and solutions being shared. These were not formal meetings; they would often happen in temporary locations, like a Monday morning meet-up for breakfast before the teams went their separate ways on their service calls.

You can imagine the conversations taking place across the coffee and doughnuts, ranging from catching up on the weekend experience, discussing the sports results, recounting stories about recalcitrant offspring and so on. But woven through would also be a series of exchanges about their work — complaining about a particular problem that had led to one of them getting toner splashed all over their overalls, describing proudly a work-around they had come up with, sharing hacks and improvised solutions.

There'd be a healthy skepticism about the company's official repair manual and a pride in keeping the machines working in spite of their design. More importantly, the knowledge each of them encountered through these interactions would be elaborated, amplified and shared across the community. And much of it would eventually find its way back to the designers and the engineers responsible for the official manual.

Orr's work influenced many people, including John Seely Brown (who went on to be Chief Scientist at Xerox) and Paul Duguid, who further explored this social dimension of knowledge creation and capture. Alongside formal research and development tools, the storytelling across communities of practice like these becomes a key input to innovation, particularly the long-haul incremental improvements which lie at the heart of effective performance.

This social dimension of knowledge creation is a theme which the Japanese researchers Ikujiro Nonaka and Hirotaka Takeuchi formalised in their seminal book about 'the knowledge-creating company'. They offered a simple model through which tacit knowledge is made explicit, shared and eventually embedded into practice, a process which helped explain the major advantages of engaging a workforce in high-involvement innovation. The systems which became today's widely used 'lean thinking' model have their roots in this process, with teams of workers acting as communities of practice.

Their model has four key stages in a recurring cycle:

  • Socialization — in which empathy and shared experiences create tacit knowledge (for example, the storytelling in our field service engineer teams)
  • Externalization — in which the tacit knowledge becomes explicit, converted into ideas and insights which others can work with
  • Combination — in which the externalized knowledge is organized and added to the stock of existing explicit knowledge — for example embedding it in a revised version of the manual
  • Internalization — in which the new knowledge becomes part of ‘the way we do things around here’ and the platform for further journeys around the cycle

CoPs are of enormous value in innovation, something which has been recognized for a long time. Think back to the medieval guilds; their system was based on sharing practice and building a community around that knowledge exchange process. CoPs are essentially 'learning networks'. They may take the form of an informal social group meeting up, where learning is a by-product of being together; that's the model which best describes our photocopier engineers and many other social groups at work. Members of such groups don't all have to be from the same company; much of the power of industrial clusters lies not only in the collective efficiency they achieve but also in the way they share and accumulate knowledge.

Small firms co-operate to create capabilities far beyond the sum of their parts — and communities of practice form an excellent alternative to having formal R&D labs. John Seely Brown's later research looked at, for example, the motorcycle cluster around the city of Chongqing in China, whose products now dominate the world market. Success here is in no small measure due to the knowledge sharing which takes place within a geographically close community of practice.

CoPs can also be formally 'engineered', created for the primary purpose of sharing knowledge and improving practice. This can be done in a variety of ways — for example by organizing sector-level opportunities and programs to share experience and move up an innovation trajectory. The model was used very successfully in the North Sea oil industry, first to enable cost-reduction and efficiency improvements over a ten-year period in the CRINE (Cost Reduction Initiative for the New Era) program. That program delivered cumulative savings of over 30% on new project costs, and a similar model was then deployed to explore opportunities for the sector's services elsewhere in the world as the original North Sea work ran down.

It can work inside a supply network, where overall performance on key criteria like cost, quality and delivery time depends on fast diffusion of innovation amongst all its members. One of Toyota's key success factors has been the way in which it mobilizes learning networks across its supplier base, and the model has been widely applied in other sectors, using communities of practice as a core tool.

CoPs have been used to help small firms share and learn around some of the challenges in growth through innovation — for example in the highly successful Profitnet program in the UK. It’s a model which underpins the start-up support culture where expert mentoring can be complemented by teams sharing experiences and trying to help each other in their learning journeys towards successful launch. And it’s being used extensively in the not-for-profit sector where working at the frontier of innovation to deal with some of the world’s biggest humanitarian and development challenges can be strengthened by sharing insights and experiences through formal communities of practice.

At heart the idea of a community of practice is simple though it deals with a complex problem. Innovation is all about knowledge creation and deployment and we’ve learned that this is primarily a social process. So, working with the grain of human interaction, bringing people together to share experiences and build up knowledge collectively, seems an eminently helpful approach.

Which suggests that next time you are thinking of taking a chainsaw to the photocopier you might like to pause — and maybe channel your energies into thinking of ways to innovate out of the situation. A useful first step might be to find others with similar frustrations and mobilize your own community of practice.

You can find a podcast version of this here

If you’d like more songs, stories and other resources on the innovation theme, check out my website here

And if you’d like to learn with me take a look at my online course here

Image credit: FreePik

Unlocking the Power of Cause and Effect

GUEST POST from Greg Satell

In 2011, IBM's Watson system beat the best human players on the game show Jeopardy! Since then, machines have shown that they can outperform skilled professionals in everything from basic legal work to diagnosing breast cancer. It seems that machines just get smarter and smarter all the time.

Yet that is largely an illusion. While even a very young human child understands the basic concept of cause and effect, computers rely on correlations. In effect, while a computer can associate the sun rising with the day breaking, it doesn’t understand that one causes the other, which limits how helpful computers can be.

That’s beginning to change. A group of researchers, led by artificial intelligence pioneer Judea Pearl, are working to help computers understand cause and effect based on a new causal calculus. The effort is still in its nascent stages, but if they’re successful we could be entering a new era in which machines not only answer questions, but help us pose new ones.

Observation and Association

Most of what we know comes from inductive reasoning. We make some observations and associate those observations with specific outcomes. For example, if we see animals going to drink at a watering hole every morning, we would expect to see them at the same watering hole in the future. Many animals share this type of low-level reasoning and use it for hunting.

Over time, humans learned how to store these observations as data and that’s helped us make associations on a much larger scale. In the early years of data mining, data was used to make very basic types of predictions, such as the likelihood that somebody buying beer at a grocery store will also want to buy something else, like potato chips or diapers.

The achievement over the last decade or so is that advancements in algorithms, such as neural networks, have allowed us to make much more complex associations. To take one example, systems that have observed thousands of mammograms have learned to identify the ones that show a tumor with a very high degree of accuracy.

However, and this is a crucial point, the system that detects cancer doesn’t “know” it’s cancer. It doesn’t associate the mammogram with an underlying cause, such as a gene mutation or lifestyle choice, nor can it suggest a specific intervention, such as chemotherapy. Perhaps most importantly, it can’t imagine other possibilities and suggest alternative tests.

Confounding Intervention

The reason that correlation is often very different from causality is the presence of something called a confounding factor. For example, we might find a correlation between high readings on a thermometer and ice cream sales and conclude that if we put the thermometer next to a heater, we can raise sales of ice cream.

I know that seems silly, but problems with confounding factors arise in the real world all the time. Data bias is especially problematic. If we find a correlation between certain teachers and low test scores, we might assume that those teachers are causing the low test scores when, in actuality, they may be great teachers who work with problematic students.

Another example is the high degree of correlation between criminal activity and certain geographical areas, where poverty is a confounding factor. If we use zip codes to predict recidivism rates, we are likely to give longer sentences and deny parole to people because they are poor, while those with more privileged backgrounds get off easy.

These are not at all theoretical examples. In fact, they happen all the time, which is why caring, competent teachers can, and do, get fired for those particular qualities and people from disadvantaged backgrounds get mistreated by the justice system. Even worse, as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it’s so important that we design our systems to be auditable, explainable and transparent.
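To make the thermometer example concrete, here is a minimal sketch in Python; the data is synthetic and every number in it is invented purely for illustration. It shows how a hidden common cause (temperature) makes two variables strongly correlated even though forcing one of them, the thermometer reading, does nothing to the other.

```python
# A minimal, synthetic illustration of a confounder: temperature drives both
# the thermometer reading and ice cream sales, so the two correlate strongly,
# yet intervening on the thermometer does not move sales at all.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

temperature = rng.normal(22, 6, n)                     # hidden common cause (deg C)
thermometer = temperature + rng.normal(0, 0.5, n)      # reading tracks temperature
sales = 30 + 5 * temperature + rng.normal(0, 10, n)    # sales also track temperature

print("correlation(thermometer, sales):",
      round(np.corrcoef(thermometer, sales)[0, 1], 2))  # strongly positive

# "Intervention": put the thermometer next to a heater so it reads 40 degrees.
# The reading changes, but sales are still generated by temperature alone.
forced_reading = np.full(n, 40.0)
sales_after = 30 + 5 * temperature + rng.normal(0, 10, n)

print("mean thermometer reading after intervention:", forced_reading.mean())
print("mean sales before:", round(sales.mean(), 1))
print("mean sales after :", round(sales_after.mean(), 1))  # essentially unchanged
```

A purely correlational model fitted to this data would happily predict sales from thermometer readings, and it would see nothing wrong with the heater trick; only a model that knows which way the causal arrows point can tell the difference.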

Imagining A Counterfactual

Another confusing thing about causation is that not all causes are the same. Some causes are sufficient in themselves to produce an effect, while others are necessary, but not sufficient. Obviously, if we intend to make some progress we need to figure out what type of cause we’re dealing with. The way to do that is by imagining a different set of facts.

Let's return to the example of teachers and test scores. Once we have controlled for problematic students, we can begin to ask if lousy teachers are enough to produce poor test scores or if there are other necessary causes, such as poor materials, decrepit facilities, incompetent administrators and so on. We do this by imagining counterfactuals, such as "What if there were better materials, facilities and administrators?"
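As a rough sketch of what that "controlling for" step looks like, here is a small Python example with invented numbers (two hypothetical teachers and two student backgrounds). It is a simplified stand-in for the formal adjustment that Pearl's causal calculus makes precise, but it shows how a comparison can flip once the confounder is held fixed.

```python
# Minimal sketch (invented numbers): a naive comparison vs. one that controls
# for a confounder (student background).

# (teacher, student_background) -> (number of students, average test score)
data = {
    ("A", "well_prepared"): (80, 75.0),
    ("A", "struggling"):    (20, 55.0),
    ("B", "well_prepared"): (20, 78.0),
    ("B", "struggling"):    (80, 58.0),
}

def naive_average(teacher):
    """Average score ignoring which students the teacher actually gets to teach."""
    cells = [(n, avg) for (t, _), (n, avg) in data.items() if t == teacher]
    total = sum(n for n, _ in cells)
    return sum(n * avg for n, avg in cells) / total

def adjusted_average(teacher):
    """Average score after weighting each background group equally, i.e. asking
    'what if both teachers taught the same mix of students?'"""
    backgrounds = ("well_prepared", "struggling")
    return sum(data[(teacher, b)][1] for b in backgrounds) / len(backgrounds)

for teacher in ("A", "B"):
    print(teacher,
          "naive:", round(naive_average(teacher), 1),
          "adjusted:", round(adjusted_average(teacher), 1))
# Naive averages suggest A (71.0) outperforms B (62.0); once student background
# is controlled for, B (68.0) edges out A (65.0), and B scores higher in both groups.
```

The naive averages reward the teacher who happened to be given the easier classes; once student background is held fixed, the ranking reverses, which is exactly the counterfactual question ("what if both teachers taught the same mix of students?") that a purely correlational system never gets to ask.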

Humans naturally imagine counterfactuals all the time. We wonder what would be different if we took another job, moved to a better neighborhood or ordered something else for lunch. Machines, however, have great difficulty with things like counterfactuals, confounders and other elements of causality because there’s been no standard way to express them mathematically.

That, in a nutshell, is what Judea Pearl and his colleagues have been working on over the past 25 years, and many believe that the project is finally ready to bear fruit. Combining humans' innate ability to imagine counterfactuals with machines' ability to crunch almost limitless amounts of data can really be a game changer.

Moving Towards Smarter Machines

Make no mistake, AI systems’ ability to detect patterns has proven to be amazingly useful. In fields ranging from genomics to materials science, researchers can scour massive databases and identify associations that a human would be unlikely to detect manually. Those associations can then be studied further to validate whether they are useful or not.

Still, the fact that our machines don't understand something as basic as 'thermometers don't increase ice cream sales' limits their effectiveness. As we learn how to design our systems to detect confounders and imagine counterfactuals, we'll be able to evaluate not only the effectiveness of interventions that have been tried, but also those that haven't, which will help us come up with better solutions to important problems.

For example, in a 2019 study the Congressional Budget Office estimated that raising the national minimum wage to $15 per hour would result in a decrease in employment of anywhere from zero to four million workers, based on a number of observational studies. That's an enormous range. However, if we were able to identify and mitigate confounders, we could narrow down the possibilities and make better decisions.

While still nascent, the causal revolution in AI is already underway. McKinsey recently announced the launch of CausalNex, an open source library designed to identify cause and effect relationships in organizations, such as what makes salespeople more productive. Causal approaches to AI are also being deployed in healthcare to understand the causes of complex diseases such as cancer and evaluate which interventions may be the most effective.

Some look at the growing excitement around causal AI and scoff that it is just common sense. But that is exactly the point. Our historic inability to encode a basic understanding of cause and effect relationships into our algorithms has been a serious impediment to making machines truly smart. Clearly, we need to do better than merely fitting curves to data.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Why Amazon Wants to Sell You Robots

GUEST POST from Shep Hyken

It was recently announced that Amazon.com would be acquiring iRobot, the maker of the Roomba vacuum cleaner. There are still some “hoops” to jump through, such as shareholder and regulatory approval, but the deal looks promising. So, why does Amazon want to get into the vacuum cleaner business?

It doesn’t!

At least not for the purpose of simply selling vacuum cleaners. What it wants to do is to get further entrenched into the daily lives of its customers, and Amazon has done an excellent job of just that. There are more than 200 million Amazon Prime members, and 157.4 million of them are in the United States. According to an article in USA Today, written by David Chang of the Motley Fool, Amazon Prime members spend an average of $1,400 per year. Non-Amazon Prime members spend about $600 per year.

Want more numbers? According to a 2022 Feedvisor survey of 2,000-plus U.S. consumers, 56% visit Amazon daily or at least a few times a week, which is up from 47% in 2019. But visiting isn’t enough. Forty-seven percent of consumers make a purchase on Amazon at least once a week. Eight percent make purchases almost every day.

Amazon has become a major part of our lives. But how does a vacuum cleaner company fit into this? It doesn't really, unless it's iRobot's vacuum cleaner. A little history about iRobot might shed light on why Amazon is interested in this acquisition.

iRobot was founded in 1990 by three members of MIT’s Artificial Intelligence Lab. Originally their robots were used for space exploration and military defense. About ten years later, they moved into the consumer world with the Roomba vacuum cleaners. In 2016 they spun off the defense business and turned their focus to consumer products.

The iRobot Roomba is a smart vacuum cleaner that does the cleaning while the customer is away. The robotic vacuum cleaner moves around the home, working around obstacles such as couches, chairs, tables, etc. Over time, the Roomba, which has a computer with memory fueled by AI (artificial intelligence), learns about your home. And that means Amazon gains the capability of learning about your home.

This is not all that different from how Alexa, Amazon’s smart device, learns about customers’ wants and needs. Just as Alexa remembers birthdays, shopping habits, favorite toppings on pizza, when to take medicine, what time to wake up and much more, the “smart vacuum cleaner” learns about a customer’s home. This is a natural extension of the capabilities found in Alexa, thereby giving Amazon the ability to offer better and more relevant services to its customers.

To make this work, Amazon will gain access to customers' homes. No doubt, some customers may be uncomfortable with Amazon having that type of information, but let's look at this realistically. If you are (or have been) one of the hundreds of millions of Amazon customers, it already has plenty of information about you. And if privacy is an issue, there will assuredly be regulations for Amazon to comply with. Amazon already understands its customers better than almost anyone. This is just a small addition to what it already knows, and it provides greater capability to deliver a very personalized experience.

And that is exactly what Amazon plans to do. Just as it has incorporated Alexa, Ring and eero Wi-Fi routers, the Roomba will add to the suite of connected capabilities from Amazon that makes life easier and more convenient for its customers.

If you take a look at the way Amazon has moved from selling books to practically everything else in the retail world, and you recognize its strategy to become part of the fabric of its customers’ lives, you’ll understand why vacuum cleaners, specifically iRobot’s machines, make sense.

This article originally appeared on Forbes

Image Credit: Shep Hyken

Is Digital Different?

GUEST POST from John Bessant

‘Now the chips are down…’

‘The robots are coming…’

‘Digitalize or die!’

There’s no shortage of scary headlines reminding us of the looming challenge of digital transformation. The message is clear. On the one hand if we don’t climb aboard the digital bandwagon we’ll be left behind in a kind of late Stone Age, slowly crumbling to dust while the winds of change blow all around us. On the other we’re facing some really big questions — about employment, skills, structures, the whole business model with which we compete. If we don’t have a clear digital strategy to deal with these we’re going to be in trouble.

And it’s not just the commercial world which is having to face up to these questions; the same is true in the public sector and in the not-for-profit world. The digital storm has arrived.

There aren't any easy solutions to this, which explains why so many conferences now have the word 'digital' scrawled across their strap-lines. They provide focal points, create tents within which people can huddle and talk together, trying to work out exactly how they are going to manage this challenge. I've spent the past couple of weeks attending two of them — 'Innovating in the digital world' was the banner under which the ISPIM (the International Society for Professional Innovation Management) community gathered, while 'Leading digital transformation' brought EURAM (the European Academy of Management) together. Close to a thousand people gathered for more than just a welcome post-Covid reunion; conferences like these are a good indication of the scale of the questions which digital transformation raises.

A Pause for Thought

But look again at those headlines at the start of this piece. They are actually newspaper cuttings from the 1980s, some forty years ago. Anxiety about the transformative potential of digital technology was running pretty high back then, and for similar reasons. And yet those dire predictions of disaster and massive structural upheaval haven't quite come to pass. Somehow we've made it through: we haven't had mass unemployment, we haven't been replaced by intelligent machines, and while income distribution remains very unequal, the causes of that are not down to technological change.

Which is not to say that nothing has changed. Today's world is radically different along so many dimensions, and not everyone has made it through the digital crisis. Plenty of organizations have failed, unable to come to terms with the new technology, whilst others have emerged from nowhere to dominate the global landscape. (It's worth reflecting that most of the FAANG corporations, Facebook/Meta, Amazon, Apple, Netflix and Google, were not even born when those headlines were written.) So, we've had change, yes, but it's not necessarily been destructive or competence-destroying change.

If we're serious about managing the continuing challenge then it's worth taking a closer look at just what digital innovation involves. Is it really so revolutionary and transformative? The answer is mixed. In terms of speed of arrival it's been a very slow-paced change. Digital innovation isn't new. Despite the hype around the disruptive potential of this technological wave, the reality is that it's been building for at least 70 years, ever since the invention of the transistor at Bell Labs in 1947. And there's a good argument for dating it back a further fifty years, to when John Fleming and Lee de Forest began playing around with valves and enabling simple electronic circuits.

The idea of programmable control was around another hundred years before that; early on in the Industrial Revolution we saw mechanical devices increasingly substituting for human skill and intervention. Textile manufacturers were able to translate complex designs into weaving instructions for their looms through the use of punched card systems, an innovation pioneered by Joseph Marie Jacquard. Not for nothing did the Luddites worry about the impact technology might have on their livelihoods. And we should remember that it was in the nineteenth, not the twentieth, century that the computer first saw the light of day, in the form of the difference and analytical engines developed by Charles Babbage and Ada Lovelace.

In fact, although there has been a rapid acceleration in the application of digital technology over the past thirty years, in many ways it has more in common with a number of other 'revolutions', like steam power or electricity, where the pattern is what Andrew Hargadon calls 'long fuse, big bang'. That is to say, the process towards radical impact is slow, but when things converge there can be significant waves of change flowing from it.

Riding the Long Waves of Change

Considerable interest was shown back in the 1980s (when the pace of the 'IT revolution' appeared to be accelerating) in the ideas of a Russian economist, Nikolai Kondratiev. He had observed patterns in economic activity which seemed to have a long period (long waves) and which were linked to major technological shifts. The pattern suggested that major enabling technologies with widespread application potential, like steam power or electricity, could trigger significant movements in economic growth. The model was applied to information technology, and in particular Chris Freeman and Carlota Perez began developing the approach as a lens through which to explore major innovation-led changes. They argued that the role of technology as a driver had to be matched by a complementary change in social structures and expectations, a configuration which they called the 'techno-economic paradigm'.

Importantly, the upswing of such a change would be characterised by attempts to use the new technologies in ways which mainly substituted for things which already happened, improving them and enhancing productivity. But at a key point the wave would break, and completely new ways of thinking about and using the technologies would emerge, accelerating growth.

A parallel can be drawn to research on the emergence of electricity as a power source; for a sustained period it was deployed as a replacement for the large central steam engines in factories. Only when smaller electric motors were distributed around the factory did productivity growth rise dramatically. Essentially the move involved a change in perspective, a shift in paradigm.

Whilst the long wave model has its critics, it offers a helpful lens through which to see the rise of digital innovation. In particular, the earlier claims for revolutionary status seemed unfounded, reflecting the 'substitution' mode of an early techno-economic paradigm. Disappointment with the less-than-dramatic results of investing in the new wave slowed its progress — something which could be clearly observed in the collapse of the Internet 'bubble' around 2000. The revolutionary potential of the underlying technologies was still there, but it took a while to kick the engine back into life; this time the system-level effects are beginning to emerge and there is a clearer argument for seeing digital innovation as transformative across all sectors of the economy.

This idea of learning to use the new technology in new ways underpins much of the discussion of what is sometimes called the ‘productivity paradox’ — the fact that extensive investment in new technologies does not always seem to contribute to expected rises in productivity. Over time the pattern shifts but — as was the case with electric power — the gap between introduction and understanding how to get the best out of new technology can be long, in that case over fifty years.

Strategy Matters

This model underlines the need for strategy — the ability to ride out the waves of technological change, using them to advantage rather than being tossed and thrown by them, finally ending up in pieces on a beach somewhere. Digital technology is like any other set of innovations; it offers enormous opportunities, but we need to think hard about how we are going to manage them. Riding this particular wave is going to stretch our capabilities as innovation managers; staying on the board will take a lot of skill and not a little improvisation in our technique.

It's easy to get caught up in the flurry of dramatic words used to describe digital possibilities, but we shouldn't forget that underneath them the core innovation process hasn't changed. It's still a matter of searching for opportunities, selecting the most promising, implementing them and capturing value from digital change projects. What we have to manage doesn't change, even though the projects may themselves be significant in their impact and scalable across large domains. There's plenty of evidence for that; whilst there have been notable examples of old-guard players who have had to retire into bankruptcy or disappearance (think Kodak, Polaroid, Blockbuster), many others continue to flourish in their new digital clothes, their products and services enhanced, their processes revived and revitalised through strategic use of digital technologies.

If the conferences I’ve been attending are a good barometer of what’s happening then there’s a lot behind this. Organizations of all shapes and sizes are now deploying new digitally driven product and service models and streamlining their internal operations to enable efficient and effective global reach. If anything the Covid-19 pandemic has forced an acceleration in these trends, pushing us further and faster into a digital world. And it’s working in the public and third sector too; for example the field of humanitarian innovation has been transformed by the use of mobile apps, Big Data and maker technologies like 3D printing. Denmark even has a special ministry within government tasked with delivering digitally-based citizen innovation.

Digital Innovation Management

Perhaps what’s really changing — and challenging — is not the emerging set of innovations but rather the way we need to approach creating and delivering them — the way we manage innovation. And here the case for rethinking is strong; continuing with the old tried and tested routines may not get us too far. Instead we need innovation model innovation.

Take the challenge of search — how do we find opportunities for innovation in a vast sea of knowledge? Learning the new skills of 'open innovation' has been high on the innovation management agenda for organizations since Henry Chesbrough first coined the term nearly twenty years ago. We know that in a knowledge-rich world 'not all the smart people work for us', and we've developed increasingly sophisticated and effective tools for helping us operate in this space.

Digital technologies make this much faster and easier to do. Internet searches allow us to access rich libraries of knowledge at the click of a mouse, while social media and networks enable us to tap into rich and varied experience and to interact with it, co-creating solutions. 'Recombinant' innovation tools fuelled by machine-learning algorithms scour the vast mines of knowledge which the patent system represents and dig out unlikely and fruitful new combinations, bridging different application worlds.

Broadcast search allows us to crowdsource the tricky business of sourcing diverse ideas from multiple perspectives. And collaboration platforms allow us to work with that crowd, harnessing collective intelligence and drawing in knowledge, ideas and insights from employees, customers, suppliers and even competitors.

Digital innovation management doesn’t stop there; it can also help with the challenge of selection as well. We can use that same crowd to help focus on interesting and promising ideas, using idea markets. Think Kickstarter and a thousand other crowdfunding platforms and look at the increasing use of such approaches within organizations trying to sharpen up their portfolio management. Simulation and exploration technologies enable us to delay the freeze — to continue exploring and evaluating options for longer, assembling useful information on which to base our final decision about whether or not to invest.

And digital techniques blur the lines around implementation, bringing ideas to life. Instead of having to make a once-and-for-all commitment and then stand back and hope, we open up a range of choices. We can still kill off the project which isn't working and has no chance — but we can also adapt in real time, pivoting around an emerging solution to sharpen it, refine it and help it evolve. Digital twins enable us to probe and learn, stress-testing ideas to make sure they will work. And the whole 'agile innovation' philosophy stresses early testing of simple prototypes — 'minimum viable products' — followed by pivoting. Innovation becomes less dependent on a throw of the dice and a lot of hope; instead it is a guided series of experiments hunting for optimum solutions.

Capturing value is all about scale, and the power of digital technologies is that they enable us to 'turbocharge' this phase. The physical limits on expansion and access are removed for many digital products and services, and even physical supply chains and logistics networks can be enhanced with these approaches. Networks allow us not only to spread the word via multiple channels but also to tap into the social processes of influence which shape diffusion. Innovation adoption is still heavily influenced by key opinion leaders, but now those influencers can be mobilised much more rapidly and extensively.

The story of Tupperware is a reminder of this effect; it took a passionate woman (Brownie Wise) building a social system by herself in the 1950s to turn a great product into one of the most recognised in the world. Today’s social marketing technologies can draw on powerful tools and infrastructures from the start.

In the same way, assembling complementary assets is essential — the big question is 'who else/what else do we need to move to scale?' In the past this was a process of finding and forming a series of relationships and carefully nurturing them to create an ecosystem. Today's platform architectures and business models enable such networks to be quickly assembled and operated in digital space. Amazon didn't invent remote retailing; that model emerged a century ago, with the likes of Sears, Roebuck and Co. painstakingly building their systems. But Amazon's ability to quickly build and scale, and then to diversify into new areas deploying the same core elements, depends on a carefully thought-out digital architecture.

Digital is Different?

So yes, digital is different in terms of the radically improved toolkit with which we can work in managing innovation. Central to this is strategy — being clear where and why we might use these tools and what kind of organization we want to create. And being prepared to let go of our old models; even though they are tried and tested and have brought us a long way, the reality is that we need innovation model innovation. That's at the heart of the concept of 'dynamic capability' — the ability to configure and reconfigure our processes to create value from ideas.

The idea of innovation management routines is a double-edged sword. On the one hand, routines enable us to systematise and codify the patterns of behaviour which help us innovate — how we search, select, implement and so on. That helps us repeat the innovation trick and means we can build structures, processes and policies to strengthen our innovation capability. On the other, we not only need to review and hone these routines, we also need the capacity to step back and challenge them, and the courage to change or even abandon them if they are no longer appropriate. That's the real key to successful digital transformation.


If you’re interested in more innovation stories please check out my website here
And if you’d like to listen to a podcast version you can find it here
Or follow my online course here

Image credits: FreePik

Challenges of Artificial Intelligence Adoption, Dissemination and Implementation

GUEST POST from Arlen Meyers, M.D.

Dissemination and Implementation Science (DIS) is a growing research field that seeks to inform how evidence-based interventions can be successfully adopted, implemented, and maintained in health care delivery and community settings.

Here is what you should know about dissemination and implementation.

Sickcare artificial intelligence products and services have a unique set of barriers to dissemination and implementation.

Every sickcare AI entrepreneur will eventually be faced with the task of finding customers willing and able to buy and integrate the product into their facility. But not every potential customer or segment is the same.

There are differences in:

  1. The governance structure
  2. The process for vetting and choosing a particular vendor or solution
  3. The makeup of the buying group and decision makers
  4. The process customers use to disseminate and implement the solution
  5. Whether or not they are willing to work with vendors on pilots
  6. The terms and conditions of contracts
  7. The business model of the organization when it comes to working with early-stage companies
  8. How stakeholders are educated and trained
  9. When, how and which end users and stakeholders have input into the decision
  10. The length of the sales cycle
  11. The complexity of the decision-making process
  12. Whether the product is a point solution or platform
  13. Whether the product can be used throughout all parts, or just a few parts, of the sickcare delivery network
  14. A transactional approach vs. a partnership and future-development one
  15. The service after the sale arrangement

Here is what Sales Navigator won’t tell you.

Here is why ColdLinking does not work.

When it comes to AI product marketing and sales, when you have seen one successful integration, you have seen one process for making it happen; the success of the dissemination and implementation that delivers the promised results will vary from one place to the next.

Do your homework. One size does not fit all.

Image credit: Pixabay

The Resilience Conundrum

From the Webb Space Telescope to Dishwashing Liquids

GUEST POST from Pete Foley

Many of us have been watching the spectacular photos coming from the Webb Space Telescope this week. It is a breathtaking example of innovation in action. But what grabbed my attention almost as much as the photos was the challenge of deploying it at the L2 Lagrange point. That not only required extraordinary innovation in core technologies, but also building unprecedented resilience into the design. Deploying a technology a million miles from Earth leaves little room for mistakes, or the opportunity for the kind of repairs that rescued the Hubble mission. Obviously the Webb team were acutely aware of this, and were painstaking in identifying and pre-empting 344 single points of failure, any one of which had the potential to derail the mission. The result is a triumph. But it is not without cost. Anticipating and protecting against those potential failures played a significant part in taking Webb billions over budget, and years behind its original schedule.

Efficiency versus Adaptability: Most of us will never face quite such an amazing but daunting challenge, or have the corresponding time and budget flexibility. But as an innovation community, and a planet, we are entering a phase of very rapid change as we try to quickly address really big issues such as climate change and AI. And the speed, scope and interconnected complexity of that change make it increasingly difficult to build resilience into our innovations. This is compounded because the need for speed and efficiency often drives us towards narrow focus and increased specialization. That focus can help us move quickly, but we know from nature that the first species to go extinct in the face of environmental change are often the specialists, who are less able to adapt to their changing world. Efficiency often reduces resilience; it's another conundrum.

Complexity, Systems Effects and Collateral Damage. To pile on the challenges a little, the more breakthrough an innovation is, the less we understand about how it interacts at a systems level, or the secondary effects it may trigger. And secondary failures can be catastrophic. Takata airbags and the batteries in Samsung Galaxy phones were enabling, not core, technologies, but they certainly derailed the core innovations.

Designed Resiliency. One answer to this is to be more systematic about designing resilience into innovation, as the Webb team were. We may not be able to reach the equivalent of 344 points of failure, but we can be systematic about scenario planning, anticipating failure, and investing up front in buffering ourselves against risk. There are a number of approaches we can adopt to achieve this, which I’ll discuss in detail later.

The Resiliency Conundrum. But first let's talk just a little more about the resilience conundrum. For virtually any innovation, time and money are tight, while taking time to anticipate potential failures is often time-consuming and expensive. Worse, it rarely adds direct, or at least marketable, value. And when it does work, we often don't see the issues it prevents; we only notice them when resiliency fails. It's a classic trade-off, and one we face at all levels of innovation. For example, when I worked on dishwashing liquids at P&G, a slightly less glamorous field than space exploration, an enormous amount of effort went into maintaining product performance and stability under extreme conditions. Product could be transported in freezing or hot temperatures, and had to work in extremely hard or soft water. These conditions weren't typical, but they were possible. And the cost of protecting against these outliers was often disproportionately high.

And there again lies the trade-off. Design in too much resiliency, and we become inefficient and/or uncompetitive. But too little, and we risk a catastrophic failure like the Takata airbags. We need to find a sweet spot. And finding it is further complicated because we are entering an era of innovation and disruption where we are making rapid changes to multiple systems in parallel. Climate change is driving major structural change in energy, transport and agriculture, and advances in computing are changing how those systems are managed. With dishwashing, we made changes to the formula, but the conditions of use remained fairly constant, meaning we were pretty good at extrapolating what the product would have to navigate. The same applies to the Webb telescope, where conditions at the Lagrange point have not changed during the lifetime of the project. Today we typically have a more complex, moving target.

Low Carbon Energy. Much of the core innovation we are pursuing today is interdependent. As an example, consider energy. Replacing hydrocarbons with, for example, solar is far more complex than simply swapping one source of energy for another. It impacts the whole energy supply system. Where and how it links into our grid, how and how much we can store, unpredictable power generation based on weather, maintenance protocols, and how quickly we can turn the supply up or down are just a few examples. We also create new feedback loops, as variables such as weather can impact both power generation and power usage concurrently. And we are not just pursuing solar but multiple alternatives, all of which have different challenges. Concurrent with changing our power source, we are also trying to switch automobiles and transport in general from hydrocarbons to electric power, sourced from that same solar energy. This means attempting significant change in both supply and a key usage vector, changing two interdependent variables in parallel. Simply predicting the weather is tricky, but adding it to this complex set of interdependent variables makes surprises inevitable, and hence dialing in the right degree of resilience pretty challenging.

The Grass is Always Greener: And even if we anticipate all of that complexity, I strongly suspect we'll see more surprises than we expect, not fewer. One lesson I've learned and re-learned in innovation is that the grass is always greener. We don't know what we don't know, in part because we cannot see the weeds from a distance. The devil often really is in the details, and there is nothing like moving from theory to practice, or from small to large scale, to ferret out all of the nasty little problems that plague nearly every innovation but are often unfathomable when we begin. Finding and solving these is an inherent part of virtually any innovation process, but it usually adds time and cost. There are reasons why far more innovations take longer than expected than are delivered ahead of schedule!

It's an exciting, but also perilous, time to be innovating. But ultimately this is all manageable. We have a lot of smart people working on these problems, and so most of the obvious challenges will have contingencies. We don't have the relative time and budget of the Webb Space Telescope, so we'll inevitably hit a few unanticipated bumps, and we'll never get everything right. But there are some things we can do to tip the odds in our favor and help us find those sweet spots.

  1. Plan for overcapacity during transitions. If possible, don't shut down old supply chains until the new ones are fully established. If that is not possible, stockpile heavily as a buffer during the transition. This sounds obvious, but it's often a hard sell, as it can be a significant expense. Building inventory or capacity for an old product we don't really want to sell, and leaving it in place as we launch, doesn't excite anybody, but the cost of not having a buffer can be catastrophic.
  2. In complex systems, know the weakest link, and focus resilience planning on it. Whether it's a shortage of refills for a new device, packaging for a new product, or charging stations for an EV, an innovation is only as good as its weakest link. This sounds obvious, but our bias is to focus on the difficult, core and most interesting parts of innovation, and to pay less attention to the peripherals. I've known a major consumer project to be held up for months because of a problem with a small plastic bottle cap, a tiny part of a much bigger project. This means looking at resilience across the whole innovation, the system it operates in and beyond. It goes without saying that a network of compatible charging stations needs to precede any major EV rollout. But never forget, the weakest link may not be within our direct control. We recently had a bunch of EVs stranded in Vegas because a huge group left an event at a time when it was really hot. The large group overwhelmed the charging stations, and the high temperatures meant that AC use limited the EVs' range, requiring more charging. It's a classic multivariable issue where two apparently unassociated triggers occur at once. And that is a case where the weakest link is visible. If we are not fully vertically integrated, resilience may require multiple sources or suppliers to cover potential failure points we are not aware of, and to protect us against things we cannot control.
  3. Avoid over-optimization too early. It's always tempting to squeeze as much cost as possible out of an innovation prior to launch. But innovation by its very nature disrupts a market and creates a moving target. It triggers competitive responses and changes in consumer behavior, supply chains, and raw material demand. If we've optimized to the point of removing flexibility, this can mean trouble. Of course, some optimization is always needed as part of the innovation process, but nailing it down too tightly and too early is often a mistake. I've lost count of the number of initiatives I've seen that had to re-tool or change capacity post-launch at a much higher cost than if they'd left some early flexibility and fine-tuned once the initial dust had settled.
  4. Design for the future, not the now. Again this sounds obvious, but we often forget that innovation takes time, and that, depending upon our cycle-time, the world may be quite different when we are ready to roll out than it was when we started. Again, Webb has an advantage here, as the Lagrange point won’t have changed much even in the years the project has been active. But our complex, interconnected world is moving very quickly, especially at a systems level, and so we have to build in enough flexibility to account for that.
  5. Run test markets or real-world experiments if at all possible. Again, this comes with trade-offs, but no simulation or lab test beats real-world experience. Whether it's software, a personal care product, or a solar panel array, the real world will throw challenges at us that we didn't anticipate. Some will matter, some may not, but without real-world experience we will nearly always miss something. And the bigger the innovation, generally the more we miss. Sometimes we need to slow down to move fast, and avoid having to backtrack.
  6. Engage devil's advocates. The more interesting or challenging an innovation is, the easier it is to slip into narrow focus and miss the big picture. Nobody loves having people from 'outside' poke holes in the idea they've been nurturing for months or years, but that external objectivity is hugely valuable, together with different expertise, perspectives and goals. And cast the net as wide as possible. Try to include people from competing technologies, with different goals, or from the broader surrounding system. There's nothing like a fierce competitor, or people we disagree with, to find our weaknesses and sharpen an idea. Welcome the naysayers, and listen to them. Just because they may have a different agenda doesn't mean the issues they see don't exist.

Of course, this is all a trade-off. I started this with the brilliant Webb Space Telescope, which is amazing innovation with extraordinary resilience, enabled by an enormous budget and a great deal of time and resource. As we move through the coming years we are going to be attempting innovation of at least comparable complexity on many fronts, on a far more planetary scale, and with far greater implications if we get it wrong. Resiliency was a critical part of the Webb Telescope's success. With stakes as high as they are for much of today's innovation, I passionately believe we need to learn from that. And a lot of us can contribute to building that resiliency. It's easy to think of carbon-neutral energy, EVs, or AI as big, isolated innovations. But in reality they comprise and interface with many, many sub-projects. That's a lot of innovation, a lot of complexity, a lot of touch-points, a lot of innovators, and a lot of potential for surprises. A lot of us will be involved in some way, and we can all contribute. Resiliency is certainly not a new concept for innovation, but given the scale, stakes and implications of what we are attempting, we need it more than ever.

Image Credit: NASA, ESA, CSA, and STScI

Have Humans Evolved Beyond Nature and a Need for It?

GUEST POST from Manuel Berdoy, University of Oxford

Our society has evolved so much, can we still say that we are part of Nature? If not, should we worry – and what should we do about it? Poppy, 21, Warwick.

Such is the extent of our dominion on Earth that the answer to questions about whether we are still part of nature – and whether we even need some of it – relies on an understanding of what we want as Homo sapiens. And to know what we want, we need to grasp what we are.

It is a huge question – but the huge ones are the best. And as a biologist, here is my humble suggestion for addressing it, and a personal conclusion. You may reach a different one, but what matters is that we reflect on it.

Perhaps the best place to start is to consider what makes us human in the first place, which is not as obvious as it may seem.


Many years ago, a novel written by Vercors called Les Animaux dénaturés (“Denatured Animals”) told the story of a group of primitive hominids, the Tropis, found in an unexplored jungle in New Guinea, who seem to constitute a missing link.

However, the prospect that this fictional group may be used as slave labour by an entrepreneurial businessman named Vancruysen forces society to decide whether the Tropis are simply sophisticated animals or whether they should be given human rights. And herein lies the difficulty.

Human status had hitherto seemed so obvious that the book describes how it is soon discovered that there is no definition of what a human actually is. Certainly, the string of experts consulted – anthropologists, primatologists, psychologists, lawyers and clergymen – could not agree. Perhaps prophetically, it is a layperson who suggested a possible way forward.

She asked whether some of the hominids’ habits could be described as the early signs of a spiritual or religious mind. In short, were there signs that, like us, the Tropis were no longer “at one” with nature, but had separated from it, and were now looking at it from the outside – with some fear.

It is a telling perspective. Our status as altered or “denatured” animals – creatures who have arguably separated from the natural world – is perhaps both the source of our humanity and the cause of many of our troubles. In the words of the book’s author:

All man’s troubles arise from the fact that we do not know what we are and do not agree on what we want to be.

We will probably never know the timing of our gradual separation from nature – although cave paintings perhaps contain some clues. But a key recent event in our relationship with the world around us is as well documented as it was abrupt. It happened on a sunny Monday morning, at 8.15am precisely.

A new age

The atomic bomb that rocked Hiroshima on August 6, 1945, was a wake-up call so loud that it still resonates in our consciousness many decades later.

The day the “sun rose twice” was not only a forceful demonstration of the new era that we had entered, it was a reminder of how paradoxically primitive we remained: differential calculus, advanced electronics and almost godlike insights into the laws of the universe helped build, well … a very big stick. Modern Homo sapiens seemingly had developed the powers of gods, while keeping the psyche of a stereotypical Stone Age killer.

We were no longer fearful of nature, but of what we would do to it, and ourselves. In short, we still did not know where we came from, but began panicking about where we were going.

We now know a lot more about our origins but we remain unsure about what we want to be in the future – or, increasingly, as the climate crisis accelerates, whether we even have one.

Arguably, the greater choices granted by our technological advances make it even more difficult to decide which of the many paths to take. This is the cost of freedom.

I am not arguing against our dominion over nature nor, even as a biologist, do I feel a need to preserve the status quo. Big changes are part of our evolution. After all, oxygen was first a poison which threatened the very existence of early life, yet it is now the fuel vital to our existence.

Similarly, we may have to accept that what we do, even our unprecedented dominion, is a natural consequence of what we have evolved into, and by a process nothing less natural than natural selection itself. If artificial birth control is unnatural, so is reduced infant mortality.

I am also not convinced by the argument against genetic engineering on the basis that it is “unnatural”. By artificially selecting specific strains of wheat or dogs, we had been tinkering more or less blindly with genomes for centuries before the genetic revolution. Even our choice of romantic partner is a form of genetic engineering. Sex is nature’s way of producing new genetic combinations quickly.

Even nature, it seems, can be impatient with itself.

Our natural habitat? Shutterstock

Changing our world

Advances in genomics, however, have opened the door to another key turning point. Perhaps we can avoid blowing up the world, and instead change it – and ourselves – slowly, perhaps beyond recognition.

The development of genetically modified crops in the 1980s quickly moved from early aspirations to improve the taste of food to a more efficient way of destroying undesirable weeds or pests.

In what some saw as the genetic equivalent of the atomic bomb, our early forays into a new technology became once again largely about killing, coupled with worries about contamination. Not that everything was rosy before that. Artificial selection, intensive farming and our exploding population growth had long been destroying species faster than we could record them.

The increasing “silent springs” of the 1950s and 60s, caused by the destruction of farmland birds – and, consequently, their song – were only the tip of a deeper and more sinister iceberg. There is, in principle, nothing unnatural about extinction, which has been a recurring pattern (of sometimes massive proportions) in the evolution of our planet long before we came on the scene. But is it really what we want?

The arguments for maintaining biodiversity are usually based on survival, economics or ethics. In addition to preserving obvious key environments essential to our ecosystem and global survival, the economic argument highlights the possibility that a hitherto insignificant lichen, bacteria or reptile might hold the key to the cure of a future disease. We simply cannot afford to destroy what we do not know.

Is it this crocodile’s economic, medical or inherent value which should be important to us? Shutterstock

But attaching an economic value to life makes it subject to the fluctuation of markets. It is reasonable to expect that, in time, most biological solutions will be able to be synthesised, and as the market worth of many lifeforms falls, we need to scrutinise the significance of the ethical argument. Do we need nature because of its inherent value?

Perhaps the answer may come from peering over the horizon. It is somewhat of an irony that the start of the third millennium coincided with decrypting the human genome; perhaps the start of the fourth may be about whether it has become redundant.

Just as genetic modification may one day lead to the end of “Homo sapiens naturalis” (that is, humans untouched by genetic engineering), we may one day wave goodbye to the last specimen of Homo sapiens genetica. That is, the last fully genetically based human living in a world increasingly less burdened by our biological form – minds in a machine.

If the essence of a human, including our memories, desires and values, is somehow reflected in the pattern of the delicate neuronal connections of our brain (and why should it not?), our minds may also one day be changeable like never before.

And this brings us to the essential question that surely we must ask ourselves now: if, or rather when, we have the power to change anything, what would we not change?

After all, we may be able to transform ourselves into more rational, more efficient and stronger individuals. We may venture out further, have greater dominion over greater areas of space, and inject enough insight to bridge the gap between the issues brought about by our cultural evolution and the abilities of a brain evolved to deal with much simpler problems. We might even decide to move into a bodiless intelligence: in the end, even the pleasures of the body are located in the brain.

And then what? When the secrets of the universe are no longer hidden, what makes it worth being part of it? Where is the fun?

“Gossip and sex, of course!” some might say. And in effect, I would agree (although I might put it differently), as it conveys to me the fundamental need that we have to reach out and connect with others. I believe that the attributes that define our worth in this vast and changing universe are simple: empathy and love. Not power or technology, which occupy so many of our thoughts but which are merely (almost boringly) related to the age of a civilisation.

True gods

Like many a traveller, Homo sapiens may need a goal. But from the strengths that come with attaining it, one realises that one’s worth (whether as an individual or a species) ultimately lies elsewhere. So I believe that the extent of our ability for empathy and love will be the yardstick by which our civilisation is judged. It may well be an important benchmark by which we will judge other civilisations that we may encounter, or indeed be judged by them.

When we can change everything about ourselves, what will we keep? Shutterstock

There is something of true wonder at the basis of it all. The fact that chemicals can arise from the austere confines of an ancient molecular soup, and through the cold laws of evolution, combine into organisms that care for other lifeforms (that is, other bags of chemicals) is the true miracle.

Some ancients believed that God made us in “his image”. Perhaps they were right in a sense, as empathy and love are truly godlike features, at least among the benevolent gods.

Cherish those traits and use them now, Poppy, as they hold the solution to our ethical dilemma. It is those very attributes that should compel us to improve the wellbeing of our fellow humans without lowering the condition of what surrounds us.

Anything less will pervert (our) nature.

This article is republished from The Conversation under a Creative Commons license. Read the original article.

Image Credits: Pixabay, Shutterstock (via theconversation)


What Latest Research Reveals About Innovation Management Software

GUEST POST from Jesse Nieminen

Our industry of innovation management software is quite an interesting one. It’s been around for a while, but it’s still not a mainstay that every organization would use, at least not in the same way as CRM and team communication software are.

Hence, there’s relatively little independent research available to prove its efficacy, or even to determine which parts of it are the most valuable.

So, when I saw a new study, conducted jointly by a few German universities, come out on the topic, I was naturally curious to learn more.

In this article, I’ll share the key findings of the study with you, as well as some personal thoughts on the how and why behind these findings. We’ll also wrap up the discussion by considering how these findings relate to the wider trends within innovation management.

About the Study

Before we get to the results, let’s first briefly cover what the study was actually about and how it was conducted.

First, the focus of the study was to analyze the role of Innovation Management Software (IMS) adoption for New Product Development (NPD) effectiveness and efficiency, as well as the factors (software functionality and offered services) that actually led to successful adoption of said innovation management software.

The data was collected with an online questionnaire that was answered by innovation managers from 199 German firms of varying sizes, 45% of which used an Innovation Management Software, and 55% of which didn’t.

While this is the largest independent piece of research I’ve yet seen on innovation management software, we should remember that all research comes with certain limitations and caveats, and it’s important to understand and keep these in mind.

You can read the paper for a more detailed list, but in my opinion, this boils down to a few key things:

  • First, the study uses NPD performance as a proxy for innovation outcomes. This is an understandable choice to make the research practical, but in reality, innovation is much more than just NPD.
  • Second, while the sample size of companies is respectable, the demographic is quite homogenous as they are all German companies that employ an innovation manager, which obviously isn’t representative of every organization out there.
  • Third, the results are analyzed with regression analyses, which always brings up the age-old dilemma: correlation doesn’t imply causation (see the brief sketch after this list). In other words, the study can tell us the “what”, not the “why” or “how”.
  • And finally, while the chosen variables are based on validated prior research, the questions still require subjective analysis from the respondent, which can introduce some bias to the results.
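
To make the regression caveat more concrete, here is a minimal sketch, in Python, of the kind of regression analysis described above. Everything in it is hypothetical: the data is synthetic and the variable names are my own, not the study’s. The point is simply that a coefficient from this kind of model measures association, not causation.

import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 199                                    # mirrors the reported sample size
ims_adopted = rng.integers(0, 2, size=n)   # 1 = firm uses an IMS, 0 = it does not
firm_size = rng.normal(0.0, 1.0, size=n)   # stand-in control variable (standardized)

# Synthetic outcome: NPD efficiency loosely related to adoption, plus noise
npd_efficiency = 0.4 * ims_adopted + 0.2 * firm_size + rng.normal(0.0, 1.0, size=n)

X = sm.add_constant(np.column_stack([ims_adopted, firm_size]))
result = sm.OLS(npd_efficiency, X).fit()
print(result.summary())

# The coefficient on ims_adopted estimates an association only: adopting firms may
# differ in unobserved ways (maturity, budget, culture), which is exactly the
# correlation-vs-causation caveat above.

A survey-based design like this can tell us that a relationship exists and roughly how strong it is, but untangling why would require experimental or longitudinal evidence.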

So, let’s keep these in mind and move on to the actual findings.

The Main Findings of the Study

The authors have done a great job of summarizing the hypotheses and their respective results in a table, which you’ll also find reproduced below.

Innovation Management Software Research Results

Let’s break the results down by hypothesis and cover the main takeaways for each.

Innovation Management Software Adoption Leads to Better NPD Performance

The first hypothesis was that using an Innovation Management Software would lead to better New Product Development performance. This can further be broken down into two parts: efficiency and effectiveness.

The results show that IMS adoption does indeed improve NPD efficiency, but the impact on NPD effectiveness wasn’t significant.

Innovation Management Software improves New Product Development efficiency, but the impact on effectiveness isn’t significant.

Intuitively, this makes sense and is also well in line with our experience. Innovation, especially in terms of NPD, is hard and requires a lot of work and difficult decisions, usually in the face of significant uncertainty. No software can magically do that job for you, but a good tool can help keep track of the process and do some of the heavy lifting for you.

This naturally helps with efficiency, which allows innovators to focus more of their efforts on things that will lead to better results, but those results still aren’t a given.

Functionality That Leads to Higher IMS Adoption

The second hypothesis is focused on the functionality provided by the innovation management software, and the impact of said functionality on overall IMS adoption.

To be more specific, the respondents were asked how important they considered each functionality to be for their firm.

Here, Idea Management was the only functionality that had an impact for these firms.

Idea Management was the only functionality that had a significant positive impact for the surveyed firms.

Again, that intuitively makes sense and is well in line with our experience. Idea management is the part that you embed in the organization’s daily processes and use across the organization to make ideation and innovation systematic. And, as mentioned, it’s the part that does a lot of the heavy lifting: increasing transparency and communication, and collecting and analyzing data that would otherwise take up a lot of time from the people running innovation. This naturally helps with efficiency.

So, while Strategy and Product Management capabilities do have their uses, they are not nearly as essential to IMS adoption, or innovation success for that matter.

In our experience, this primarily comes down to the fact that most companies can manage those capabilities just fine even without an IMS. The value-add provided by the software just isn’t nearly as high for most organizations there.

Services That Lead to Higher IMS Adoption

The third and final hypothesis focused on the importance of the services offered by IMS vendors for the respective firms.

Here the spectrum covered consulting, training, customer support, customizations, as well as software updates and upgrades.

Here, the only factor that made a positive difference for the respondents was software updates and upgrades. This category includes both minor improvements and new functionality for the software.

Interestingly enough, for consulting that relationship was negative. Or, as the authors put it, adopters “more alienate than appreciate” such services.

Software updates and upgrades were the only service with a positive impact, whereas consulting actually had a negative one.

Let’s first cover the updates and upgrades as that is probably something everyone agrees on.

Good software obviously evolves quickly, and as most companies have embraced the Software as a Service (SaaS) model, they’ve come to expect frequent bug fixes, usability and performance improvements, and even new features for free. Over the lifetime of the product, these make a huge difference.

Thus, most understand that you should choose a vendor that is committed to, and capable of, delivering a frequent stream of updates and new capabilities.

Let’s then move on to consulting and discuss why it is detrimental to adoption.

While we’ve always kept professional services to a minimum at Viima, this still came as a bit of a surprise to me. When I’ve raised this point in discussions with a couple of people in the industry who do offer such services, they have responded with varying degrees of denial, dismissal, and perhaps even a hint of outrage. When such emotions are at play, it’s always a good time for an innovator to lean in and dig a bit deeper, so let’s do that!

Looking at this from the point of view of the customer, there are a few obvious problems:

  • Misaligned incentives
  • … which leads to focusing on the wrong issues
  • Lack of ownership

Each of these could be discussed at length, but let’s focus on the key points here.

First, it’s important to understand that every software company makes most of their profits from software licenses. Thus, while generally speaking modern SaaS models do incentivize the vendor to make you successful, that isn’t the whole picture. The focus is actually on keeping the customer using the software. With the right product, that will lead to good outcomes, but that isn’t necessarily always the case.

However, when you add consulting to the mix, it’s only natural that the consulting focuses primarily on usage of the software, because that’s what the vendor knows best, and what’s also in their best interest.

And, while making the most out of the software is important, that’s usually not the biggest challenge organizations have with their innovation efforts. In our experience, the biggest challenges usually lie in areas such as organizational structure, resource allocation, talent and culture, as well as leadership buy-in and understanding.

And even if the vendor were to focus more on some of these real challenges the customer has, they are rarely the best experts in such matters, since their experience comes primarily from the product.

Advice on Innovation Management

Now, once you have a consultant come in, you of course want to listen to them. However, a consultant’s job is to give advice; it isn’t to get you to the outcomes you want or need, and there’s a big difference there. That is one of the fundamental challenges with using consultants in general, and a big reason why many don’t like to use them for long-term issues that are core to future success, such as innovation.

Having said that, if you do use consultants, you can’t lose track of the fact that you still need to take ownership of delivering the results. The consultant might be able to help you with that, or they might not. It’s still your job to make the decisions and execute on the chosen plan.

Put together, these reasons are also why we have been reluctant to do much consulting for our customers. We simply think the customer is best served by taking ownership of these matters themselves. We do, on the other hand, seek to provide them with the information, materials and advice they might need in navigating some of these decisions – at no additional cost, through channels such as this blog and our online coaching program.

How do these findings relate to wider IMS trends?

Now that we’ve covered the key findings, let’s discuss how these are present in the wider trends within the Innovation Management Software industry.

In addition to what we hear in our discussions with customers and prospects, we’ve also discussed the topic quite extensively with industry analysts and would break these down into a few main trends.

Focus on enterprise-wide innovation

One of the big trends we see is that more and more companies are following in the footsteps of the giants like Tesla, Amazon, Apple and Google, and are moving innovation from separate silos to become more of a decentralized organization-wide effort.

This isn’t always necessary for pure NPD performance, which is what the study was focused on, but it is certainly key for scaling innovation in general, and one where efficient idea management can play a key role.

Once you embark on that journey, you’ll realize that your innovation team will initially be spread very thin. In that situation, it’s especially important to have easy-to-use tools that can empower people across the organization and improve efficiency.

Simultaneous need for ease of use and flexibility

That enterprise-wide innovation trend is also a big driver behind intuitiveness, ease of use, and flexibility becoming more important.

In the past, you could have an innovation management software that is configured to match your stage-gate process for NPD. You might still need that, but it’s no longer enough. You probably want more agile processes for some of your innovation efforts, and more lightweight ones for some of the more incremental innovation many business units need to focus on.

If people across the organization don’t know how to use the software, or require extensive training to do so, you’ll face an uphill battle. What’s more, if you need to call the vendor whenever you need to make a change to the system, you’re in trouble. Top innovators often run dozens or even hundreds of different simultaneous innovation processes in different parts of the organization, so that quickly becomes very tedious and expensive.

Reducing operational complexity and costs

A big consideration for many is the operational complexity and running costs associated with running and managing their infrastructure and operations.

Extensive configuration work and on-premises installations significantly add to both of these, so even though they can be tempting for some organizations, the costs pile up a lot over time, especially since they require much more attention from support functions like IT to manage.

What’s more, if you want to make changes or integrate these systems with new ones you may introduce, typically you only have one option: you need to turn to your IMS vendor.

As IMS tools have matured and off-the-shelf SaaS services have become much more capable, the compromises of increased rigidity, complexity and running costs, as well as less frequent updates, are no longer worth it, and off-the-shelf SaaS is now the way to go for almost everyone. With SaaS, you benefit immensely from economies of scale, and you are no longer held captive by the sunk cost fallacy of up-front license payments and extensive configuration and training work.

Commoditization in Idea Management

As the study pointed out, idea management is at the core of most innovation management software. However, in the last decade, the competition in the space has increased a lot.

There are now native SaaS platforms, like Viima, that are able to offer extremely competitive pricing due to efficient operations and a lean organizational structure. This has put a lot of pressure on many vendors to try to differentiate themselves and justify their higher price tags with additional professional services, as well as adjacent products and capabilities.

In our experience, while these might sound good on paper, they don’t often lead to more value in real life, and the respondents of this study would seem to concur.

Conclusion

So, to conclude, what did we learn from the research?

In a nutshell, no innovation management software or vendor will miraculously turn you into a successful innovator. Good software, however, will help you become more efficient with your innovation efforts, as well as lead to softer benefits such as improvements in communication, knowledge transfer and culture. Put together, these can make your life a lot easier so that you can focus on actually driving results with innovation.

What then should you consider when choosing your innovation management vendor?

Well, the evidence shows that you should focus on idea management, as that’s where the biggest impact on the factors mentioned above comes from. And within that, you should focus on vendors that continuously update and evolve their software with the help of modern technology, and that have made all of the above so easy and intuitive that they don’t need to sell you consulting.

And of course, ask them the tough questions. Ask to test the software in real life. If you can’t, that is a red flag in and of itself. See how flexible and easy-to-use their software really is. Does it require consulting or configuration by the vendor?

This article was originally published in Viima’s blog.

Image credits: Unsplash, Viima


We Must Rethink the Future of Technology

GUEST POST from Greg Satell

The industrial revolution of the 18th century was a major turning point. Steam power, along with other advances in areas like machine tools and chemistry, transformed industry from the work of craftsmen and physical labor to that of managing machines. For the first time in world history, living standards grew consistently.

Yet during the 20th century, all of that technology needed to be rethought. Steam engines gave way to electric motors and internal combustion engines. The green revolution and antibiotics transformed agriculture and medicine. In the latter part of the century digital technology created a new economy based on information.

Today, we are on the brink of a new era of innovation in which we will need to rethink technology once again. Much like a century ago, we are developing new, far more powerful technologies that will change how we organize work, identify problems and collaborate to solve them. We will have to change how we compete and even redefine prosperity itself.

The End of the Digital Revolution

Over the past few decades, digital technology has become almost synonymous with innovation. Every few years, a new generation of chips would come out that was better, faster and cheaper than the previous one. This opened up new possibilities that engineers and entrepreneurs could exploit to create new products that would disrupt entire industries.

Yet there are only so many transistors you can cram onto a silicon wafer and digital computing is nearing its theoretical limits. We have just a few generations of advancements left before the digital revolution grinds to a halt. There will be some clever workarounds to stretch the technology a bit further, but we’re basically at the end of the digital era.

That’s not necessarily a bad thing. In many ways, the digital revolution has been a huge disappointment. Except for a relatively brief period in the late nineties and early aughts, the rise of digital technology has been marked by diminished productivity growth and rising inequality. Studies have also shown that some technologies, such as social media, worsen mental health.

Perhaps even more importantly, the end of the digital era will usher in a new age of heterogeneous computing in which we apply different computing architectures to specific tasks. Some of these architectures will be digital, but others, such as quantum and neuromorphic computing, will not be.

The New Convergence

In the 90s, media convergence seemed like a futuristic concept. We consumed information through separate and distinct channels, such as print, radio and TV. The idea that all media would merge into one digital channel just felt unnatural. Many informed analysts at the time doubted that it would ever actually happen.

Yet today, we can use a single device to listen to music, watch videos, read articles and even publish our own documents. In fact, we do these things so naturally we rarely stop to think how strange the concept once seemed. The Millennial generation doesn’t even remember the earlier era of fragmented media.

Today, we’re entering a new age of convergence in which computation powers the physical, as well as the virtual world. We’re beginning to see massive revolutions in areas like materials science and synthetic biology that will reshape massive industries such as energy, healthcare and manufacturing.

The impact of this new convergence is likely to far surpass anything that happened during the digital revolution. The truth is that we still eat, wear and live in the physical world, so innovating with atoms is far more valuable than doing so with bits.

Rethinking Prosperity

It’s a strange anachronism that we still evaluate prosperity in terms of GDP. The measure, developed by Simon Kuznets in 1934, became widely adopted after the Bretton Woods Conference a decade later. It is basically a remnant of the industrial economy, but even back then Kuznets commented, “the welfare of a nation can scarcely be inferred from a measure of national income.”

To understand why GDP is problematic, think about a smartphone, which incorporates many technologies, such as a camera, a video player, a web browser, a GPS navigator and more. Peter Diamandis has estimated that a typical smartphone today incorporates applications that were worth $900,000 when they were first introduced.

So, you can see the potential for smartphones to massively deflate GDP. First of all, there is the price of the smartphone itself, which is just a small fraction of what the technology in it would once have cost. Then there is the fact that we save fuel by not getting lost, rarely pay to get pictures developed and often watch media for free. All of this reduces GDP, but makes us better off.
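
As a back-of-the-envelope illustration, here is a tiny Python sketch of that deflation effect. Only the $900,000 figure comes from the text above; the device price and the avoided spending are invented household numbers, purely for illustration.

standalone_value_1990s = 900_000   # Diamandis' estimate for the bundled applications
smartphone_price_today = 800       # assumed retail price of a typical smartphone

# Assumed annual spending one household no longer needs (illustrative only)
avoided_spending = {
    "extra fuel and maps from getting lost": 150,
    "photo development": 100,
    "standalone camera, GPS unit and music player (amortized)": 200,
}

net_recorded_spending = smartphone_price_today - sum(avoided_spending.values())
print(f"Net recorded spending for the household: ${net_recorded_spending}")
# A few hundred dollars of measured output replaces what once showed up as far more
# spending, even though the household ends up with strictly more capability.

The direction of the effect is the point here, not the specific numbers: measured output shrinks even as welfare grows.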

There are better ways to measure prosperity. The UN has proposed a measure that incorporates 9 indicators, the OECD has developed an alternative approach that aggregates 11 metrics, UK Prime Minister David Cameron has promoted a well-being index and even the small city of Somerville, MA has a happiness project.
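
For readers curious how such multi-indicator measures are even assembled, here is a generic sketch of the normalize-then-aggregate approach these indices typically use. It is not the UN’s or OECD’s actual methodology or weighting, and every number below is invented.

# Each indicator: (observed value, worst plausible value, best plausible value)
indicators = {
    "life_expectancy_years": (81.0, 50.0, 90.0),
    "mean_years_of_schooling": (12.5, 0.0, 18.0),
    "air_pollution_index": (35.0, 150.0, 0.0),      # lower raw value is better here
    "self_reported_life_satisfaction": (7.1, 0.0, 10.0),
}

def normalize(value: float, worst: float, best: float) -> float:
    """Map a raw indicator onto 0 (worst) .. 1 (best), whichever way 'better' runs."""
    return (value - worst) / (best - worst)

scores = {name: normalize(*triple) for name, triple in indicators.items()}
composite = sum(scores.values()) / len(scores)      # equal weights, for simplicity
print(scores)
print(f"Composite well-being score: {composite:.2f}")

Real indices differ in which indicators they pick and how they weight them, which is precisely why they are harder to agree on than a single GDP number.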

Yet still, we seem to prefer GDP because it’s simple, not because it’s accurate. If we continue to increase GDP, but our air and water are more polluted, our children less educated and less healthy, and we face heightened levels of anxiety and depression, then what have we really gained?

Empowering Humans to Design Work for Machines

Today, we face enormous challenges. Climate change threatens to impose enormous costs on our children and grandchildren. Hyperpartisanship, in many ways driven by social media, has created social strife and legislative inertia, and has helped fuel the rise of authoritarian populism. Income inequality, at its highest levels since the 1920s, threatens to tear the social fabric to shreds.

Research shows that there is an increasing divide between workers who perform routine tasks and those who perform non-routine tasks. Routine tasks are easily automated. Non-routine tasks are not, but can be greatly augmented by intelligent systems. It is through this augmentation that we can best create value in the new century.

The future will be built by humans collaborating with other humans to design work for machines. That is how we will create the advanced materials, the miracle cures and new sources of clean energy that will save the planet. Yet if we remain mired in an industrial mindset, we will find it difficult to harness the new technological convergence to solve the problems we need to.

To succeed in the 21st century, we need to rethink our economy and our technology and begin to ask better questions. How does a particular technology empower people to solve problems? How does it improve lives? In what ways does it need to be constrained to limit adverse effects through economic externalities?

As our technology becomes almost unimaginably powerful, these questions will only become more important. We have the power to shape the world we want to live in. Whether we have the will remains to be seen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Sickcare AI Field Notes

I recently participated in a conference on Artificial Intelligence (AI) in healthcare. It was the first onsite meeting after 900 days of the pandemic.

Here is a report from the front:

  1. AI has a way to go before it can substitute for physician judgment, intuition, creativity and empathy
  2. There seems to be an inherent conflict between using AI to standardize decisions compared to using it for mass customization. Efforts to develop customized care must be designed around a deep understanding of what happens at the ground level along the patient pathway and must incorporate patient engagement by focusing on such things as shared decision-making, definition of appointments, and self-management, all of which are elements of a “build-to-order” approach.
  3. When it comes to dissemination and implementation, culture eats strategy for lunch.
  4. The majority of the conversations had to do with the technical aspects and use cases for AI. A small amount was about how to get people in your organization to understand and use it.
  5. The goal is to empower clinical teams to collaborate with patient teams and that will take some work. Moving sick care to healthcare also requires changing a sprint mindset to a marathon relay race mindset with all the hazards and risks of dropped handoffs and referral and information management leaks.
  6. AI is a facilitating technology that cuts across many applications, use cases and intended uses in sick care. Some day we might be recruiting medical students, residents and other sick care workers using AI instead of those silly resumes.
  7. The value proposition of AI includes improving workflow and improving productivity
  8. AI requires large, clean data sets regardless of applications
  9. It will take a while to create trust in technology
  10. There needs to be transparency in data models
  11. There is a large repository of data from non-traditional sources that needs to be mined, e.g. social media sites, community-based sites providing tests, like health clubs and health fairs, as well as post-acute care facilities
  12. AI is enabling both the clinical and business models of value-based care
  13. Cloud-based AI is changing diagnostic imaging and pattern recognition, which will change manpower dynamics
  14. There are potential opportunities in AI for quality outcome stratification, cost accounting and pricing of episodes of care, determining risk premiums, and optimizing margins for a bundled-price procedure given geographic disparities in quality and cost.
  15. We are in the second era of AI, based on deep learning rather than rules-based algorithms
  16. Value-based care requires care coordination, risk stratification, patient centricity and managing risk
  17. Machine learning is being used, like Moneyball, to pick startup winners and losers, with a dose of high touch.
  18. It is encouraging to see more and more doctors attending and speaking at these kinds of meetings and lending a much needed perspective and reality check to technologists and non-sick care entrepreneurs. There were few healthcare executives besides those who were invited to be on panels.
  19. Overcoming the barriers to AI in sick care has mostly to do with changing behavior and not dwelling on the technicalities, but, rather, focusing on the jobs that doctors need to get done.
  20. The costs of AI, particularly for small, independent practitioners, are often unaffordable, particularly when bundled with crippling EMR expenses. Moore’s law has not yet impacted medicine
  21. The promise of using AI to get more done with less runs up against the productivity paradox
  22. Top-of-mind problems to be solved were how to increase revenues, cut costs, fill the workforce pipelines and address burnout and behavioral health problems among employees and patients with scarce resources.
  23. Nurses, pharmacists, public health professionals and veterinarians were underrepresented
  24. Payers were scarce
  25. Patients were scarce
  26. Students, residents and clinicians were looking for ways to get side gigs, non-clinical careers and exit ramps if need be.
  27. 70% of AI applications are in radiology
  28. AI is migrating from shiny to standard, running in the background to power diverse remote care modalities
  29. Chronic disease management and behavioral health have replaced infectious disease as the global care management challenges
  30. AI education and training in sickcare professional schools is still woefully absent, but international sickcare professional schools are filling the gaps
  31. Process and workflow improvements are a necessary part of digital and AI transformation

At its core, AI is part of a sick care eco-nervous system “brain” that is designed to change how doctors and patients think, feel and act as part of continuous behavioral improvement. Outcomes are irrelevant without impact.

AI is another facilitating technology that is part and parcel of almost every aspect of sick care. Like other shiny new objects, it remains to be seen how much value it actually delivers on its promise. I look forward to future conferences where we will be discussing how, not if, to use AI, comparing best practices and results rather than fairy tales, and comparing mine with yours.
