
Top 10 Human-Centered Change & Innovation Articles of March 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are March’s ten most popular innovation posts:

  1. Turning Bold Ideas into Tangible Results — by Robyn Bolton
  2. Leading Through Complexity and Uncertainty — by Greg Satell
  3. Empathy is a Vital Tool for Stronger Teams — by Stefan Lindegaard
  4. The Role Platforms Play in Business Networks — by Geoffrey A. Moore
  5. Inspiring Innovation — by John Bessant
  6. Six Keys to Effective Teamwork — by David Burkus
  7. Product-Lifecycle Management 2.0 — by Dr. Matthew Heim
  8. 5 Business Myths You Cannot Afford to Believe — by Shep Hyken
  9. What Great Ideas Feel Like — by Mike Shipulski
  10. Better Decision Making at Speed — by Mike Shipulski

BONUS – Here are five more strong articles published in February that continue to resonate with people:

If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

SPECIAL BONUS: While supplies last, you can get the hardcover version of my first bestselling book Stoking Your Innovation Bonfire for 44% OFF until Amazon runs out of stock or changes the price. This deal won’t last long, so grab your copy while it lasts!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.

P.S. Here are our Top 40 Innovation Bloggers lists from the last four years:

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Leading Through Complexity and Uncertainty


GUEST POST from Greg Satell

Leaders need to make decisions and we rarely get to choose the context. Most often, we need to take action without all the facts, in a rapidly changing environment and a compressed time frame. We need to do so with the knowledge that if we get it wrong, we will bear the blame and no one else. It will be our mess to clean up.

That’s a hard bridge to cross and many, if not most, are never quite able to get there. I think that’s why we admire great leaders so much: they have the courage to shoulder responsibility and be accountable, to inspire confidence even in an atmosphere of confusion, and to point the way forward, even when they aren’t sure it’s the right direction.

The truth is that you can never really be certain until you take that step forward. To accomplish anything significant, you need to travel an uncertain journey; it is tautologically true that the well-trod path will take us nowhere new. We can never fully control uncertainty, but we can learn to lead through it.

How Things Get So Complicated And Uncertain

Generally, we prefer to operate with some degree of predictability, which is why we build structure into daily life. On a personal level, we create habits and routines to give us a sense of grounding. On a societal level, we create laws and norms, so that we know what to expect from our interactions with each other.

Yet in Overcomplicated, mathematician Sam Arbesman gives two reasons why uncertainty is, to a great extent, unavoidable. The first is accretion. We build systems, like the Internet or the laws set down in the US Constitution, to perform a limited number of tasks. Yet to scale those systems, we need to build on top of them to expand their initial capabilities. As systems become larger, they get more complex and uncertain.

The second force is interaction. We may love the simplicity of an iPhone, but don’t want to be restricted to its capabilities alone. So we increase its functionality by connecting it to millions of apps. Those apps, in turn, connect to each other as well as to other systems. Every connection increases complexity and makes things harder to predict.

These two forces lead to what Benoit Mandelbrot called Noah effects and Joseph effects. Joseph effects, as in the biblical story, support long periods of continuity. Noah effects, on the other hand, are like a big storm creating a massive flood of discontinuity, washing away the previous order. Uncertainty, for better or worse, will always be somewhat unavoidable.
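The Noah effect lends itself to a quick simulation. The sketch below is my own illustration, not anything from Mandelbrot’s work (the distributions, sample size and seed are arbitrary choices): it asks what fraction of a series’ total movement comes from its single largest event, under mild Gaussian randomness versus a fat-tailed distribution.

```python
import random

random.seed(7)

def share_of_largest(draws):
    # What fraction of the total movement comes from the single biggest event?
    total = sum(abs(d) for d in draws)
    return max(abs(d) for d in draws) / total

# A mild, Gaussian world versus a fat-tailed, flood-prone one
gaussian = [random.gauss(0, 1) for _ in range(10_000)]
heavy = [random.paretovariate(1.5) for _ in range(10_000)]

print(f"Gaussian world: largest event is {share_of_largest(gaussian):.2%} of all movement")
print(f"Fat-tailed world: largest event is {share_of_largest(heavy):.2%} of all movement")
```

In the Gaussian world no single event matters much; in the fat-tailed world a single “flood” can account for a visible share of everything that happens, which is why Noah effects wash away the previous order.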

The Problem With Simplicity

The most straightforward solution to complexity and uncertainty is to boil things down and make them simpler. Politicians are fond of highlighting the thousands of pages that pieces of legislation contain, because complexity is widely seen as a fatal flaw. “If it were thought through clearly, why couldn’t it have been devised more simply?” is the implication.

Yet while we yearn for simple rules, those rules often lead us astray. As Ludwig Wittgenstein explained in his rule following paradox, “no course of action could be determined by a rule because every course of action can be made out to accord with the rule.” Simple rules tend to be necessarily vague, which limits their usefulness.

Something similar happens when we try to tame complexity by summarizing it through identifying patterns. Random points of data, if there are enough of them, will always generate patterns as well, so we can never be quite sure if we are revealing an underlying truth or just creating a convincing illusion. To discern between the two is, unfortunately, complex.
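The claim that enough random data will always yield patterns is easy to verify for yourself. Here is a minimal sketch (my own illustration, with arbitrary sizes and a fixed seed, not anything from the article): it generates purely random series with no underlying relationship at all, then finds the strongest pairwise correlation among them.

```python
import math
import random

def pearson(x, y):
    # Plain Pearson correlation coefficient between two equal-length series
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

random.seed(42)
# 150 series of 20 purely random values -- there is no real relationship anywhere
series = [[random.gauss(0, 1) for _ in range(20)] for _ in range(150)]

best = max(abs(pearson(series[i], series[j]))
           for i in range(len(series)) for j in range(i + 1, len(series)))
print(f"strongest 'pattern' found in pure noise: r = {best:.2f}")
```

With this many series, a strong-looking correlation is guaranteed to appear somewhere, which is exactly why a convincing pattern is not, by itself, evidence of an underlying truth.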

In Why Information Grows, MIT’s Cesar Hidalgo explains that it is through emergent complexity that we create value. To understand what he means, let’s take another look at an iPhone. Its simple design belies incredible complexity, not only in the technology it contains, but in what it connects to, a complex ecosystem of apps, servers and data.

Steve Jobs didn’t intend to create an App Store, because he wanted to keep the iPhone simple. However, eventually he was convinced that by limiting complexity he was curtailing the potential value of his creation and, ultimately, he relented. It is through managing complexity, not avoiding it, that we can most effectively impact the world.

Narrowing Scope And Limiting Variables

The Franciscan friar William of Occam is best remembered for Occam’s razor, which he didn’t exactly invent, but did much to popularize. The technique, often mischaracterized as “the simplest solution is usually the best,” actually had much more to do with variables and assumptions, which he advised keeping to a minimum.

It’s an interesting distinction that makes a big difference. William wasn’t advising us to ignore complexity, but to avoid increasing it by injecting things that don’t need to be there. We can acknowledge the messiness of the world and still tidy up our little corner of it, by narrowing our scope and limiting the variables we deal with.

Steve Blank advises startups to develop minimum viable products to test assumptions, rather than investing resources in a full-featured prototype. The idea is that by narrowing scope you can get a better read on the marketplace and then increase complexity from there. In our work helping organizations drive transformation, we advise our clients to start out with a keystone change, rather than rolling out everything at once.

Whatever strategy you use, the key, as William of Occam pointed out long ago, is to limit variables where you can, while still recognizing that the universe is far more complex than our scaled down model of it. Or, as the statistician George Box put it, “all models are wrong, but some are useful.”

Innovation Is Exploration

The truth is that uncertainty is only a problem if you try to control it. The framers of the US Constitution designed it to be a guide, not a blueprint. That’s been the key to its success. They recognized it would have to evolve and grow over time and designed a system of checks and balances to curb the human potential for malice.

We need to start thinking less like engineers, designing just the right combination of levers and pulleys to account for every eventuality, and more like gardeners, seeding and nurturing ecosystems, pruning as we go. Gardeners don’t need to know the exact outcome of everything they plant, but can seek to improve the harvest each season.

In a world driven by networks and ecosystems, we can no longer treat strategy as if it were a game of chess, planning out each move with near perfect precision and foresight. The world moves far too fast for that. By the time we’ve put the final touches on the master plan, the assumptions upon which it was made are often no longer true.

Rather, we must constantly explore, widening and deepening connections to ecosystems of talent, technology and information. That’s how we uncover new paths that are often unseen from our usual perch and leverage complexity to our advantage. Breakthrough innovations arise out of unexpected encounters.

The next big thing always starts out looking like nothing at all. Today, competitive advantage is no longer the sum of all efficiencies, but the sum of all connections.

— Article courtesy of the Digital Tonto blog
— Image credits: Pexels

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Unintended Consequences: The Hidden Risk of Fast-Paced Innovation


GUEST POST from Pete Foley

Most innovations go through a similar cycle, often represented as an s-curve.

We start with something potentially game changing. It’s inevitably a rough-cut diamond: un-optimized and not fully understood. But then we optimize it. This usually starts with a fairly steep learning curve as we address the ‘low-hanging fruit’, but then evolves into a fine-tuning stage. Eventually we squeeze efficiency from it to the point where incremental improvements no longer justify their cost. We then either commoditize it, or jump to another s-curve.

This is certainly not a new model, and there are multiple variations on the theme.  But as the pace of innovation accelerates, something fundamentally new is happening with this s-curve pattern.  S-curves are getting closer together. Increasingly we are jumping to new s-curves before we’ve fully optimized the previous one.  This means that we are innovating quickly, but also that we are often taking more ‘leaps into the dark’ than ever before.
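The s-curve described above is commonly modeled as a logistic function. The sketch below is purely illustrative (the midpoints and growth rates are invented for demonstration, not taken from the article): it tabulates two overlapping technology waves, with the second launching well before the first has saturated.

```python
import math

def s_curve(t, midpoint, rate, ceiling=1.0):
    # Classic logistic curve: slow start, steep middle, saturating tail
    return ceiling / (1 + math.exp(-rate * (t - midpoint)))

# Two waves of technology: the second ramps up before the first saturates
for t in range(0, 21, 2):
    wave1 = s_curve(t, midpoint=6, rate=0.8)
    wave2 = s_curve(t, midpoint=14, rate=0.8)
    print(f"t={t:2d}  wave1={wave1:.2f}  wave2={wave2:.2f}")
```

The closer together the midpoints sit, the more of wave 2’s ramp-up happens while wave 1 is still being optimized, which is the ‘leap into the dark’ pattern described above.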

This has some unintended consequences of its own:

1. Cumulative Unanticipated Consequences. No matter how much we try to anticipate how a new technology will fare in the real world, there are always surprises. Many emerge soon after we hit the market, and create fires that have to be put out quickly (literally, in the case of some battery technologies). But other unanticipated effects can be slower burn (pun intended). The most pertinent example is of course greenhouse gases from industrialization, and their impact on our climate, which took us years to recognize. But there are many more examples, including the rise of antibiotic resistance, plastic pollution, hidden carcinogens, the rising cost of healthcare and the mental health issues associated with social media. Just as the killer application for a new innovation is often missed at its inception, its killer flaws can be too. And if the causal relationship between these issues and the innovation is indirect, they can accumulate across multiple s-curves before we notice them. By the time we do, the technology is often so entrenched that it can be a huge challenge to extract ourselves from it.

2. Poorly Understood Complex Network Effects. The impact of a new innovation is very hard to predict when it is introduced into a complex, multivariable system. A butterfly flapping its wings can cascade and amplify through a system, and when the butterfly is transformative technology, the effect can be profound. We usually have line of sight to first-generation causal effects: for example, we know that electric cars use the existing electric grid, as do solar energy farms. But in today’s complex, interconnected world, it’s difficult to predict second-, third- or fourth-generation network effects, and likely not cost-effective or efficient for an innovator to try to do so. For example, the supply-demand interdependency of solar and electric cars is a second-generation network effect that we are aware of, but that is already challenging to fully predict. More causally distant effects can be harder still: how we fund the road network without a gas tax, the interdependency of gas and electric cost and supply as we transition, and the impact that will have on broader global energy costs and sociopolitical stability. Then add in the complexities of supplying the new raw materials needed to support new battery technologies. These are pretty challenging to model, and of course are the challenges we are at least aware of. The unanticipated consequences of such a major change are, by definition, unanticipated!

3. Fragile Foundations. In many cases, one s-curve forms the foundation of the next. So if we have not optimized the previous s-curve sufficiently, flaws potentially carry over into the next, often in the form of ‘givens’. For example, an electric car is a classic s-curve jump from the internal combustion engine. But for reasons that include design efficiency, compatibility with existing infrastructure and, perhaps most importantly, consumer cognitive comfort, much of the supporting design and technology carries over from previous designs. We have redesigned the engine, but have only evolved wheels, brakes, etc., and have kept legacies such as 4+ seats. But automobiles are, in many ways, one of our more stable foundations: we have had a lot of time to stabilize past s-curves before jumping to new ones. Newer technologies such as AI, social media and quantum computing have enjoyed far less time to stabilize foundational s-curves before we dance across to embrace closely spaced new ones. That will likely increase the chances of unintended consequences. And we are already seeing the canary in the coal mine, with unexpected mental health problems and social instability increasingly associated with social media.

What’s the Answer? We cannot, and should not, stop innovating. We face too many fundamental issues with climate, food security and sociopolitical stability that need solutions, and need them quite quickly.

But the conundrum we face is that many, if not all, of these issues are rooted in past, well-intentioned innovation, and the unintended consequences that derive from it. So a lot of our innovation efforts are focused on solving issues created by previous rounds of innovation. Nobody expected or intended the industrial revolution to impact our climate, but now much of our current innovation capability is rightly focused on managing the fallout it has created (again, pun intended). Our challenge is that we need to continue to innovate, but also to break the cycle of today’s innovation being increasingly focused on fixing yesterday’s!

Today new waves of innovation associated with ‘sustainable’ technology, genetic manipulation, AI and quantum computing are already crashing onto our shores. These interdependent innovations will likely dwarf the industrial revolution in scale and complexity, and have the potential for massive impact, both good and bad. And they are occurring at a pace that gives us little time to deal with anticipated consequences, let alone unanticipated ones.

We’ll Find a Way? One answer is to just let it happen, and fix things as we go. Innovation has always been a bumpy road, and humanity has a long history of muddling through. The agricultural revolution ultimately allowed humans to exponentially expand our population, but only after concentrating people into larger social groups that caused disease to ravage many societies. We largely solved that by dying in large numbers and creating herd immunity. It was a solution, but not an optimum one. When London was in danger of being buried in horse poop, the internal combustion engine saved us, but that in turn ultimately resulted in climate change. According to projections from the Club of Rome in the 1970s, economic growth should have ground to a halt long ago, mired in starvation and population contraction. Instead, advances in farming technology have allowed us to keep growing, but that increase in population contributes substantially to our issues with climate today. ‘We’ll find a way’ is an approach that works until it doesn’t. And even when it works, it is usually not painless, and often simply defers rather than solves issues.

Anticipation? Another option is to get better both at anticipating issues and at triaging the unexpected. Maybe AI will give us the processing power to do this, provided of course that it doesn’t become our biggest issue in and of itself.

Slow Down and Be More Selective? In a previous article I asked, ‘just because we can do it, does it mean we should?’ That was through a primarily moral lens. But I think unintended consequences make this an even bigger question for broader innovation strategy. The more we innovate, the more consequences we likely create. And the faster we innovate, the more vulnerable we are to fragility. Slowing down creates resilience; speed reduces it. So one option is to be more choiceful about which innovations we pursue, and look more critically at the benefit-risk balance. For example, how badly do we need some of the new medications and vaccines being rushed to market? Is all of our gene manipulation research needed? Do we really need a new phone every two years? For sure, in some cases the benefits are clear, but in other cases, is profit driving us more than it should?

In a similar vein, but to be provocative, are we also moving too quickly with renewable energy? It is certainly something we need. But are we, for example, pinning too much on a single, almost first-generation form of large-scale solar technology? We are still at the steep part of the learning curve, so are quite likely missing unintended consequences. Would a more staged transition over a decade or so add more resilience, allow us to optimize the technology based on real-world experience, and help us ferret out unanticipated issues? Should we be creating a more balanced portfolio, and leaning more on established technology such as nuclear? Sometimes moving a bit more slowly ultimately gets you there faster, and a long-term issue like climate is a prime candidate for balancing speed, optimization and resilience to ultimately create a more efficient, robust and better understood network.

The speed of AI development is another obvious question, but I suspect a more difficult one to evaluate. In this case, Pandora’s box is open, and calls to slow AI research would likely mean responsible players would stop while research continued elsewhere, either underground or in less responsible nations. A North Korean AI that is superior to anyone else’s is an example where the risk of not moving likely outweighs the risk of unintended consequences.

Regulation? Regulation is a good way of forcing more thoughtful evaluation of benefit versus risk. But it only works if regulators (government) understand the technology, or at least its benefits and risks, better than its developers do. This can work reasonably well in pharma, where we have a long track record, but it is much more challenging in newer areas of technology; AI is a prime example where it is almost certainly not the case. And as the complexity of all innovation increases, regulation will become less effective, and increasingly likely to create unintended consequences of its own.

I realize that this may all sound a bit alarmist, and certainly any call to slow down renewable energy conversion or pharma development is going to be unpopular. But history has shown that slowing down creates resilience, while speeding up creates instability and waves of growth and collapse. And an arms race where much of our current innovative capability is focused on fixing issues created by previous innovations is one we always risk losing. So, as unanticipated consequences are, by definition, really difficult to anticipate, is this a point in time where we in the innovation community need to have a discussion about slowing down and being more selective? Where should we innovate, and where not? When should we move fast, and when might we be better served by some productive procrastination? Do we need better risk assessment processes? It’s always easier to do this kind of analysis in hindsight, but do we really have that luxury?

Image credit: Pixabay


The Resilience Conundrum

From the Webb Space Telescope to Dishwashing Liquids


GUEST POST from Pete Foley

Many of us have been watching the spectacular photos coming from the Webb Space Telescope this week. It is a breathtaking example of innovation in action. But what grabbed my attention almost as much as the photos was the challenge of deploying it at the L2 Lagrange point. That not only required extraordinary innovation of core technologies, but also building unprecedented resilience into the design. Deploying a technology a million miles from Earth leaves little room for mistakes, or the opportunity for the kind of repairs that rescued the Hubble mission. Obviously the Webb team were acutely aware of this, and were painstaking in identifying and pre-empting 344 single points of failure, any one of which had the potential to derail it. The result is a triumph. But it is not without cost. Anticipating and protecting against those potential failures played a significant part in taking Webb billions over budget, and years behind its original schedule.

Efficiency versus Adaptability: Most of us will never face quite such an amazing but daunting challenge, or have the corresponding time and budget flexibility. But as an innovation community, and a planet, we are entering a phase of very rapid change as we try to quickly address really big issues, such as climate change and AI. And the speed, scope and interconnected complexity of that change make it increasingly difficult to build resilience into our innovations. This is compounded because the need for speed and efficiency often drives us towards narrow focus and increased specialization. That focus can help us move quickly, but we know from nature that the first species to go extinct in the face of environmental change are often the specialists, who are less able to adapt to their changing world. Efficiency often reduces resilience; it’s another conundrum.

Complexity, Systems Effects and Collateral Damage. To pile on the challenges a little, the more breakthrough an innovation is, the less we understand about how it interacts at a systems level, or what secondary effects it may trigger. And secondary failures can be catastrophic. Takata airbags, and the batteries in Samsung Galaxy phones, were enabling, not core, technologies, but they certainly derailed the core innovations.

Designed Resiliency. One answer to this is to be more systematic about designing resilience into innovation, as the Webb team were. We may not be able to reach the equivalent of 344 points of failure, but we can be systematic about scenario planning, anticipating failure, and investing up front in buffering ourselves against risk. There are a number of approaches we can adopt to achieve this, which I’ll discuss in detail later.

The Resiliency Conundrum. But first let’s talk a little more about the resilience conundrum. For virtually any innovation, time and money are tight, and anticipating potential failures is time-consuming and expensive. Worse, it rarely adds direct, or at least marketable, value. And when it does work, we often don’t see the issues it prevents; we only notice when resiliency fails. It’s a classic trade-off, and one we face at all levels of innovation. For example, when I worked on dishwashing liquids at P&G, a slightly less glamorous field than space exploration, an enormous amount of effort went into maintaining product performance and stability under extreme conditions. Product could be transported in freezing or hot temperatures, and had to work in extreme water hardness or softness. These conditions weren’t typical, but they were possible. And the cost of protecting against these outliers was often disproportionately high.

And there again lies the trade-off. Design in too much resiliency, and we become inefficient and/or uncompetitive. But too little, and we risk a catastrophic failure like the Takata airbags. We need to find a sweet spot. And finding it is further complicated because we are entering an era of innovation and disruption where we are making rapid changes to multiple systems in parallel. Climate change is driving major structural change in energy, transport and agriculture, and advances in computing are changing how those systems are managed. With dishwashing, we made changes to the formula, but the conditions of use remained fairly constant, meaning we were pretty good at extrapolating what the product would have to navigate. The same applies with the Webb telescope, where conditions at the Lagrange point have not changed during the lifetime of the project. We typically have a more complex, moving target.

Low Carbon Energy. Much of the core innovation we are pursuing today is interdependent. As an example, consider energy. Replacing hydrocarbons with, for example, solar is far more complex than simply swapping one source of energy for another. It impacts the whole energy supply system. Where and how it links into our grid, how we store it, how much we can store, unpredictable power generation that depends on the weather, maintenance protocols, and how quickly we can turn supply up or down are just a few examples. We also create new feedback loops, as variables such as weather can impact both power generation and power usage concurrently. But we are not just pursuing solar; we are pursuing multiple alternatives, all of which have different challenges. And concurrent with changing our power source, we are also trying to switch automobiles and transport in general from hydrocarbons to electric power, sourced from that same solar energy. This means attempting significant change in both supply and a key usage vector, changing two interdependent variables in parallel. Simply predicting the weather is tricky, but adding it to this complex set of interdependent variables makes surprises inevitable, and hence dialing in the right degree of resilience pretty challenging.

The Grass is Always Greener: And even if we anticipate all of that complexity, I strongly suspect we’ll see more, rather than fewer, surprises than we expect. One lesson I’ve learned and re-learned in innovation is that the grass is always greener. We don’t know what we don’t know, in part because we cannot see the weeds from a distance. The devil often really is in the details, and there is nothing like moving from theory to practice, or from small to large scale, to ferret out all of the nasty little problems that plague nearly every innovation, but that are often unfathomable when we begin. Finding and solving these is an inherent part of virtually any innovation process, but it usually adds time and cost. There is a reason why far more innovations run behind schedule than are delivered ahead of it!

It’s an exciting, but also perilous time to be innovating. But ultimately this is all manageable. We have a lot of smart people working on these problems, and so most of the obvious challenges will have contingencies.   We don’t have the relative time and budget of the Webb Space Telescope, and so we’ll inevitably hit a few unanticipated bumps, and we’ll never get everything right. But there are some things we can do to tip the odds in our favor, and help us find those sweet spots.

  1. Plan for overcapacity during transitions. If possible, don’t shut down old supply chains until the new ones are fully established. If that is not possible, stockpile heavily as a buffer during the transition. This sounds obvious, but it’s often a hard sell, as it can be a significant expense. Building inventory or capacity of an old product we don’t really want to sell, and leaving it in place as we launch, doesn’t excite anybody, but the cost of not having a buffer can be catastrophic.
  2. In complex systems, know the weakest link, and focus resilience planning on it. Whether it’s a shortage of refills for a new device, packaging for a new product, or charging stations for an EV, an innovation is only as good as its weakest link. This sounds obvious, but our bias is to focus on the difficult, core and most interesting parts of innovation, and pay less attention to peripherals. I’ve known a major consumer project be held up for months because of a problem with a small plastic bottle cap, a tiny part of a much bigger project. This means looking at resilience across the whole innovation, the system it operates in and beyond. It goes without saying that the network of compatible charging stations needs to precede any major EV rollout. But never forget, the weakest link may not be within our direct control. We recently had a bunch of EVs stranded in Vegas because a huge group left an event at a time when it was really hot. The large group overwhelmed the charging stations, and the high temperatures meant AC use limited the EVs’ range, requiring more charging. It’s a classic multivariable issue where two apparently unassociated triggers occur at once. And that is a case where the weakest link is visible. If we are not fully vertically integrated, resilience may require multiple sources or suppliers, just to protect us against failure points we are not even aware of and things we cannot control.
  3. Avoid over-optimization too early. It’s always tempting to squeeze as much cost as possible out of an innovation prior to launch. But innovation by its very nature disrupts a market, and creates a moving target. It triggers competitive responses and changes in consumer behavior, supply chains, and raw material demand. If we’ve optimized to the point of removing flexibility, this can mean trouble. Of course, some optimization is always needed as part of the innovation process, but nailing it down too tightly and too early is often a mistake. I’ve lost count of the number of initiatives I’ve seen that had to re-tool or change capacity post-launch at a much higher cost than if they’d left some early flexibility and fine-tuned once the initial dust had settled.
  4. Design for the future, not the now. Again this sounds obvious, but we often forget that innovation takes time, and that, depending upon our cycle-time, the world may be quite different when we are ready to roll out than it was when we started. Again, Webb has an advantage here, as the Lagrange point won’t have changed much even in the years the project has been active. But our complex, interconnected world is moving very quickly, especially at a systems level, and so we have to build in enough flexibility to account for that.
  5. Run test markets or real-world experiments if at all possible. This again comes with trade-offs, but no simulation or lab test beats real-world experience. Whether it’s software, a personal care product, or a solar panel array, the real world will throw challenges at us that we didn’t anticipate. Some will matter, some may not, but without real-world experience we will nearly always miss something. And the bigger our innovation, the more we generally miss. Sometimes we need to slow down in order to move fast, and to avoid having to backtrack.
  6. Engage devil’s advocates. The more interesting or challenging an innovation is, the easier it is to slip into narrow focus and miss the big picture. Nobody loves having people from ‘outside’ poke holes in an idea they’ve been nurturing for months or years, but that external objectivity is hugely valuable, together with different expertise, perspectives, and goals. Cast the net as wide as possible: try to include people from competing technologies, with different goals, or from the broader surrounding system. There’s nothing like a fierce competitor, or people we disagree with, to find our weaknesses and sharpen an idea. Welcome the naysayers, and listen to them. Just because they may have a different agenda doesn’t mean the issues they see don’t exist.

Of course, all of this involves trade-offs. I started with the brilliant Webb Space Telescope, an amazing innovation with extraordinary resilience, enabled by an enormous budget and a great deal of time and resources. In the coming years we are going to be attempting innovation of at least comparable complexity on many fronts, on a planetary scale, and with far greater implications if we get it wrong. Resiliency was a critical part of the Webb Telescope’s success, and with stakes as high as they are for much of today’s innovation, I passionately believe we need to learn from it. A lot of us can contribute to building that resiliency. It’s easy to think of carbon-neutral energy, EVs, or AI as big, isolated innovations, but in reality they comprise and interface with many, many sub-projects. That’s a lot of innovation, a lot of complexity, a lot of touch-points, a lot of innovators, and a lot of potential for surprises. Many of us will be involved in some way, and we can all contribute. Resiliency is certainly not a new concept for innovation, but given the scale, stakes, and implications of what we are attempting, we need it more than ever.

Image Credit: NASA, ESA, CSA, and STScI

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Accelerating Complexity vs. Accelerating Change

Does the quickening pace of change or the accelerating pace of complexity pose a greater threat for humans and organizations?

Change can be incredibly disruptive to both humans and organizations.

So much so that I decided to create a more modern, visual, collaborative, and effective set of methods and tools, the Change Planning Toolkit™, to help organizations beat the 70% change failure rate and better keep pace with the accelerating pace of change. The toolkit is introduced in my latest book, Charting Change.

In the book, I highlight that the pace of change is accelerating, using the increasing rate of turnover in the S&P 500’s membership as a proof point:

Innosight Average Company Lifespan

Another proof point is the fact that all of our high technology has been developed in roughly the last 100 years.

There can be no doubt that the pace of change and disruption is quickening.

But how much of this accelerating disruption can be attributed to what I see as an accelerating pace of complexity?

If anyone doubts that we live in a time of accelerating complexity, I encourage you to check out the book The Toaster Project by Thomas Thwaites, or this TED talk given by Thomas:

I find this video quite frightening because it highlights how fragile our high-technology society is, and how much we need each other.

If a single person can’t make the simplest of electrical appliances by themselves, even over the course of a year, imagine the complexity that organizations must manage to make far more complicated products.

Imagine the challenge of making changes to our organizations after we’ve optimized things to successfully manage this complexity.

If both complexity and change are accelerating, how can we cope?

Here are four key ways to better manage complexity and change:

  1. Choose carefully which complexity to inflict upon the organization
  2. Learn how to architect the organization for continuous change
  3. Continuously evaluate your organization’s trade-offs between flexibility and fixedness
  4. Leverage the modern, visual, and collaborative tools from the Change Planning Toolkit™ that are easily adapted to our new virtual work environment

Grab the ten free tools from the Change Planning Toolkit™ before purchasing a license so you can keep these three key frameworks front and center as you plan a more modular and conscious approach to managing the growing complexity in your organization:

  • PCC Change Readiness Framework
  • Organizational Agility Framework
  • Architecting the Organization for Continuous Change

Download the 10 Free Tools

Keep innovating!


