Category Archives: Change

Making Decisions in Uncertainty

This 25-Year-Old Tool Actually Works

GUEST POST from Robyn Bolton

Just as we got used to VUCA (volatile, uncertain, complex, ambiguous), futurists now claim “the world is BANI now.” BANI (brittle, anxious, nonlinear, incomprehensible) is much worse than VUCA and reflects “the fractured, unpredictable state of the modern world.”

Not to get too Gen X on the futurists who coined and are spreading this term but…shut up.

Is the world fractured and unpredictable? Yes.

Does it feel brittle? Are we more anxious than ever? Are things changing at exponential speed, requiring nonlinear responses? Does the world feel incomprehensible? Yes, to all.

Naming a problem is the first step in solving it. The second step is falling in love with the problem so that we become laser focused on solving it. BANI does the first but fails at the second. It wallows in the problem without proposing a path forward. And as the sign says, “Ain’t nobody got time for this.”

(Re)Introducing the Cynefin Framework

The Cynefin framework recognizes that leadership and problem-solving must be contextual to be effective. Named with the Welsh word for “habitat,” the framework is a tool to understand and name the context of a situation and to identify the approaches best suited for managing or solving it.

It’s grounded in the idea that every context – situation, challenge, problem, opportunity – exists somewhere on a spectrum between Ordered and Unordered. At the Ordered end of the spectrum, cause and effect are obvious and immediate, and the path forward is based on objective, immutable facts. Unordered contexts, however, have no obvious or immediate relationship between cause and effect, and moving forward requires people to recognize patterns as they emerge.

Both VUCA and BANI point out the obvious – we’re spending more time on the Unordered end of the spectrum than ever. Unlike the acronyms, Cynefin helps leaders decide and act.

Five Contexts, Five Ways Forward

The Cynefin framework identifies five contexts, each with its own best practices for making decisions and progress.

On the Ordered end of the spectrum:

  • Simple contexts are characterized by stability and obvious and undisputed right answers. Here, patterns repeat, and events are consistent. This is where leaders rely on best practices to inform decisions and delegation, and direct communication to move their teams forward.
  • Complicated contexts have many possible right answers and the relationship between cause and effect isn’t known but can be discovered. Here, leaders need to rely on diverse expertise and be particularly attuned to conflicting advice and novel ideas to avoid making decisions based on outdated experience.

On the Unordered end of the spectrum:

  • Complex contexts are filled with unknown unknowns, many competing ideas, and unpredictable relationships between cause and effect. The most effective leadership approach in this context is one that is deeply uncomfortable for most leaders but familiar to innovators – letting patterns emerge. Using small-scale experiments and high levels of collaboration, diversity, and dissent, leaders can accelerate pattern recognition and place smart bets.
  • Chaotic contexts are fraught with tension. There are no right answers or clear cause and effect. There are too many decisions to make and not enough time. Here, leaders often freeze or make big, bold decisions. Neither is wise. Instead, leaders need to think like emergency responders and respond rapidly to re-establish order where possible, bringing the situation into a Complex state rather than trying to solve everything at once.

The final context is Disorder. Here leaders argue, multiple perspectives fight for dominance, and the organization is divided into factions. Resolution requires breaking the context down into smaller parts that fit one of the four previous contexts and addressing them accordingly.

The Only Way Out is Through

Our VUCA/BANI world isn’t going to get any simpler or easier. And fighting it, freezing, or fleeing isn’t going to solve anything. Organizations need leaders with the courage to move forward and the wisdom and flexibility to do so in a way that is contextually appropriate. Cynefin is their map.

Image credit: Pexels


Learning Business and Life Lessons from Monkeys

GUEST POST from Greg Satell

Franz Kafka was especially skeptical about parables. “Many complain that the words of the wise are always merely parables and of no use in daily life,” he wrote. “When the sage says: ‘Go over,’ he does not mean that we should cross to some actual place… he means some fabulous yonder…that he cannot designate more precisely, and therefore cannot help us here in the very least.”

Business pundits, on the other hand, tend to favor parables, probably because telling simple stories allows for the opportunity to seem both folksy and wise at the same time. When Warren Buffett says “Only when the tide goes out do you discover who’s been swimming naked,” it somehow doesn’t sound like an admonishment.

Over the years I’ve noticed that some of the best business parables involve monkeys. I’m not sure why that is, but I think it has something to do with taking intelligence out of the equation. We’re often prone to imagining ourselves as the clever hero of our own story and we neglect simple truths. That may be why monkey parables have so much to teach us.

1. Build The #MonkeyFirst

When I work with executives, they often have a breakthrough idea they are excited about. They begin to tell me what a great opportunity it is and how they are perfectly positioned to capitalize on it. However, when I begin to dig a little deeper it appears that there is some major barrier to making it happen. When I try to ask about it, they just shut down.

One reason that this happens is that there is a fundamental tension between innovation and operations. Operational executives tend to focus on identifying clear benchmarks to track progress. That’s fine for a typical project, but when you are trying to do something truly new and different, you have to directly confront the unknown.

At Google X, the tech giant’s “moonshot factory,” the mantra is #MonkeyFirst. The idea is that if you want to get a monkey to recite Shakespeare on a pedestal, you start by training the monkey, not building the pedestal, because training the monkey is the hard part. Anyone can build a pedestal.

The problem is that most people start with the pedestal, because it’s what they know and by building it, they can show early progress against a timeline. Unfortunately, building a pedestal gets you nowhere. Unless you can actually train the monkey, working on the pedestal is wasted effort.

The moral: Make sure you address the crux of the problem and don’t waste time with peripheral issues.

2. Don’t Get Taken In By Coin Flipping Monkeys

We live in a world that worships accomplishment. Sports stars who have never worked in an office are paid large fees to speak to corporate audiences. Billionaires who have never walked a beat speak out on how to fight crime (even as they invest in gun manufacturers). Others like to espouse views on education, although they have never taught a class.

Many say that you can’t argue with success, but consider this thought experiment: Put a million monkeys in a coin-flipping contest where each round is double or nothing – winners take the losers’ stakes and the losers drop out. After nineteen rounds, only two monkeys will be left, each having turned a single dollar into $524,288. The vast majority of the other monkeys leave with nothing.
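For the skeptical, the arithmetic is easy to check. Here is a minimal Python sketch of the tournament; the field size, the one-dollar opening stake, and the double-or-nothing rule are just the assumptions of the thought experiment:

    # Toy simulation of the coin-flipping tournament described above.
    # Assumes 2**20 (~1 million) monkeys, each staking $1, playing
    # double-or-nothing elimination rounds until two monkeys remain.
    monkeys = 2 ** 20
    stake = 1          # dollars held by each surviving monkey
    rounds = 0
    while monkeys > 2:
        monkeys //= 2  # half the field loses and drops out
        stake *= 2     # winners take the losers' stakes
        rounds += 1
    print(f"{monkeys} monkeys left after {rounds} rounds, ${stake:,} each")
    # -> 2 monkeys left after 19 rounds, $524,288 each

Notice that skill never enters the loop; the survivors are simply the monkeys the coin happened to favor nineteen times in a row.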

How much would you pay the winning monkeys to speak at your corporate event? Would you invite them to advise your company? Sit on your board? Would you be interested in their views about how to raise your children, invest your savings or make career choices? Would you try to replicate their coin-flipping success? (Maybe it’s all in the wrist).

The truth is that chance and luck play a much bigger part in success than we like to admit. Einstein, for example, became the most famous scientist of the 20th century not just because of his discoveries but also due to an unlikely coincidence. True accomplishment is difficult to evaluate, so we look for signals of success to guide our judgments.

The moral: Next time you judge someone, either by their success or lack thereof, ask yourself whether you are judging actual accomplishment or telltale signs of successful coin flipping. It’s harder to tell the difference than you’d think.

3. The Infinite Monkey Theorem

There is an old thought experiment called the Infinite Monkey Theorem, which is eerily disturbing. The basic idea is that if an infinite number of monkeys pecked away on an infinite number of keyboards they would, in time, produce the complete works of Shakespeare, Tolstoy and every other literary masterpiece.
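The probability behind the theorem is straightforward, even if the conclusion feels wrong. Here is a back-of-the-envelope sketch in Python; the 26-key typewriter and the six-letter target “hamlet” are arbitrary choices for illustration:

    # Chance that random typing produces a given string (illustrative).
    # One random 6-letter burst spells "hamlet" with probability (1/26)**6.
    # Over n independent bursts, P(at least one hit) = 1 - (1 - p)**n,
    # which approaches certainty as n grows without bound.
    p = (1 / 26) ** 6
    for n in (10**6, 10**9, 10**12):
        print(f"{n:,} bursts: {1 - (1 - p) ** n:.4f}")
    # -> 1,000,000 bursts: 0.0032
    # -> 1,000,000,000 bursts: 0.9607
    # -> 1,000,000,000,000 bursts: 1.0000

Any fixed text, however long, works the same way; the only thing infinity changes is that “almost certain” becomes “certain.”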

It’s a perplexing thought because we humans pride ourselves on our ability to recognize and evaluate patterns. The idea that something we value so highly could be randomly generated is extremely unsettling. Yet there is an entire branch of mathematics, called Ramsey Theory, devoted to the study of how order emerges from random sets of data.

While the infinite monkey theorem is, of course, theoretical, technology is forcing us to confront the very real dilemmas it presents. For example, music scholar and composer David Cope has been able to create algorithms that produce original works of music that are so good even experts can’t tell they are computer generated. So what is the value of human input?

The moral: Much like the coin flipping contest, the infinite monkey theorem makes us confront what we value and why. What is the difference between works humans produced and identical works that are computer generated? Are Tolstoy’s words what give his stories meaning? Or is it the intent of the author and the fact that a human was trying to say something important?

Imagining Monkeys All Around Us

G. H. Hardy, widely considered a genius, wrote that “For any serious purpose, intelligence is a very minor gift.” What he meant was that even in purely intellectual pursuits, such as his field of number theory, there are things that are far more important. It was, undoubtedly, intellectual humility that led Hardy to Ramanujan, perhaps his greatest discovery of all.

Imagining ourselves to be heroes of our own story can rob us of the humility we need to succeed and prosper. Mistaking ourselves for geniuses can often get us into trouble. People who think they’re playing it smart tend to make silly mistakes, both because they expect to see things that others don’t and because they fail to look for and recognize trouble signs.

Parables about monkeys can be useful because nobody expects monkeys to be geniuses, which frees us to ask ourselves hard questions. Are we doing the important work, or just the easiest tasks that show progress? If monkeys flipping coins can simulate professional success, what do we really celebrate? If monkeys tapping randomly on typewriters can create masterworks, what is the value of human agency?

The truth is that humans are prone to be foolish. We are unable, outside a few limited areas of expertise, to make basic distinctions in matters of importance. So we look for signals of prosperity, intelligence, shared purpose and other things we value to make judgments about what information we should trust. Imagining monkeys around us helps us to be more careful.

Sometimes the biggest obstacle between where we are now and the fabulous yonder we seek is just the few feet in front of us.

— Article courtesy of the Digital Tonto blog
— Image credit: Flickr







Top 10 Human-Centered Change & Innovation Articles of September 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are September’s ten most popular innovation posts:

  1. McKinsey is Wrong That 80% of Companies Fail to Generate AI ROI — by Robyn Bolton
  2. Back to Basics for Leaders and Managers — by Robyn Bolton
  3. Growth is Not the Answer — by Mike Shipulski
  4. The Most Challenging Obstacles to Achieving Artificial General Intelligence — by Art Inteligencia
  5. Charlie Kirk and Innovation — by Art Inteligencia
  6. You Just Got Starbucked — by Braden Kelley
  7. Metaphysics Philosophy — by Geoffrey Moore
  8. Invention Through Co-Creation — by Janet Sernack
  9. Sometimes Ancient Wisdom Needs to be Left Behind — by Greg Satell
  10. The Crisis Innovation Trap — by Braden Kelley and Art Inteligencia


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.








FLASH SALE – 50% off the best book for Planning Change & Transformation

48 hours only!

Exciting news!

The publisher of my second book – Charting Change – is having a 48-hour FLASH SALE, so you can get the hardcover, softcover or the eBook for 50% off the list price using CODE 50FLSH until October 3, 2025, 11:59PM EDT. The new second edition is loaded with new content, including additional guest expert sections and chapters on business architecture, project and portfolio management, and digital and business transformations!

I stumbled across this and wanted to share it with everyone. If you haven’t already gotten a copy of this book to power your digital transformation or your latest project or change initiative to success, now you have no excuse!

Click here to get your copy of Charting Change for 50% off using CODE 50FLSH

Of course you can get 10 free tools here from the book, but if you buy the book and contact me, I will send you 26 free tools from the 50+ tools in the Change Planning Toolkit™ – including the Change Planning Canvas™!

*If discount is not applied automatically, please use this code: 50FLSH. The discount is available through October 3, 2025. This offer is valid for English-language Springer, Palgrave & Apress books & eBooks. The discount is redeemable on link.springer.com only. Titles affected by fixed book price laws, forthcoming titles and titles temporarily not available on link.springer.com are excluded from this promotion, as are reference works, handbooks, encyclopedias, subscriptions, or bulk purchases. The currency in which your order will be invoiced depends on the billing address associated with the payment method used, not necessarily your home currency. Regional VAT/tax may apply. Promotional prices may change due to exchange rates.

This offer is valid for individual customers only. Booksellers, book distributors, and institutions such as libraries and corporations please visit springernature.com/contact-us. This promotion does not work in combination with other discounts or gift cards.







Identity is Crucial to Change

GUEST POST from Greg Satell

In an age of disruption, the only viable strategy is to adapt. Today, we are undergoing major shifts in technology, resources, migration and demography that will demand that we make changes in how we think and what we do. The last time we saw this much change afoot was during the 1920s and that didn’t end well. The stakes are high.

In a recent speech, the EU’s High Representative for Foreign Affairs and Security Policy Josep Borrell highlighted the need for Europe to change and adapt to shifts in the geopolitical climate. He also pointed out that change involves far more than interests and incentives, carrots and sticks; even more important is identity.

“Remember this sentence,” he said. “’It is the identity, stupid.’ It is no longer the economy, it is the identity.” What he meant was that human beings build attachments to things they identify with and, when those are threatened, they are apt to behave in a visceral, reactive and violent way. That’s why change and identity are always inextricably intertwined.

“We can’t define the change we want to pursue until we define who we want to be.” — Greg Satell

The Making Of A Dominant Model

Traditional models come to us with such great authority that we seldom realize that they too once were revolutionary. We are so often told how Einstein is revered for showing that Newton’s mechanics were flawed that it is easy to forget that Newton himself was a radical insurgent, who rewrote the laws of nature and ushered in a new era.

Still, once a model becomes established, few question it. We go to school, train for a career and hone our craft. We make great efforts to learn basic principles and gain credentials when we show that we have grasped them. As we strive to become masters of our craft we find that as our proficiency increases, so does our success and status.

The models we use become more than mere tools to get things done; they become intrinsic to our identity. Back in the nineteenth century, the miasma theory, the notion that bad air caused disease, was predominant in medicine. Doctors not only relied on it to do their job, they took great pride in their mastery of it. They would discuss its nuances and implications with colleagues, signaling their membership in a tribe as they did.

In the 1840s, when a young doctor named Ignaz Semmelweis showed that doctors could prevent infections by washing their hands, many in the medical establishment were scandalized. First, the suggestion that they, as men of prominence, could spread something as dirty as disease was insulting. Even more damaging, however, was the suggestion that their professional identity was, at least in part, based on a mistake.

Things didn’t turn out well for Semmelweis. He railed against the establishment, but to no avail. He would eventually die in an insane asylum, ironically of an infection he contracted under care, and the questions he raised about the prevailing miasma paradigm went unanswered.

A Gathering Storm Of Accumulating Evidence

We all know that for every rule, there are exceptions and anomalies that can’t be explained. As the statistician George Box put it, “all models are wrong, but some are useful.” The miasma theory, while it seems absurd today, was useful in its own way. Long before we had technology to study bacteria, smells could alert us to their presence in unsanitary conditions.

But Semmelweis’s hand-washing regime threatened doctors’ view of themselves and their role. Doctors were men of prominence, who saw disease emanating from the smells of the lower classes. This was more than a theory. It was an attachment to a particular view of the world and their place in it, which is one reason why Semmelweis experienced such backlash.

Yet he raised important questions and, at least in some circles, doubts about the miasma theory continued to grow. In 1854, about a decade after Semmelweis instituted hand washing, a cholera epidemic broke out in London and a miasma theory skeptic named John Snow was able to trace the source of the infection to a single water pump.

Yet once again, the establishment could not accept evidence that contradicted its prevailing theory. William Farr, a prominent medical statistician, questioned Snow’s findings. Besides, Snow couldn’t explain how the water pump was making people sick, only that it seemed to be the source of some pathogen. Farr, not Snow, won the day.

Later it would turn out that a septic pit had been dug too close to the pump and the water had been contaminated with fecal matter. But for the moment, while doubts began to grow about the miasma theory, it remained the dominant model and countless people would die every year because of it.

Breaking Through To A New Paradigm

In the early 1860s, as the Civil War was raging in the US, Louis Pasteur was researching wine-making in France. While studying the fermentation process, he discovered that microorganisms spoiled beverages such as beer and milk. He proposed that they be heated to temperatures between 60 and 100 degrees Celsius to avoid spoiling, a process that came to be called pasteurization.

Pasteur guessed that similar microorganisms made people sick which, in turn, led to the work of Robert Koch and Joseph Lister. Together they would establish the germ theory of disease. This work then led to not only better sanitary practices, but eventually to the work of Alexander Fleming, Howard Florey and Ernst Chain and the development of antibiotics.

To break free of the miasma theory, doctors needed to change the way they saw themselves. The miasma theory had been around since Hippocrates. To forge a new path, they could no longer be guardians of ancient wisdom; they had to become evidence-based scientists, and that would require that everything about the field be transformed.

None of this occurred in a vacuum. In the late 19th century, a number of long-held truths, from Euclid’s Geometry to Aristotle’s logic, were being discarded, which would pave the way for strange new theories, such as Einstein’s relativity and Turing’s machine. To abandon these old ideas, which were considered gospel for thousands of years, was no doubt difficult. Yet it was what we needed to do to create the modern world.

Moving From Disruption to Resilience

Today, we stand on the precipice of a new paradigm. We’ve suffered through a global financial crisis, a pandemic and the most deadly conflict in Europe since World War II. The shifts in technology, resources, migration and demography are already underway. The strains and dangers of these shifts are already evident, yet the benefits are still to come.

To successfully navigate the decade ahead, we must make decisions not just about what we want, but who we want to be. Nowhere is this playing out more than in Ukraine right now, where the war being waged is almost solely about identity. Russians want to deny Ukrainian identity and to defy what they see as the US-led world order. Europeans need to take sides. So do the Chinese. Everyone needs to decide who they are and where they stand.

This is not only true in international affairs, but in every facet of society. Different eras make different demands. The generation that came of age after World War II needed to rebuild and they did so magnificently. Yet as things grew, inefficiencies mounted and the Boomer Generation became optimizers. The generations that came after worshiped disruption and renewal. These are, of course, gross generalizations, but the basic narrative holds true.

What should be clear is that where we go from here will depend on who we want to be. My hope is that we become protectors who seek to make the shift from disruption to resilience. We can no longer simply worship market and technological forces and leave our fates up to them as if they were gods. We need to make choices and the ones we make will be greatly influenced by how we see ourselves and our role.

As Josep Borrell so eloquently put it: It is the identity, stupid. It is no longer the economy, it is the identity.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash







Don’t Fall for the Design Squiggle Lie

GUEST POST from Robyn Bolton

Last night, I lied to a room full of MBA students. I showed them the Design Squiggle and explained that innovation starts with (what feels like) chaos and ends with certainty.

The chaos part? Absolutely true.

The certainty part? A complete lie.

Nothing is Ever Certain (including death and taxes)

Last week I wrote about the difference between risk and uncertainty. Uncertainty occurs when we cannot predict what will happen when acting or not acting. It can also be broken down into Unknown uncertainty (resolved with more data) and Unknowable uncertainty (which persists despite more data).

But no matter how we slice, dice, and define uncertainty, it never goes away.

It may be higher or lower at different times.

More importantly, it changes focus.

Four Dimensions of Uncertainty

Something new that creates value (i.e., an innovation) is multi-faceted and dynamic. Treating uncertainty as a single “thing” therefore clouds our understanding and our ability to find and address root causes.

That’s why we need to look at different dimensions of uncertainty.

Thankfully, the ivory tower gives us a starting point.

WHAT: Content uncertainty relates to the outcome or goal of the innovation process. To minimize it, we must address what we want to make, what we want the results to be, and what our goals are for the endeavor.

WHO: Participation uncertainty relates to the people, partners, and relationships active at various points in the process. It requires constant re-assessment of expertise and capabilities required and the people who need to be involved.

HOW: Procedure uncertainty focuses on the process, methods, and tools required to make progress. Again, it requires constant re-assessment of how we progress towards our goals.

WHERE: Time-space uncertainty focuses on the fact that the work may need to occur in different locations and on different timelines, requiring us to figure out when to start and where to work.

It’s tempting to think each of these is resolved in an orderly fashion, by clear decisions made at the start of a project, but when has a decision made on Day 1 ever held to launch day?

Uncertainty in Pharmaceutical Development

Let’s take the case of NatureComp, a mid-sized pharmaceutical company, and the uncertainties it navigated while working to replicate, develop, and commercialize a natural substance to target and treat heart disease.

  1. What molecule should the biochemists research?
  2. How should the molecule be produced?
  3. Who has the expertise and capability to synthetically produce the selected molecule, given that NatureComp doesn’t have the required experience internally?
  4. Where can the molecule be produced in a way that meets the synthesis criteria and is cost-effective at low volume?
  5. What disease, specifically, should the molecule target so that initial clinical trials can be designed and run?
  6. Who will finance the initial trials and, hopefully, become a commercialization partner?
  7. Where would the final commercial entity exist (e.g., stay within NatureComp, move to a partner, or become a stand-alone startup), and where would the molecule be produced?

 And those are just the highlights.

It’s all a bit squiggly

The knotty, scribbly mess at the start of the Design Squiggle is true. The line at the end is a lie because uncertainty never goes away. Instead, we learn and adapt until it feels manageable.

Next week, you’ll learn how.

Image credit: The Process of Design Squiggle by Damien Newman, thedesignsquiggle.com


What We See Influences How We’ll Act

GUEST POST from Greg Satell

“Practical men, who believe themselves to be quite exempt from any intellectual influences, are usually slaves of some defunct economist,” John Maynard Keynes, himself a long-dead economist, once wrote. We are, much more than we’d like to admit, creatures of our own age, taking our cues from our environment.

That’s why we need to be on the lookout for our own biases. The truth, as we see it, is often more of a personalized manifestation of the zeitgeist than it is the product of any real insight or reflection. As Richard Feynman put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool. So you have to be very careful about that.”

We can’t believe everything we think. We often seize upon the most easily available information, rather than the most reliable sources. We then seek out information that confirms those beliefs and reject evidence that contradicts existing paradigms. That’s what leads to bad decisions. If what we see determines how we act, we need to look carefully.

The Rise And Fall Of Social Darwinism

In the 1860s, in response to Darwin’s ideas, Herbert Spencer and others began promoting the theory of Social Darwinism. The basic idea was that “survival of the fittest” meant that society should reflect a Hobbesian state of nature, in which most can expect a life that is “nasty, brutish and short,” while an exalted few enjoy the benefits of their superiority.

This was, of course, a gross misunderstanding of Darwin’s work. First, Darwin never used the term “survival of the fittest,” which was actually coined by Spencer himself. Secondly, Darwin never meant to suggest that there are certain innate qualities that make one individual better than others, but that as the environment changes, certain traits tend to be propagated which, over time, can lead to a new species.

Still, if you see the world as a contest for individual survival, you will act accordingly. You will favor a laissez-faire approach to society, punishing the poor and unfortunate and rewarding the rich and powerful. In some cases, such as Nazi Germany and in the late Ottoman empire, Social Darwinism was used as a justification for genocide.

While some strains of Social Darwinism still exist, for the most part it has been discredited, partly because of excesses such as racism, eugenics and social inequality, but also because more rigorous approaches, such as evolutionary psychology, show that altruism and collaboration can themselves be adaptive traits.

The Making Of The Modern Organization

When Alfred Sloan created the modern corporation at General Motors in the early 20th century, what he really did was create a new type of organization. It had centralized management, far-flung divisions and was exponentially more efficient at moving around men and material than anything that had come before.

He called it “federal decentralization.” Management would create operating principles, set goals and develop overall strategy, while day-to-day decisions were performed by people lower down in the structure. While there was some autonomy, it was more like an orchestra than a jazz band, with the CEO as conductor.

Here again, what people saw determined how they acted. Many believed that a basic set of management principles, if conceived and applied correctly, could be adapted to any kind of business, which culminated in the “Nifty Fifty” conglomerates of the ’60s and ’70s. It was, in some sense, an idea akin to Social Darwinism, implying that there are certain innate traits that make an organization more competitive.

Yet business environments change and, while larger organizations may be able to drive efficiencies, they often find it hard to adapt to changing conditions. When the economy hit hard times in the 1970s, the “Nifty Fifty” stocks vastly under-performed the market. By the time the ’80s rolled around, conglomerates had fallen out of fashion.

Industries and Value Chains

In 1985, a relatively unknown professor at Harvard Business School named Michael Porter published a book called Competitive Advantage, which explained that by optimizing every facet of the value chain, a firm could consistently outperform its competitors. The book was an immediate success and made Porter a management superstar.

Key to Porter’s view was that firms compete in industries that are shaped by five forces: competitors, customers, suppliers, substitutes, and new market entrants. So he advised leaders to build and leverage bargaining power in each of those directions to create a sustainable competitive advantage for the long term.

If you see your business environment as being neatly organized in specific industries, everybody is a potential rival. Even your allies need to be viewed with suspicion. So, for example, when a new open source operating system called Linux appeared, Microsoft CEO Steve Ballmer considered it to be a threat and immediately attacked, calling it a cancer.

Yet even as Ballmer went on the attack, the business environment was changing. As the internet made the world more connected, technology companies found that leveraging that connectivity through open source communities was a winning strategy. Microsoft’s current CEO, Satya Nadella, says that the company loves Linux. Ultimately, it recognized that it couldn’t continue to shut itself out and compete effectively.

Looking To The Future

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1922. The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had and the legendary financier lorded over his domain of corporate barons.

That would quickly change over the next decade when those technologies would gain traction, form ecosystems and drive a 50-year boom. The great “trusts” that he built would get broken up and by 1930 virtually all of them would be dropped as components of the Dow Jones Industrial Average. Every facet of life would be completely transformed.

We’re at a similar point today, on the brink of enormous transformation. The recent string of calamities, including a financial meltdown, a pandemic and the deadliest war in Europe in 80 years, demand that we take a new path. Powerful shifts in technology, demographics, resources and migration, suggest that even more disruption may be in our future.

The course we take from here will be determined by how we see the world we live in. Do we see our fellow citizens as a burden or an asset? Are new technologies a blessing or a threat? Is the world full of opportunities to be embraced or dangers we need to protect ourselves from? These are questions we need to think seriously about.

How we answer them will determine what comes next.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash







Top 10 Human-Centered Change & Innovation Articles of August 2025

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are August’s ten most popular innovation posts:

  1. The Nordic Way of Leadership in Business — by Stefan Lindegaard
  2. Science Says You Shouldn’t Waste Too Much Time Trying to Convince People — by Greg Satell
  3. A Manager’s Guide to Employee Engagement — by David Burkus
  4. Decoding the Code of Life – Human-Centered Innovation in Synthetic Biology — by Art Inteligencia
  5. Why Innovators Can’t Ignore the Quantum Revolution — by Art Inteligencia
  6. Performance Reviews Don’t Have to Suck — by David Burkus
  7. Why Explainable AI is the Key to Our Future – The Unseen Imperative — by Art Inteligencia
  8. Goals Require Belief to be Achievable — by Mike Shipulski
  9. The Future is Rotary – Human-Centered Innovation in Rotating Detonation Engines — by Art Inteligencia
  10. The Killer Strategic Concept You’ve Never Heard Of – You Really Need to Know About Schwerpunkt! — by Greg Satell


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!


Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.








How Neuromorphic Computing Will Unlock Human-Centered Innovation

The Next Great Leap

GUEST POST from Art Inteligencia

I’ve long advocated that the most transformative innovation is not just about technology, but about our ability to apply it in a way that creates a more human-centered future. We’re on the cusp of just such a shift with neuromorphic computing.

So, what exactly is it? At its core, neuromorphic computing is a radical departure from the architecture that has defined modern computing since its inception: the von Neumann architecture. This traditional model separates the processor (the CPU) from the memory (RAM), forcing data to constantly shuttle back and forth between the two. This “von Neumann bottleneck” creates a massive energy and time inefficiency, especially for tasks that require real-time, parallel processing of vast amounts of data—like what our brains do effortlessly.

Neuromorphic computing, as the name suggests, is directly inspired by the human brain. Instead of a single, powerful processor, it uses a network of interconnected digital neurons and synapses. These components mimic their biological counterparts, allowing for processing and memory to be deeply integrated. Information isn’t moved sequentially; it’s processed in a massively parallel, event-driven manner.

Think of it like this: A traditional computer chip is like a meticulous librarian who has to walk to the main stacks for every single piece of information, one by one. A neuromorphic chip is more like a vast, decentralized community where every person is both a reader and a keeper of information, and they can all share and process knowledge simultaneously. This fundamental change in architecture allows neuromorphic systems to be exceptionally efficient at tasks like pattern recognition, sensor fusion, and real-time decision-making, consuming orders of magnitude less power than traditional systems.
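To make the event-driven idea concrete, here is a minimal leaky integrate-and-fire neuron in Python – a toy sketch of the kind of unit these chips implement in silicon, not any vendor’s actual API (the function name and parameters are illustrative):

    # A toy leaky integrate-and-fire (LIF) spiking neuron. The neuron's
    # potential decays each time step (the leak) and accumulates incoming
    # values; it emits a spike and resets only when the potential crosses
    # the threshold -- otherwise it stays quiet, which is what makes
    # spiking architectures so frugal with energy.
    def lif_neuron(inputs, leak=0.9, threshold=1.0):
        potential = 0.0
        spikes = []
        for x in inputs:                 # one input value per time step
            potential = potential * leak + x
            if potential >= threshold:   # fire and reset
                spikes.append(1)
                potential = 0.0
            else:
                spikes.append(0)
        return spikes

    print(lif_neuron([0.0, 0.6, 0.6, 0.0, 0.0, 1.2]))  # -> [0, 0, 1, 0, 0, 1]

In a neuromorphic chip, millions of such units run in parallel and communicate only through spikes, so silence costs almost nothing.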

It’s this leap in efficiency and adaptability that makes it so critical for human-centered innovation. It enables intelligent devices to operate for years on a small battery, allows autonomous systems to react instantly to their environment, and opens the door to new forms of human-machine interaction.


Case Study 1: Accelerating Autonomous Systems with Intel’s Loihi 2

In the world of autonomous vehicles and robotics, real-time decision-making is a matter of safety and efficiency. Traditional systems struggle with sensor fusion, the complex task of integrating data from various sensors like cameras, lidar, and radar to create a cohesive understanding of the environment. This process is energy-intensive and often suffers from latency.

The Intel Loihi 2 neuromorphic chip represents a significant leap forward. Researchers have demonstrated that by using spiking neural networks, Loihi 2 can handle sensor fusion with remarkable speed and energy efficiency. In a study focused on datasets for autonomous systems, the chip was shown to be over 100 times more energy-efficient than a conventional CPU and nearly 30 times more efficient than a GPU. This dramatic reduction in power consumption and increase in speed allows for quicker course corrections and improved collision avoidance, moving us closer to a future where robots and vehicles don’t just react to their surroundings, but intelligently adapt.


Case Study 2: Revolutionizing Medical Diagnostics with IBM’s TrueNorth

The field of medical imaging is a prime candidate for neuromorphic disruption. Diagnosing conditions from complex scans like MRIs requires the swift and accurate segmentation of anatomical structures. This is a task that demands high computational power and is often handled by GPUs in a clinical setting.

A pioneering case study on the IBM TrueNorth neurosynaptic system demonstrated its ability to perform spinal image segmentation with exceptional efficiency. A deep learning network implemented on the TrueNorth chip was able to delineate spinal vertebrae and disks more than 20 times faster than a GPU-accelerated network, all while consuming less than 0.1W of power. This breakthrough proves that neuromorphic hardware can perform complex medical image analysis with the speed needed for real-time surgical or diagnostic environments, paving the way for more accessible and instant diagnoses.


The Vanguard of Innovation: A Glimpse at the Leaders

The innovation in neuromorphic computing is being driven by a powerful confluence of established tech giants and nimble startups. Intel and IBM, as highlighted in the case studies, continue to lead with their research platforms, Loihi and TrueNorth, respectively. Their work provides the foundational hardware for the entire ecosystem.

However, the field is also teeming with promising newcomers. Companies like BrainChip are pioneering ultra-low-power AI for edge applications, enabling sensors to operate for years on a single charge. SynSense is at the forefront of event-based vision, creating cameras that only process changes in a scene, dramatically reducing data and power requirements. Prophesee is another leader in this space, with partnerships with major companies like Sony and Bosch for their event-based machine vision sensors. The Dutch startup Innatera is focused on ultra-low-power processors for advanced cognitive applications, while MemComputing is taking a unique physics-based approach to solve complex optimization problems. This dynamic landscape ensures a constant flow of new ideas and applications, pushing the boundaries of what’s possible.


In the end, neuromorphic computing is not just about building better computers; it’s about building a better future. By learning from the ultimate example of efficiency—the human brain—we are creating a new generation of technology that will not only perform more efficiently but will empower us to solve some of our most complex human challenges, from healthcare to transportation, in ways we’ve only just begun to imagine.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Gemini







Three Strategies for Overcoming Change Resistance

GUEST POST from Greg Satell

Max Planck’s work in physics changed the way we were able to see the universe. Still, even he complained that “A new scientific truth does not triumph by convincing its opponents and making them see the light, but rather because its opponents eventually die, and a new generation grows up that is familiar with it.”

For most transformational efforts we need to pursue, we simply don’t have that kind of time. To drive significant change we have to overcome staunch resistance. Unfortunately, most change management strategies assume that opposition can be overcome through communication efforts that are designed to persuade.

This assumes that resistance always has a rational basis and clearly that’s not true. We all develop emotional attachments to ideas. When we feel those are threatened, it offends our dignity, identity and sense of self. If we are going to overcome our most fervent opponents, we don’t need a better argument, we need a strategy. Here are three approaches that work:

Strategy 1: Designate An Internal Red Team

Resistance is never monolithic. While some people have irrational attachments based on their sense of identity and dignity, others are merely skeptical. One key difference between these two groups is that the irrational resisters rarely voice their opposition, but try to quietly sabotage change. The rational skeptics, on the other hand, are much more eager to engage.

While these are different groups, they often interact with each other behind the scenes. In many cases, it is the active, irrational opposition that is fueling the skeptics’ doubts. One useful strategy for dealing with this dynamic is to co-opt the opposition by setting up an internal red team to channel skepticism in a constructive way.

Red-teaming is a process in which an adversarial team is set up to poke holes in an operational or strategic plan. For example, red teams are used in airports and computer systems to see if they can find weaknesses in security. The military uses red teams to test battle plans. Perhaps most famously, a red team was used to help determine whether the conclusions that led to the raid on Osama bin Laden’s hideout were valid or if there was some other explanation.

Recruiting skeptics to be an internal red team provides two benefits. First, they can alert you to actual problems with your ideas, which you can then fix. Second, they not only voice their own objections, but also bring those of the irrational opposition out into the open (remember, irrational resisters rarely speak out).

What’s key here is to make the distinction between rational skeptics and the irrational saboteurs. Engage with skeptics, leave the saboteurs to themselves.

Strategy 2: Don’t Engage And Quietly Gain Traction

Have you ever had this happen? You’re in a meeting where things are moving slowly towards a consensus. Issues are discussed, objections raised and solutions devised. Toward the end of the meeting, just as things are shifting gears to next steps, somebody who had hardly said a word the whole time all of a sudden throws a hissy fit in the middle of the conference room and completely discredits themselves.

There’s a reason why this happens. Remember saboteurs are not acting rationally. They have emotional attachments that they often can’t articulate, which is why they rarely give voice to their objections, but rather look for more discreet opportunities to derail the process. When they see things moving forward, they panic.

This doesn’t happen just in conference rooms. Those who are trying to sabotage change prefer to lurk in the background and hope they can quietly derail it. But when they see genuine progress being made, they will likely lash out, overreach and inadvertently further your cause.

This behavior is incredibly consistent. In fact, whenever I’m speaking to a group of transformation and change professionals and I describe this phenomenon to them, I always get people coming up to me afterwards. “I didn’t know that was a normal thing, I thought it was just something crazy that happened in our case!”

It’s important to resist the urge to respond to every attack. You don’t need to waste precious time and energy engaging with those who want to derail your initiative, which is more likely to frustrate and exhaust you than anything else. It’s much better to focus on empowering those who support change. Non-engagement can be a viable way to deal with opposition.

Strategy 3: Design A Dilemma Action

I once had a six-month assignment to restructure the sales and marketing operations of a troubled media company, and the Sales Director was a real stumbling block. She never overtly objected, but would instead nod her head and then quietly sabotage progress. For example, she promised to hand over the clients she worked directly with to her staff, but never seemed to get around to it.

It was obvious that she intended to slow-walk everything until the six months were over and then return everything back to the way it was. As a longtime senior employee, she had considerable political capital within the organization and, because she was never directly insubordinate, creating a direct confrontation with her would be risky and unwise.

So rather than create a conflict, I designed a dilemma. I arranged with the CEO of a media buying agency for one of the salespeople to meet with a senior buyer and take over the account. The Sales Director had two choices. She could either let the meeting go ahead and lose her grip on the department or try to derail the meeting. She chose the latter and was fired for cause. Once she was gone, her mismanagement became obvious and sales shot up.

Dilemma actions have been around for at least a century. One early example was Alice Paul’s Silent Sentinels, who picketed the Wilson White House in 1917 with banners quoting the president’s own words. More recently, the tactic has been the subject of increasing academic interest. What’s becoming clear is that these actions share clear design principles that can be replicated in almost any context.

Key to the success of a dilemma action is that it is seen as a constructive act rooted in a shared value. In the case of the Sales Director, she had agreed to give up her accounts and setting up the meeting was aligned with that agreement. That’s what created the dilemma. She had to choose between violating the shared value or giving up her resistance.

How Change Really Happens

One of the biggest misconceptions about change is that it is an exercise in persuasion. Yet anyone who has ever been married or had kids knows how hard it can be to convince even a single person of something they don’t want to be convinced about. Seeking to persuade hundreds or thousands to change what they think or how they act is a tall order indeed.

The truth is that radical, transformational change is achieved not when those who oppose it are convinced, but when they discredit themselves. It was the brutality of Bull Connor’s tactics in Birmingham that paved the way for the Civil Rights Act of 1964. It was Russia’s poisoning of Viktor Yushchenko in 2004 that set Ukraine on a different path. The passage of Proposition 8 in California created such controversy that it actually furthered the cause of same-sex marriage.

We find the same dynamic in our work with organizational transformations. Whenever you set out to make a significant impact, there will always be people who will hate the idea and seek to undermine it in ways that are dishonest, underhanded and deceptive. Once you are able to internalize that, you are ready to move forward.

Through sound strategies, you can learn to leverage opposition to further your change initiative. You can co-opt those who are rationally skeptical to find flaws in your idea that can be fixed. For those who are adamantly and irrationally opposed to an initiative, there are proven strategies that help lead them to discredit themselves.

The status quo always has inertia on its side and never yields its power gracefully. The difference between successful revolutionaries and mere dreamers is that those who succeed anticipate resistance and build a plan to overcome it.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash
