Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

How to Fix Corporate Transformation Failure


GUEST POST from Greg Satell

We live in an age in which change has become the only constant. So it’s not surprising that change management models have become popular. Executives are urged to develop a plan to communicate the need for change, create a sense of urgency and then drive the process through to completion.

Unfortunately, the vast majority of these efforts fail and it’s not hard to see why. Anybody who’s ever been married or had kids knows first-hand how difficult it can be to convince even a single person of something. Any effort to persuade hundreds, if not thousands, of people through some kind of mass effort is setting the bar pretty high.

However, as I explain in Cascades, what you can do is help them convince each other by changing the dynamic so that people enthusiastic about change can influence other (slightly less) enthusiastic people. The truth is that small groups, loosely connected, but united by a shared purpose drive transformational change. So that’s where you need to start.

The Power Of Local Majorities

In the 1950s, the prominent psychologist Solomon Asch undertook a pathbreaking series of conformity studies. The design of the study was simple, but ingenious. He merely showed people pairs of cards, asking them to match the length of a single line on one card with one of three lines on an adjacent card. The answer was meant to be obvious.

However, as the experimenter went around the room, one person after another gave the same wrong answer. When it reached the final person in the group (in truth, the only real subject, the rest were confederates), the vast majority of the time that person conformed to the majority opinion, even if it was obviously wrong!

Majorities don’t just rule, they also influence, especially local majorities. The effect is even more powerful when the issue at hand is more ambiguous than the length of a line on a card. More recent research suggests that the effect applies not only to people we know well, but that we are also influenced even by second and third-degree relationships.

So perhaps the best way to convince somebody of something is to surround them with people who hold a different opinion. To extend the marriage analogy a bit, I might have a hard time convincing my wife or daughter, say, that my jokes are funny and not at all corny, but if they are surrounded by people who think I’m hilarious, they’ll be more likely to think so too.

Changing Dynamics

The problem with creating change throughout an organization is that any sufficiently large group of people will hold a variety of opinions about virtually any matter and these opinions tend to be widely dispersed. So the first step in creating large-scale change is to start thinking about where to target your efforts and there are two tools that can help you do that.

The first, called the Spectrum of Allies, helps you identify which people are active or passive supporters of the change you want to bring about, which are neutral and which actively or passively oppose it. Once you are able to identify these groups, you can start mobilizing the most enthusiastic supporters to start influencing the other groups to shift their opinions. You probably won’t ever convince the active opposition, but you can isolate and neutralize them.

The second tool, called the Pillars of Support, identifies stakeholder groups that can help bring change about. In a typical corporation, these might be business unit leaders, customer groups, industry associations, regulators and so on. These stakeholders are crucial for supporting the status quo, so if you want to drive change effectively, you will need to pull them in.

What is crucial is that every tactic mobilizes a specific constituency in the Spectrum of Allies to influence a specific stakeholder group in the Pillars of Support. For example, in 1984, anti-Apartheid activists spray-painted “WHITES ONLY” and “BLACKS” above pairs of Barclays ATMs in British university towns to draw attention to the bank’s investments in South Africa.

This of course, had little to no effect on public opinion in South Africa, but it meant a lot to the English university students that the bank wanted to attract. Its share of student accounts quickly plummeted from 27% to 15% and two years later Barclays pulled out all of its investments from the country, which greatly damaged the Apartheid regime.
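For readers who think in code, the two tools above can be combined into a simple planning check. This is a hypothetical sketch, not something from the book: the constituency labels, stakeholder groups and tactic names are illustrative, and the only point it demonstrates is that every tactic should pair one constituency from the Spectrum of Allies with one stakeholder group from the Pillars of Support.

```python
# Hypothetical sketch of a change-campaign planning check.
# Constituency and stakeholder labels are illustrative examples.

SPECTRUM_OF_ALLIES = {
    "active ally", "passive ally", "neutral",
    "passive opposition", "active opposition",
}

PILLARS_OF_SUPPORT = {
    "business unit leaders", "customer groups",
    "industry associations", "regulators",
}

# Each tactic names who it mobilizes and which pillar it targets.
tactics = [
    {"name": "internal demo roadshow",
     "mobilizes": "active ally", "targets": "business unit leaders"},
    {"name": "customer advisory council",
     "mobilizes": "passive ally", "targets": "customer groups"},
]

def is_well_formed(tactic):
    """A tactic qualifies only if it mobilizes a real constituency
    to influence a real stakeholder group."""
    return (tactic["mobilizes"] in SPECTRUM_OF_ALLIES
            and tactic["targets"] in PILLARS_OF_SUPPORT)

assert all(is_well_formed(t) for t in tactics)
```

The value of the exercise is less in the code than in the discipline it imposes: a tactic with no constituency behind it, or no stakeholder in front of it, fails the check.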

Identifying A Keystone Change

Every change effort begins with a grievance: sales are down, customers are unhappy or perhaps a new technology threatens to disrupt a business model. Change starts when leaders are able to articulate a clear and affirmative “vision for tomorrow” that is empowering and points toward a better future.

However, the vision can rarely be achieved all at once. That’s why successful change efforts define a keystone change, which identifies a tangible goal, involves multiple stakeholders and paves the way for future change. A successful keystone change can supercharge your efforts to shift the Spectrum of Allies and pull in Pillars of Support.

For example, when Experian’s CIO, Barry Libenson, set out to shift his company to the cloud, he knew it would be an enormous undertaking. Because Experian is one of the largest credit bureaus in the world, there were serious concerns that shifting its computing infrastructure would create vulnerabilities in its cybersecurity and its business model.

So rather than embarking on a multi-year death march to implement cloud technology throughout the company, he started with building internal APIs to build momentum. The move involved many of the same stakeholders he would need for the larger project, but involved far less risk and was able to show clear benefits that paved the way for future change.

In Cascades, I detail a number of cases, from major turnarounds at companies like IBM and Alcoa, to movements to gain independence in India and to secure LGBT rights in America. In each case, a keystone change played a major role in bringing change about.

Surviving Victory

As Saul Alinsky pointed out decades ago, every revolution inspires a counterrevolution. So many change efforts that show initial success ultimately fail because of backlash from key stakeholders. That’s why it is crucial to plan how you will survive victory by rooting your change effort in values, skills and capabilities, rather than in specific objectives or tactics.

For example, Blockbuster Video’s initial response to Netflix in 2004 was extremely successful and, by 2007, it was winning new subscribers faster than the upstart. Yet because it rooted its plan solely in terms of strategy and tactics, the changes were only skin deep. After the CEO left because of a compensation dispute, the strategy was quickly reversed. Blockbuster went bankrupt a few years later.

Compare that to the success at Experian. In both cases, large, successful enterprises needed to move against a disruptive threat. In both cases, legacy infrastructure and business models needed to be replaced. At Experian, however, the move was not rooted in a strategy imposed from above, but through empowering the organization with new skills and capabilities.

That made all the difference, because rather than having to convince the rank and file of the wisdom of moving to the cloud, Libenson was able to empower those already enthusiastic about the initiative. They then became advocates, brought others along and, before long, the enthusiasts soon outnumbered the skeptics.

The truth is you can’t overpower, bribe or coerce people to embrace change. By focusing on changing the dynamics upon which a transformation can take place, you can empower those within your organization to drive change themselves. The role of a leader is no longer to plan and direct action, but to inspire and empower belief.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

AI and the Productivity Paradox


GUEST POST from Greg Satell

In the 1970s and 80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely though, productivity growth decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you would expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their spending with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms began to see the possibilities to shift their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back office systems or, later, Macintoshes or PCs with some basic software to sit on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry and process specific software and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped that “you can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient, but least discussed aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. The fact that we use digital technology to execute machine learning algorithms is actually often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Avoid These Four Myths While Networking Your Organization


GUEST POST from Greg Satell

In an age of disruption, everyone has to adapt eventually. However, the typical organization is ill-suited to change direction. Managers spend years—and sometimes decades—working to optimize their operations to deliver specific outcomes and that can make an organization rigid in the face of a change in the basis of competition.

So it shouldn’t be surprising that the idea of networked organizations has come into vogue. While hierarchies tend to be rigid, networks are highly adaptable and almost infinitely scalable. Unfortunately, popular organizational schemes such as matrixed management and Holacracy have had mixed results, at best.

The truth is that networks have little to do with an organization chart and much more to do with how informal connections form in your organization, especially among lower-level employees. In fact, coming up with a complex scheme is likely to do little more than cause a lot of needless confusion. Here are the myths you need to avoid.

Myth #1: You Need To Restructure Your Organization

In the early 20th century, the great sociologist Max Weber noted that the sweeping industrialization taking place would lead to a change in how organizations operated. As cottage industries were replaced by large enterprises, leadership would have to rely less on tradition and charismatic leaders and become more organized and rational.

He also foresaw that jobs would need to be broken down into small, specific tasks and be governed by a system of hierarchy, authority and responsibility. This would require a more formal mode of organization—a bureaucracy—in which roles and responsibilities were clearly defined. Later, executives such as Alfred Sloan at General Motors perfected the model.

Most enterprises are still set up this way because it remains the most efficient way to organize tasks. It aligns authority with accountability and optimizes information flow. Everybody knows where they stand and what they are responsible for. Organizational restructures are painful and time consuming because they disrupt and undermine the normal workflow.

In fact, reorganizations can backfire if they cut informal ties that don’t show up on the organization chart. So a better path is to facilitate informal ties so that people can coordinate work that falls in between organizational boundaries. In his book One Mission, McChrystal Group President Chris Fussell calls this a “hybrid organization.”

Myth #2: You Have To Break Down Silos

In 2005, researchers at Northwestern University took on the age-old question: “What makes a hit on Broadway?” They looked at all the normal things you would imagine to influence success, such as the production budget, the marketing budget and the track record of the director. What they found, however, was surprising.

As it turns out, the most important factor was how the informal networks of the cast and crew were structured. If nobody had ever worked together before, results were poor, but if too many people had previously worked together, results also suffered. It was the middle range, where there was both familiarity and disruption, that produced the best results.

Notice how the study doesn’t mention anything about the formal organization of the cast and crew. Broadway productions tend to have very basic structures, with a director leading the creative team, a producer managing the business side and others heading up things like music, choreography and so on. That makes it easy for a cast and crew to set up, because everyone knows their place.

The truth is that silos exist because they are centers of capability. Actors work with actors. Set designers work with set designers and so on. So instead of trying to break down silos, you need to start thinking about how to connect them. In the case of the Broadway plays, that was done through previous working relationships, but there are other ways to achieve the same goal.

Myth #3: You Need To Identify Influentials, Hubs And Bridges

In Malcolm Gladwell’s breakaway bestseller The Tipping Point, he wrote “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts,” which he called “The Law of the Few.” Before long, it seemed like everybody from marketers to organizational theorists were looking to identify a mysterious group of people called “influentials.”

Yet as I explain in Cascades, decades of empirical evidence shows that influentials are a myth. While it is true that some people are more influential than others, their influence is highly contextual and not significant enough to go to the trouble of identifying them. Also, a study that analyzed the emails of 60,000 people found that information does not need to rely on hubs or bridges to spread.

With that said, there are a number of ways to network your organization by optimizing organizational platforms for connection. For example, Facebook’s Engineering Bootcamp found that “bootcampers tend to form bonds with their classmates who joined near the same time and those bonds persist even after each has joined different teams.”

One of my favorite examples of how even small tweaks can improve connectivity is a project done at a bank’s call center. When it was found that a third of variation in productivity could be attributed to informal communication outside of meetings, the bank arranged for groups to go on coffee break together, increasing productivity by as much as 20% while improving employee satisfaction at the same time.

Myth #4: Networks Don’t Need Leadership

Perhaps the most damaging myth about networks is that they don’t need strong leadership. Many observers have postulated that because technology allows people to connect with greater efficiency, leaders are no longer critical to organizing work. The reality is that nothing could be further from the truth.

The fact is that it is small groups, loosely connected, but united by a shared purpose that drive change. While individuals can form loosely connected small groups, they can rarely form a shared purpose by themselves. So the function of leadership these days is less to plan and direct action than it is to empower and inspire belief.

So perhaps the biggest shift is not one of tactics, but of mindset. In traditional hierarchies, information flows up through the organization and orders flow down. That helps leaders maintain control, but it also makes the organization slow to adapt and vulnerable to disruption.

Leaders need to learn how to facilitate information flow through horizontal connections so people lower down in the organization can act on it without waiting for approval. That’s where shared purpose comes in. Without a common purpose and shared values, pushing decision making down will only result in chaos. It’s much easier to get people to do what you want if they already want what you want.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


The Malcolm Gladwell Trap


GUEST POST from Greg Satell

A few years ago I bought a book that I was really excited about. It’s one of those books that created a lot of buzz and it was highly recommended by someone I respect. The author’s pedigree included Harvard, Stanford, McKinsey and a career as a successful entrepreneur and CEO.

Yet about halfway in I noticed that he was choosing facts to fit his story and ignoring critical truths that would indicate otherwise, much like Malcolm Gladwell often does in his books. Once I noticed a few of these glaring oversights, I found myself unable to fully trust anything the author wrote and set the book aside.

Stories are important and facts matter. When we begin to believe in false stories, we begin to make decisions based on them. When these decisions go awry, we’re likely to blame other factors, such as ourselves, those around us or other elements of context and not the false story. That’s how many businesses fail. They make decisions based on the wrong stories.

Don’t Believe Everything You Think

Go to just about any innovation conference and you will find some pundit on stage telling a story about a famous failure, usually Blockbuster, Kodak or Xerox. In each case, the reason given for the failure is colossal incompetence by senior management: Blockbuster didn’t recognize the Netflix threat. Kodak invented, but then failed to market, a digital camera. Xerox PARC developed technology, but not products.

In each case, the main assertion is demonstrably untrue. Blockbuster did develop and successfully execute a digital strategy, but its CEO left the company due to a dispute and the strategy was reversed. Kodak’s EasyShare line of digital cameras was a top seller, but couldn’t replace the massive profits the company made developing film. The development of the laser printer at Xerox PARC actually saved the company.

None of this is very hard to uncover. Still, the author fell for two of these bogus myths (Kodak and Xerox), even after obviously doing significant research for the book. Most probably, he just saw something that fit his narrative and never bothered to question whether it was true or not, because he was too busy validating what he already knew to be true.

This type of behavior is so common that there is a name for it: confirmation bias. We naturally seek out information that confirms our existing beliefs. It takes significant effort to challenge our own assumptions, so we rarely do. To overcome that is hard enough. Yet that’s only part of the problem.

Majorities Don’t Just Rule, They Also Influence

In the 1950s, Solomon Asch undertook a pathbreaking series of conformity studies. What he found was that in small groups, people will conform to a majority opinion. The idea that people have a tendency toward conformity is nothing new, but that they would give obviously wrong answers to simple and unambiguous questions was indeed shocking.

Now think about how hard it is for a more complex idea to take hold across a broad spectrum of people, each with their own biases and opinions. The truth is that majorities don’t just rule, they also influence. More recent research suggests that the effect applies not only to people we know well, but that we are also influenced even by second and third degree relationships.

We tend to accept the beliefs of people around us as normal. So if everybody believes that the leaders of Blockbuster, Kodak and Xerox were simply dullards who were oblivious to what was going on around them, then we are very likely to accept that as the truth. Combine this group effect with confirmation bias and it becomes very hard to see things differently.

That’s why it’s important to step back and ask hard questions. Why did these companies fail? Did foolish and lazy people somehow rise to the top of successful organizations, or did smart people make bad decisions? Was there something else to the story? Given the same set of facts, would we act any differently?

The Inevitable Paradigm Shift

The use of the term “paradigm shift” has become so common that most people are unaware that it started out having a very specific meaning. The idea of a paradigm shift was first established by Thomas Kuhn in his book The Structure of Scientific Revolutions, to describe how scientific breakthroughs come to the fore.

It starts with an established model, the kind we learn in school or during initial training for a career. Models become established because they are effective and the more proficient we become at applying a good model, the better we perform. The leaders in any given field owe much of their success to these models.

Yet no model is perfect and eventually anomalies show up. Initially, these are regarded as “special cases” and are worked around. However, as the number of special cases proliferate, the model becomes increasingly untenable and a crisis ensues. At this point, a fundamental change in assumptions has to take place if things are to move forward.

The problem is that most people who are established in the field believe in the traditional model, because that’s what most people around them believe. So they seek out facts to confirm these beliefs. Few are willing to challenge what “everybody knows” and those who do are often put at great professional and reputational risk.

Why We Fail To Adapt

Now we can begin to see why not only businesses, but whole industries get disrupted. We tend to defend, rather than question, our existing beliefs and those around us often reinforce them. To make matters worse, by this time the idea has become so well established that we will often incur switching costs if we abandon it. That’s why we fail to adapt.

Yet not everybody shares our experiences. Others, who have not grown up with the conventional wisdom, often do not have the same assumptions. They also don’t have an existing peer group that will enforce those assumptions. So for them, the flaws are much easier to see, as are the opportunities to do things another way.

Of course, none of this has to happen. As I describe in Mapping Innovation, some companies, such as IBM and Procter & Gamble, have survived for over a century because they are always actively looking for new problems to solve, which forces them to look for new ideas and insights. It compels them to question what they think they know.

Getting stories right is hard work. You have to force yourself. However, we all have an obligation to get it right. For me, that means relentlessly checking every fact with experts, even for things that I know most people won’t notice. Inevitably, I get things wrong—sometimes terribly wrong— and need to be corrected. That’s always humbling.

I do it because I know stories are powerful. They take on a life of their own. Getting them right takes effort. As my friend Whitney Johnson points out, the best way to avoid disruption is to first disrupt yourself.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Where People Go Wrong with Minimum Viable Products


GUEST POST from Greg Satell

Ever since Eric Ries published his bestselling book, The Lean Startup, the idea of a minimum viable product (MVP) has captured the imagination of entrepreneurs and product developers everywhere. The idea of testing products faster and cheaper has an intuitive logic that simply can’t be denied.

Yet what is often missed is that a minimum viable product isn’t merely a stripped down version of a prototype. It is a method to test assumptions and that’s something very different. A single product often has multiple MVPs, because any product development effort is based on multiple assumptions.

Developing an MVP isn’t just about moving faster and cheaper, but also about minimizing risk. In order to test assumptions, you first need to identify them and that’s a soul-searching process. You have to take a hard look at what you believe, why you believe it and how those ideas can be evaluated. Essentially, MVPs work because they force you to do the hard thinking early.
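For product teams who like to make this concrete, the idea that a single product has multiple MVPs, one per assumption, can be sketched in a few lines. This is a hypothetical illustration, not a tool from The Lean Startup: the beliefs and tests listed are invented examples, and the only point is that each MVP should exist to validate exactly one explicit assumption.

```python
# Hypothetical sketch: an MVP plan as a list of explicit assumptions,
# each paired with the cheapest experiment that could falsify it.
# The beliefs and tests below are illustrative examples only.

assumptions = [
    {"belief": "people will buy shoes online",
     "cheapest_test": "bare-bones site; buy retail to fulfill orders",
     "validated": None},
    {"belief": "customers will return to buy again",
     "cheapest_test": "track repeat orders over the first 90 days",
     "validated": None},
]

def next_mvp(plan):
    """Return the first untested assumption. Each MVP validates
    exactly one belief, so a product usually needs several MVPs."""
    for assumption in plan:
        if assumption["validated"] is None:
            return assumption
    return None  # every assumption has been tested

first = next_mvp(assumptions)
```

Writing the plan down this way forces the hard thinking early: if you can’t name the belief an MVP is testing, it isn’t an MVP, it’s just a stripped-down product.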

Every Idea Has Assumptions Built In

In 1999, Nick Swinmurn had an idea for a business. He intended to create a website to sell shoes much like Amazon did for books. This was at the height of the dotcom mania, when sites were popping up to sell everything from fashion to pet food to groceries, so the idea itself wasn’t all that original or unusual.

What Swinmurn did next, however, was. Rather than just assuming that people would be willing to buy shoes online or conducting expensive marketing research, he built a very basic site, went to a shoe store and took pictures of shoes, which he placed on the site. When he got an order, he bought the shoes retail and shipped them out. He lost money on every sale.

That’s a terrible way to run a business, but a great — and incredibly cheap — way to test a business idea. Once he knew that people were willing to buy shoes online, he began to build all the elements of a fully functioning business. Ten years later, the company he created, Zappos, was acquired by Amazon for $1.2 billion.

Notice how he didn’t just assume that his business idea was viable. He tested it and validated it. He also learned other things, such as what styles were most popular. Later, Zappos expanded to include handbags, eyewear, clothing, watches, and kids’ merchandise.

The Cautionary Tale Of Google Glass

Now compare how Swinmurn launched his business with Google’s Glass debacle. Instead of starting with an MVP, it announced a full-fledged prototype complete with a snazzy video. Through augmented reality projected onto the lenses, users could seamlessly navigate an urban landscape, send and receive messages and take photos and videos. It generated a lot of excitement and seemed like a revolutionary new way to interact with technology.

Yet criticism quickly erupted. Many were horrified that hordes of wandering techno-hipsters could be surreptitiously recording us. Others had safety concerns about everything from people being distracted while driving to the devices being vulnerable to hacking. Soon there was a brewing revolt against “Google Glassholes.”

Situations like the Google Glass launch are startlingly common. In fact, the vast majority of new product launches fail because there’s no real way to know whether you have the right product-market fit until customers actually get a chance to interact with the product. Unfortunately, most product development efforts start by seeking out the largest addressable market. That’s almost always a mistake.

If you are truly creating something new and different, you want to build for the few and not the many. That’s the mistake that Google made with its Glass prototype.

Identifying A Hair On Fire Use Case

The alternative to trying to address the largest addressable market is to identify a hair-on-fire use case. The idea is to find a potential customer that needs to solve a problem so badly that they almost literally have their hair on fire. These customers will be more willing to co-create with you and more likely to put up with the inevitable bugs and glitches that always come up.

For example, Tesla didn’t start out by trying to build an electric car for the masses. Instead, it created a $100,000 status symbol for Silicon Valley millionaires. Because these customers could afford multiple cars, range wasn’t as much of a concern. The high price tag also made a larger battery more feasible. The original Tesla Roadster had a range of 244 miles.

The Silicon Valley set were customers with their hair on fire. They wanted to be seen as stylish and eco-friendly, so were willing to put up with the inevitable limitations of electric cars. They didn’t have to depend on them for their commute or to pick the kids up at soccer practice. As long as the car was cool enough, they would buy it.

Interestingly, Google Glass made a comeback as an industrial product and had a nice run from 2019 to 2023 before they went away for good. For hipsters, an augmented reality product is far from a necessity, but a business that needs to improve productivity can be a true “hair-on-fire” use case. As the product improves and gains traction, it’s entirely possible that it eventually makes its way back to the consumer market in some form.

Using An MVP To Pursue A Grand Challenge

One of the criticisms of minimum viable products is that they are only suited for simple products and tweaks, rather than truly ambitious projects. Nothing could be further from the truth. The reality is that the higher your ambitions, the more important it is for you to start with a minimum viable product.

IBM is one company that has a long history of pursuing grand challenges, such as the Deep Blue project, which defeated world champion Garry Kasparov at chess, and the Blue Gene project, which created a new class of “massively parallel” supercomputers. More recent efforts include the Jeopardy grand challenge, which led to the development of its current Watson business, and the Debater project.

Notice that none of these were fully featured products. Rather, they were attempts to, as IBM’s Chief Innovation Officer Bernie Meyerson put it to me, invent something that “even experts in the field regard as an epiphany and changes assumptions about what’s possible.” That would be hard to do if you were trying to create a full-featured product for a demanding customer.

That’s the advantage of creating an MVP. It essentially acts as a research lab where you can safely test hypotheses and eliminate sources of uncertainty. Once you’ve done that, you can get started trying to build a real business.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Why Communication Skills Trump Coding for Our Kids’ Future

GUEST POST from Greg Satell

Many say that coding is the new literacy. Kids are encouraged to learn programming in school and take coding courses online. In that famous scene in The Graduate, Dustin Hoffman’s character was encouraged by a family friend to go into plastics. If it were shot today, it would probably have been computer code.

This isn’t actually that new. I remember first being taught how to code in middle school in the early 80s in BASIC (a mostly defunct language now). Yet even today, coding is far from an essential skill. In fact, with the rise of no-code platforms, there is a strong argument to be made that code is becoming less important.

Don’t get me wrong, there’s still plenty of coding to be done on the back end and programming is certainly a perfectly reasonable thing to learn. However, there’s no reason people need to learn it to have a successful, productive career. On the other hand, writing, as well as other communication skills, will only become more important in the decades to come.

The Future Is Not Digital

During the past few decades, digital technology has become largely synonymous with innovation. Every 18 months or so, a new generation of processors has come out that was faster, more powerful and cheaper than its predecessors. Entrepreneurs would leverage these new capabilities to create exciting new products and disrupt entire industries.

Yet now that’s all coming to an end. Every technology eventually hits theoretical limits and that’s where we are now with regard to digital processors. We have maybe one or two generations of advancement and then, with some clever workarounds, we may be able to stretch the technology for a decade or so, but it’s highly unlikely that it’ll last any longer than that.

That’s not so horrible. There’s no 11th Commandment that says, “Thou shalt compute in ones and zeroes,” and there are nascent architectures that are potentially far more powerful than digital computers, such as quantum and neuromorphic computing. Neither of these, however, is a digital technology. They operate on fundamentally different logic and will use different code.

So instead of learning to code, maybe our kids would be better served by learning about quantum mechanics or neurology. Those would seem to be far more relevant to their future.

The Shift From Bits To Atoms

Digital technology is largely virtual. Transistors on silicon wafers compute ones and zeroes so that images can flash across our screens. That can be very useful, because we can simulate things on a screen much more cheaply than in the physical world, but it’s also limited. We can’t eat, wear or live in a virtual world.

The important technologies of the next generation, however, will be based on atoms rather than bits. Advances in genomics have led to the new field of synthetic biology and a revolution in materials science is transforming our ability to develop advanced materials for manufacturing, clean energy and space exploration. So maybe instead of learning how to code, kids should be studying genetics and chemistry.

As we develop new technologies, we will also need to design experiences so that we can use them more effectively. For example, we need linguists and conversational analysts to design better voice interfaces. Kids who study those things may be able to build great careers.

The rapid pace of technological advancement over the next generation will surely put stress on society. Digital technology has helped produce massive income inequality and a rise in extremism. We will need sociologists and political scientists to help us figure out how to cope with these new, much more powerful technologies.

Collaboration Is The New Competitive Advantage

When my generation was in school, we were preparing for a future that seemed pretty clear cut. We assumed we would become doctors, lawyers, executives and engineers and spend our entire lives working in our chosen fields. It didn’t turn out that way. These days a business model is unlikely to last a decade, much less a lifetime.

Kids today need to prepare to become lifelong learners because the pace of change will not slow down. In fact, it is likely to accelerate beyond anything we can imagine today. The one thing we can predict about the future is that collaboration will be critical for success. People like geneticists and quantum scientists will need to work closely with chemists, designers, sociologists and specialists in fields that haven’t even been invented yet.

These are, in fact, longstanding trends. The journal Nature recently noted that the average scientific paper today has four times as many authors as one did in 1950 and the work they are doing is far more interdisciplinary and done at greater distances than in the past. We can only expect these trends to become more prominent in the future.

In order to collaborate effectively, you need to communicate effectively and that’s where writing comes in. Being able to express thoughts and ideas clearly and cogently is absolutely essential to collaboration and innovation.

Writing Well Is Thinking Well

Probably the most overlooked aspect of writing is that it does more than communicate thoughts; it helps form them. As Fareed Zakaria has put it: “Thinking and writing are inextricably intertwined. When I begin to write, I realize that my ‘thoughts’ are usually a jumble of half-baked, incoherent impulses strung together with gaping logical holes between them.”

“Whether you’re a novelist, a businessman, a marketing consultant or a historian,” he continues, “writing forces you to make choices and it brings clarity and order to your ideas.” Zakaria also points to Jeff Bezos’ emphasis on memo writing as an example of how clarity of expression leads to innovation.

In fact, Amazon considers writing so essential to its ability to innovate that it has become a key part of its culture. It’s hard to make much of a career at Amazon if you cannot write well, because to create products and services that are technically sound, easy to use and efficiently executed, a diverse group of highly skilled people need to tightly coordinate their efforts.

Today, as the digital revolution comes to an end and we enter a new era of innovation, it’s easy to get overwhelmed by the rapid advancement of breakthrough technologies. However, the key to success in our uncertain future will be humans collaborating with other humans to design work for machines. That starts with writing effectively.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

The Robots Aren’t Really Going to Take Over

GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that weren’t bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs to be done?

The truth is that automation doesn’t replace jobs, it replaces tasks and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t with automation itself, but what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms failed to cause mass unemployment, they have, as noted above, contributed to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine anyone working in agriculture a hundred years ago would be horrified to find that their jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot, you could tell them that “work” would consist largely of talking to other people. They might have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have fewer farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Why Most Corporate Innovation Programs Fail

(And How To Make Them Succeed)

GUEST POST from Greg Satell

Today, everybody needs to innovate. So it shouldn’t be surprising that corporate innovation programs have become wildly popular. There is an inherent tradeoff between innovation and the type of optimization that operational executives excel at. Creating a separate unit to address innovation just makes intuitive sense.

Yet corporate innovation programs often fail and it’s not hard to see why. Unlike other business functions, like marketing or finance, in a healthy organization everybody takes pride in their ability to innovate. Setting up a separate innovation unit can often seem like an affront to those who work hard to innovate in operational units.

Make no mistake, a corporate innovation program is no panacea. It doesn’t replace the need to innovate every day. Yet a well designed program can augment those efforts, take the business in new directions and create real value. The key to a successful innovation program is a clear mission, built on shared purpose, that solves important problems.

A Good Innovation Program Extends, It Doesn’t Replace

It’s no secret that Alphabet is one of the most powerful companies in the world. Nevertheless, it has a vulnerability that is often overlooked. Much like Xerox and Kodak decades ago, it’s highly dependent on a single revenue stream. In 2018, 86% of its revenues came from advertising, mostly from its Google search business.

It is with this in mind that the company created its X division. Because the unit was set up to pursue opportunities outside of its core search business, it didn’t encounter significant resistance. In fact, the X division is widely seen as an extension of what made Alphabet so successful in the first place.

Another important aspect is that the X division provides a platform to incubate internal projects. For example, Google Brain started out as a “20% time project.” As it progressed and needed more resources, it was moved to the X division, where it was scaled up further. Eventually, it returned to the mothership and today is an integral part of the core business.

Notice how the vision of the X division was never to replace innovation efforts in the core business, but to extend them. That’s been a big part of its success and has led to exciting new businesses like Waymo autonomous vehicles and the Verily healthcare division.

Focus On Commonality, Not Difference

All too often, innovation programs thrive on difference. They are designed to put together a band of mavericks and disruptors who think differently than the rest of the organization. That may be great for instilling a strong esprit de corps among those involved with the innovation program, but it’s likely to alienate others.

As I explain in Cascades, any change effort must be built on shared purpose and shared values. That’s how you build trust and form the basis for effective collaboration between the innovation program and the rest of the organization. Without those bonds of trust, any innovation effort is bound to fail.

You can see how that works in Alphabet’s X division. It is not seen as fundamentally different from the core Google business, but rather as channeling the company’s strengths in new directions. The business opportunities it pursues may be different, but the core values are the same.

The key question to ask is why you need a corporate innovation program in the first place. If the answer is that you don’t feel your organization is innovative enough, then you need to address that problem first. A well designed innovation program can’t be a band-aid for larger issues within the core business.

Executive Sponsorship Isn’t Enough

Clearly, no corporate innovation program can be successful without strong executive sponsorship. Commitment has to come from the top. Yet just as clearly, executive sponsorship isn’t enough. Unless you can build support among key stakeholders inside and outside the organization, support from the top is bound to erode.

For example, when Eric Haller started Datalabs at Experian, he designed it to be focused on customers, rather than ideas developed internally. “We regularly sit down with our clients and try and figure out what’s causing them agita,” he told me, “because we know that solving problems is what opens up enormous business opportunities for us.”

Because the Datalabs unit works directly with customers to solve problems that are important to them, it has strong support from a key stakeholder group. Another important aspect at Datalabs is that once a project gets beyond the prototype stage, it goes to one of the operational units within the company to be scaled up into a real business. Over the past five years, businesses originated at Datalabs have added over $100 million in new revenues.

Perhaps most importantly, Haller is acutely aware of how innovation programs can cause resentment, so he works hard to reduce tensions by building collaborations around the organization. Datalabs is not where “innovation happens” at Experian. Rather, it serves to augment and expand capabilities that were already there.

Don’t Look For Ideas, Identify Meaningful Problems

Most of all, an innovation program should not be seen as a place to generate ideas. The truth is that ideas can come from anywhere. So designating one particular program in which ideas are supposed to happen will not only alienate the rest of the organization, it is also likely to overlook important ideas generated elsewhere.

The truth is that innovation isn’t about ideas. It’s about solving problems. In researching my book, Mapping Innovation, I came across dozens of stories from every conceivable industry and field and it always started with someone who came across a problem they wanted to solve. Sometimes, it happened by chance, but in most cases I found that great innovators were actively looking for problems that interested them.

If you look at successful innovation programs like Alphabet’s X division and Experian’s Datalabs, the fundamental activity is exploration. X division explores domains outside of search, while Datalabs explores problems that its customers need solved. Once you identify a meaningful problem, the ideas will come.

That’s the real potential of innovation programs. They provide a space to explore areas that don’t fit with the current business, but may play an important role in its future. A good innovation program doesn’t replace capabilities in the core organization, but leverages them to create new opportunities.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Four Major Shifts Driving the 21st Century

GUEST POST from Greg Satell

In 1900, most people lived much like their ancestors had for millennia. They lived and worked on farms, using animal power and hand tools to augment their own abilities. They inhabited small communities and rarely, if ever, traveled far from home. They engaged in small scale violence and lived short, hard lives.

That would all change over the next century as we learned to harness the power of internal combustion, electricity and atoms. These advancements allowed us to automate physical labor on a large scale, engage in mass production, travel globally and wage violence that could level entire cities.

Today, at the beginning of a new century, we are seeing similar shifts that are far more powerful and are moving far more quickly. Disruption is no longer seen as merely an event, but a way of life and the fissures are there for all to see. Our future will depend on our determination to solve problems faster than our proclivity to continually create them.

1. Technology Shifts

At the turn of the 20th century, electricity and internal combustion were over a decade old, but hadn’t made much of an impact yet. That would change in the 1920s, as roads got built and new appliances that harnessed the power of electricity were invented. As ecosystems formed around new technologies, productivity growth soared and quality of life increased markedly.

There would be two more major technology shifts over the course of the century. The Green Revolution and the golden age of antibiotics in the 50s and 60s saved an untold number of lives. The digital revolution in the 90s created a new era of communication and media that still reverberates today.

These technological shifts worked for both good and ill in that they revealed the best and worst parts of human nature. Increased mobility helped to bring about violence on a massive scale during two world wars. The digital revolution made war seem almost antiseptic, enabling precision strikes to kill people half a world away at the press of a button.

Today, we are on the brink of a new set of technological shifts that will be more powerful and more pervasive than any we have seen before. The digital revolution is ending, yet new technologies, such as novel computing architectures, artificial intelligence, as well as rapid advancements in genomics and materials science promise to reshape the world as we know it.

2. Resource Shifts

As new technologies reshaped the 20th century, they also reshaped our use of resources. Some of these shifts were subtle, such as how the invention of synthetic indigo dye in Germany affected farmers in India. Yet the biggest resource shift, of course, was the increase in the demand for oil.

The most obvious impact from the rise of oil was how it affected the Middle East. Previously nomadic societies were suddenly awash in money. Within just a single generation, countries like Saudi Arabia, Iraq and Iran became global centers of power. The Arab Oil Embargo of the 1970s nearly brought western societies to their knees and prolonged the existence of the Soviet Union.

So I was more than surprised last year, at a conference in Bahrain, to find that nearly every official talked openly about the need to “get off oil.” With the rise of renewable energy, depending on a single commodity is no longer a viable way to run a society. Today, solar power is soaring in the Middle East.

Still, resource availability remains a powerful force. As the demand for electric vehicles increases, the supply of lithium could become a serious issue. Already, China is threatening to leverage its dominance in rare earth elements in the trade war with the United States. Climate change and population growth are also making water a scarce resource in many places.

3. Migrational Shifts

One of the most notable shifts in the 20th century was how the improvement in mobility enabled people to “vote with their feet.” Those who faced persecution or impoverishment could, if they dared, sail off to some other place where the prospects were better. These migrational shifts also helped shape the 20th century and will likely do the same in the 21st.

Perhaps the most notable migration in the 20th century was from Europe to the United States. Before World War I, immigrants from Southern and Eastern Europe flooded American shores and the backlash led to the Immigration Act of 1924. Later, the rise of fascism led to another exodus from Europe that included many of its greatest scientists.

It was largely through the efforts of immigrant scientists that the United States was able to develop technologies like the atomic bomb and radar during World War II. Less obvious, though, are the contributions of second- and third-generation citizens, who make up a large proportion of the economic and political elite in the US.

Today, the most noteworthy shift is the migration of largely Muslim people from war-torn countries into Europe. Much like in America in the 1920s, the strains of taking in so many people so quickly have led to a backlash, with nationalist parties making significant gains in many countries.

4. Demographic Shifts

While the first three shifts played strong roles throughout the 20th century, demographic shifts, in many ways, shaped the second half of the century. The post war generation of Baby Boomers repeatedly challenged traditional values and led the charge in political movements such as the struggle for civil rights in the US, the Prague Spring in Czechoslovakia and the March 1968 protests in Poland.

The main drivers of the Baby Boomers’ influence have been the generation’s size and economic prosperity. In America alone, 76 million people were born between 1946 and 1964, and they came of age in the prosperous years of the 1960s. These factors gave them unprecedented political and economic clout that continues to this day.

Yet now, Millennials, who are more diverse and focused on issues such as the environment and tolerance, are beginning to outnumber Baby Boomers. Much like in the 1960s, their increasing influence is driving trends in politics, the economy and the workplace, and their values often put them in conflict with the Baby Boomers.

However, unlike the Baby Boomers, Millennials are coming of age in an era where prosperity seems to be waning. With Baby Boomers retiring and putting further strains on the economy, especially with regard to healthcare costs, tensions are on the rise.

Building On Progress

As Mark Twain is reputed to have said, “History doesn’t repeat itself, but it does rhyme.” While shifts in technology, resources, migration and demographics were spread throughout the 20th century, today we’re experiencing shifts in all four areas at once. Given that the 20th century was rife with massive wars and genocide, that is somewhat worrying.

Many of the disturbing trends around the world, such as the rise of authoritarian and populist movements, global terrorism and cyber warfare, can be attributed to the four shifts. Yet the 20th century was also a time of great progress. Wars became less frequent, life expectancy doubled and poverty fell while quality of life improved dramatically.

So today, while we face seemingly insurmountable challenges, we should also remember that many of the shifts that cause tensions, also give us the power to solve our problems. Advances in genomics and materials science can address climate change and rising healthcare costs. A rising, multicultural generation can unlock creativity and innovation. Migration can move workers to places where they are sorely needed.

The truth is that every disruptive era is not only fraught with danger, but also opportunity. Every generation faces unique challenges and must find the will to solve them. My hope is that we will do the same. The alternative is unthinkable.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Four Transformation Secrets Business Leaders Can Learn from Social and Political Movements

GUEST POST from Greg Satell

In 2004, I was managing a major news organization during the Orange Revolution in Ukraine. One of the things I noticed was that thousands of people, who would normally be doing thousands of different things, would stop what they were doing and start doing the same things all at once, in nearly complete unison, with no clear authority guiding them.

What struck me was how difficult it was for me to coordinate action among the people in my company. I thought if I could harness the forces I saw at work in the Orange Revolution, it could be a powerful model for business transformation. That’s what started me out on the 15-year journey that led to my book, Cascades.

What I found was that many of the principles of successful movements can be applied to business transformation. Also, because social and political movements are so well documented—there are often thousands of contemporary accounts from every conceivable perspective—we can gain insights that traditional case studies miss. Here are four principles you can apply.

1. Failure Doesn’t Have To Be Fatal

One of the things that amazed me while researching revolutionary movements was how consistently failure played a part in their journey. Mahatma Gandhi’s early efforts to bring independence to India led to the massacre at Amritsar in 1919. Our own efforts in Ukraine in 2004 ultimately led to Viktor Yanukovych’s rise to power in 2010.

In the corporate context, it is often a crisis that leads to transformational change. In the early 90s, IBM was nearly bankrupt and many thought the company should be broken up. That's what led to the Gerstner revolution that put the company back on track, and a similar crisis at Alcoa presaged record profits under Paul O'Neill.

In fact, Lou Gerstner would later say that failure and transformation are inextricably linked. “Transformation of an enterprise begins with a sense of crisis or urgency,” he told a group of Harvard Business School students. “No institution will go through fundamental change unless it believes it is in deep trouble and needs to do something different to survive.”

What’s important about early failures is what you learn from them. In every successful transformation I researched, what turned the tide was when the insights gained from early failures were applied to create a keystone change that set out a clear and concrete initiative, involved multiple stakeholders and paved the way for a greater transformation down the road.

2. Don’t Bet Your Transformation On Persuasion

Any truly transformational change is going to encounter significant resistance. Those who fear change and support the status quo can be expected to work to undermine your efforts. That’s fairly obvious in social and political movements like the civil rights movements or the struggle against Apartheid, but often gets overlooked in the corporate context.

All too often, change management efforts seek to convince opponents through persuasion. That's unlikely to succeed. Betting your transformation on the idea that, given the right set of arguments or snappy slogans, those who oppose the change you seek will immediately see the light is unrealistic. What you can do, however, is enlist stakeholders who wield influence of their own.

For example, in the 1980s, anti-Apartheid activists led a campaign against Barclays Bank in British university towns. That probably did little to persuade white nationalists in South Africa, but it severely affected Barclays' business, and the bank pulled its investments from South Africa. That and similar efforts made Apartheid economically untenable and helped lead to its downfall.

In a corporate transformation, there are many institutions, such as business units, customer groups, industry associations, and others that can wield significant influence. By looking at stakeholder groups more broadly, you can win important allies that can help you drive transformation forward.

3. Be Explicit About Your Values

Today, we regard Nelson Mandela as an almost saintly figure, but it wasn’t always that way. Throughout his career as an activist, he was accused of being a communist, an anarchist and worse. When confronted with these accusations, however, he always pointed out that no one had to guess what he believed in, because it was written down in the Freedom Charter in 1955.

Being explicit about values helped signal to external stakeholders, such as international institutions, that the anti-Apartheid activists shared common values with them. In fact, although the Freedom Charter was a truly revolutionary document, its calls for things like equal rights and equal protection would be considered utterly unremarkable in most countries.

After Apartheid fell and Mandela rose to power, the values spelled out in the Freedom Charter became important constraints. If, for example, a core value is that all national groups should be treated equally, then Mandela's government clearly couldn't oppress whites. His reconciliation efforts are a big part of the reason he is so revered today.

Irving Wladawsky-Berger, one of Gerstner's key lieutenants, told me how values played a similar role during IBM's turnaround years. “The Gerstner revolution wasn't about technology or strategy, it was about transforming our values and our culture to be in greater harmony with the market… Because the transformation was about values first and technology second, we were able to continue to embrace those values as the technology and marketplace continued to evolve.”

4. Every Revolution Inspires A Counter-Revolution

After the Orange Revolution ended in 2005, we felt triumphant. We had overcome enormous odds and won. Little did we know that there would be much darker days ahead. In 2010, Viktor Yanukovych, the man we took to the streets to keep out of power, was elected president in an election that international observers judged to be free and fair.

While surprising, this is hardly uncommon. Similar events took place during the Arab Spring. The government of Hosni Mubarak was overthrown, only to be replaced eventually by that of Abdel Fattah el-Sisi, who is possibly even more oppressive. Columbia Business School professor Rita Gunther McGrath points out that in today's business environment, competitive advantage tends to be transient.

The truth is that every revolution inspires a counter-revolution. That's why the early days of victory are often the most fragile. That's when you tend to take your foot off the gas and relax, while at the same time those who oppose the change you worked to build are just beginning to redouble their efforts.

That's why, from the start, you need a plan rooted in shared values to survive victory. In the final analysis, driving change is less about hitting a series of objectives than it is about forging a common cause. That's just as true in a corporate transformation as it is in a social or political revolution.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay