Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

What Pundits Always Get Wrong About the Future

GUEST POST from Greg Satell

Peter Thiel likes to point out that we wanted flying cars, but got 140 characters instead. He’s only partly right. For decades, futuristic visions showed everyday families zipping around in flying cars, and it’s true that even today we’re still stuck on the ground. Yet that’s not because we’re unable to build one. In fact, the first was invented in 1934.

The problem is not so much engineering as economics, safety and convenience. We could build a flying car if we wanted to, but making one that can compete with regular cars is another matter entirely. Besides, in many ways, 140 characters are better than a flying car. Cars only let us travel around town; the Internet helps us span the globe.

That has created far more value than a flying car ever could. We often fail to predict the future accurately because we don’t account for our capacity to surprise ourselves, to see new possibilities and take new directions. We interact with each other, collaborate and change our priorities. The future that we predict is never as exciting as the one we eventually create.

1. The Future Will Not Look Like The Past

We tend to predict the future by extrapolating from the present. So if we invent a car and then an airplane, it only seems natural that we can combine the two. If a family has a car, then having one that flies can seem like a logical next step. We don’t look at a car and dream up, say, a computer. So in 1934, we dreamed of flying cars, but not computers.

It’s not just optimists who fall prey to this fundamental error, but pessimists too. In Homo Deus, author and historian Yuval Noah Harari points to several studies that show that human jobs are being replaced by machines. He then paints a dystopian picture. “Humans might become militarily and economically useless,” he writes. Yeesh!

Yet the picture is not as dark as it may seem. Consider the retail apocalypse. Over the past few years, we’ve seen an unprecedented number of retail store closings. Those jobs are gone and they’re not coming back. You can imagine thousands of retail employees sitting at home, wondering how to pay their bills, just as Harari predicts.

Yet economist Michael Mandel argues that the data tell a very different story. First, he shows that the jobs gained from e-commerce far outstrip those lost from traditional retail. Second, he points out that the total e-commerce sector, including lower-wage fulfillment centers, has an average wage of $21.13 per hour, which is 27 percent higher than the $16.65 that the average worker in traditional retail earns.
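The 27 percent premium follows directly from the two wages quoted above; a minimal sketch of the arithmetic (figures taken from the paragraph, nothing else assumed):

```python
# Wage figures as quoted in the text (average hourly wages).
ecommerce_wage = 21.13   # total e-commerce sector, incl. fulfillment centers
retail_wage = 16.65      # traditional retail

# Relative premium of e-commerce wages over traditional retail.
premium = (ecommerce_wage - retail_wage) / retail_wage
print(f"{premium:.0%}")  # -> 27%
```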

So not only are more people working, they are taking home more money too. Not only is the retail apocalypse not a tragedy, it’s something of a blessing.

2. The Next Big Thing Always Starts Out Looking Like Nothing At All

Every technology eventually hits theoretical limits. Buy a computer today and you’ll find that the technical specifications are much like they were five years ago. When a new generation of iPhones comes out these days, reviewers tout the camera rather than the processor speed. The truth is that Moore’s law is effectively over.

That seems tragic, because our ability to exponentially increase the number of transistors that we can squeeze onto a silicon wafer has driven technological advancement over the past few decades. Every 18 months or so, a new generation of chips has come out and opened up new possibilities that entrepreneurs have turned into exciting new businesses.
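The compounding described above is easy to understate; a minimal sketch of what one doubling every 18 months implies over a few decades (the 18-month cadence is from the paragraph, the 30-year horizon is an illustrative assumption):

```python
# Rough illustration of Moore's-law-style compounding: one doubling of
# transistor density every 18 months or so.
def density_growth(years, months_per_doubling=18):
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

# Over three decades, 20 doublings compound to a roughly million-fold gain.
print(round(density_growth(30)))  # -> 1048576
```

That million-fold improvement, not any single chip generation, is what entrepreneurs kept turning into new businesses.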

What will we do now?

Yet there’s no real need to worry. There is no 11th commandment that says, “Thou shalt compute with ones and zeros” and the end of Moore’s law will give way to newer, more powerful technologies, like quantum and neuromorphic computing. These are still in their nascent stage and may not have an impact for at least five to ten years, but will likely power the future for decades to come.

The truth is that the next big thing always starts out looking like nothing at all. Einstein never thought that his work would have a practical impact during his lifetime. When Alexander Fleming first discovered penicillin, nobody noticed. In much the same way, the future will not be digital. So what? It will be even better!

3. It’s Ecosystems, Not Inventions, That Drive The Future

When the first automobiles came to market, they were called “horseless carriages” because that’s what everyone was familiar with. So it seemed logical that people would use them much like they used horses: to take the occasional trip into town and to work in the fields. Yet it didn’t turn out that way, because driving a car is nothing like riding a horse.

So first people started taking “Sunday drives” to relax and see family and friends, something that would be too tiring to do regularly on a horse. Gas stations and paved roads changed how products were distributed and factories moved from cities in the north, close to customers, to small towns in the south, where land and labor were cheaper.

As the ability to travel increased, people started moving out of cities and into suburbs. When consumers could easily load a week’s worth of groceries into their cars, corner stores gave way to supermarkets and, eventually, shopping malls. The automobile changed a lot more than simply how we got from place to place. It changed our way of life in ways that were impossible to predict.

Look at other significant technologies, such as electricity and computers, and you find a similar story. It’s ecosystems, rather than inventions, that drive the future.

4. We Can Only Validate Patterns Going Forward

G. H. Hardy once wrote that, “a mathematician, like a painter or poet, is a maker of patterns. If his patterns are more permanent than theirs, it is because they are made with ideas.” Futurists often work the same way, identifying patterns in the past and present, then extrapolating them into the future. Yet there is a substantive difference between patterns that we consider to be preordained and those that are to be discovered.

Think about Steve Jobs and Appl for a minute and you will probably recognize the pattern and assume I misspelled the name of his iconic company by forgetting to include the “e” at the end. But I could just as easily have been about to describe an “Applet” he designed for the iPhone or some connection between Jobs and Appleton, WI, a small town outside Green Bay.

The point is that we can only validate patterns going forward, never backward. That, in essence, is what Steve Blank means when he says that business plans rarely survive first contact with customers and why his ideas about lean startups are changing the world. We need to be careful about the patterns we think we see. Some are meaningful. Others are not.

The problem with patterns is that the future is something we create, not some preordained plan that we are beholden to. The things we create often become inflection points and change our course. That may frustrate the futurists, but it’s what makes life exciting for the rest of us.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Eureka Moment Fallacy

GUEST POST from Greg Satell

In 1928, Alexander Fleming arrived at his lab to find that a mysterious mold had contaminated his Petri dishes and was eradicating the bacteria colonies he was trying to grow. Intrigued, he decided to study the mold. That’s how Fleming came to be known as the discoverer of penicillin.

Fleming’s story is one that is told and retold because it reinforces so much about what we love about innovation. A brilliant mind meets a pivotal moment of epiphany and — Eureka! — the world is forever changed. Unfortunately, that’s not really how things work. It wasn’t true in Fleming’s case and it won’t work for you.

The truth is that innovation is never a single event, but a process of discovery, engineering and transformation, which is why penicillin didn’t become commercially available until 1945 (and the drug was actually a different strain of the mold than Fleming had discovered). We need to stop searching for Eureka moments and get busy with the real work of innovating.

Learning To Recognize And Define Problems

Before Fleming, there was Ignaz Semmelweis, and to understand Fleming’s story it helps to understand that of his predecessor. Much like Fleming, Semmelweis was a bright young man of science who had a moment of epiphany. In Semmelweis’s case, he was one of the first to realize that infections could spread from doctor to patient.

That simple insight led him to institute a strict regime of hand washing at Vienna General Hospital. Almost immediately, the incidence of deadly childbed fever dropped precipitously. Yet his ideas were not accepted at the time and Semmelweis didn’t do himself any favors by refusing to format his data properly or to work collaboratively to build support for his ideas. Instead, he angrily railed against the medical establishment he saw as undermining his work.

Semmelweis would die in an insane asylum, ironically from an infection he contracted under care, and never got to see the germ theory of disease emerge from the work of people like Louis Pasteur and Robert Koch. That’s what led to the study of bacteriology, sepsis and Alexander Fleming growing those cultures that were contaminated by the mysterious mold.

When Fleming walked into his lab on that morning in 1928, he was bringing a wealth of experiences to the problem. During World War I, he had witnessed many soldiers die from sepsis and how applying antiseptic agents to the wound often made the problem worse. Later, he found that nasal secretions inhibited bacterial growth.

So when the chance discovery of penicillin happened, it was less a single moment than a “happy accident” that he had spent years preparing for.

Combining Domains

Today, we remember Fleming’s discovery of penicillin as a historic breakthrough, but it wasn’t considered to be so at the time. In fact, when it was first published in the British Journal of Experimental Pathology, nobody really noticed. The truth is that what Fleming discovered couldn’t have cured anybody. It was just a mold secretion that killed bacteria in a Petri dish.

Perhaps even more importantly, Fleming was ill-equipped to transform penicillin into something useful. He was a pathologist who largely worked alone. To transform his discovery into an actual cure, he would need chemists and other scientists, as well as experts in fermentation, manufacturing, logistics and many other things. To go from milliliters in the lab to metric tons in the real world is no trivial thing.

So Fleming’s paper lay buried in a scientific journal for ten years before it was rediscovered by a team led by Howard Florey and Ernst Chain at the University of Oxford. Chain, a world-class biochemist, was able to stabilize the penicillin compound and another member of the team, Norman Heatley, developed a fermentation process to produce it in greater quantities.

Because Florey and Chain led a larger team in a bigger lab, they also had the staff and equipment to perform experiments on mice, which showed that penicillin was effective in treating infections. However, when they tried to cure a human, they found that they were not able to produce enough of the drug. They simply didn’t have the capacity.

Driving A Transformation

By the time Florey and Chain had established the potential of penicillin it was already 1941 and England was at war, which made it difficult to find funding to scale up their work. Luckily, Florey had earlier held a research fellowship in the United States and was able to secure a grant to travel to America and continue the development of penicillin with US-based labs.

That collaboration produced two more important breakthroughs. First, they were able to identify a more powerful strain of the penicillin mold. Second, they developed a fermentation process utilizing corn steep liquor as a medium. Corn steep liquor was common in the American Midwest, but virtually unheard of back in England.

Still, they needed to figure out a way to scale up production and that was far beyond the abilities of research scientists. However, the Office of Scientific Research and Development (OSRD), a government agency in charge of wartime research, understood the potential of penicillin for the war effort and initiated an aggressive program, involving two dozen pharmaceutical companies, to overcome the challenges.

Working feverishly, they were able to produce enough penicillin to deploy the drug for D-Day in 1944 and saved untold thousands of lives. After the war was over, in 1945, penicillin was made commercially available, which touched off a “golden age” of antibiotic research and new drugs were discovered almost every year between 1950 and 1970.

Innovation Is Never A Single Event

The story of Fleming’s Eureka! moment is romantic and inspiring, but also incredibly misleading. It wasn’t one person and one moment that changed the world, but the work of many over decades that made an impact. As I explain in my book, Cascades, it is small groups, loosely connected, but united by a shared purpose that drive transformational change.

In fact, the development of penicillin involved not one, but a series of epiphanies. First, Fleming discovered penicillin. Then, Florey and Chain rediscovered Fleming’s work. Chain stabilized the compound, Heatley developed the fermentation process, other scientists identified the more powerful strain and corn steep liquor as a fermentation medium. Surely, there were many other breakthroughs involving production, logistics and treatment that are lost to history.

This is not the exception, but the rule. The truth is that the next big thing always starts out looking like nothing at all. For example, Jim Allison, who recently won the Nobel Prize for his development of cancer immunotherapy, had his idea rejected by pharmaceutical companies, much like the medical establishment dismissed Semmelweis back in the 1850s.

Yet Allison kept at it. He continued to pound the pavement, connect and collaborate with others, and that’s why today he is hailed as a pioneer and a hero. That’s why we need to focus less on inventions and more on ecosystems. It’s never a single moment of Eureka! that truly changes the world, but many of them.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

How to Fix Corporate Transformation Failure

GUEST POST from Greg Satell

We live in an age in which change has become the only constant. So it’s not surprising that change management models have become popular. Executives are urged to develop a plan to communicate the need for change, create a sense of urgency and then drive the process through to completion.

Unfortunately, the vast majority of these efforts fail and it’s not hard to see why. Anybody who’s ever been married or had kids knows first-hand how difficult it can be to convince even a single person of something. Any effort to persuade hundreds, if not thousands, of people through some kind of mass effort is setting the bar pretty high.

However, as I explain in Cascades, what you can do is help them convince each other by changing the dynamic so that people enthusiastic about change can influence other (slightly less) enthusiastic people. The truth is that small groups, loosely connected, but united by a shared purpose drive transformational change. So that’s where you need to start.

The Power Of Local Majorities

In the 1950s, the prominent psychologist Solomon Asch undertook a pathbreaking series of conformity studies. The design of the study was simple, but ingenious. He merely showed people pairs of cards, asking them to match the length of a single line on one card with one of three on an adjacent card. The answer was meant to be obvious.

However, as the experimenter went around the room, one person after another gave the same wrong answer. When it reached the final person in the group (in truth, the only real subject, the rest were confederates), the vast majority of the time that person conformed to the majority opinion, even if it was obviously wrong!

Majorities don’t just rule, they also influence, especially local majorities. The effect is even more powerful when the issue at hand is more ambiguous than the length of a line on a card. More recent research suggests that the effect applies not only to people we know well, but that we are also influenced even by second and third-degree relationships.

So perhaps the best way to convince somebody of something is to surround them with people who hold a different opinion. To extend the marriage analogy a bit, I might have a hard time convincing my wife or daughter, say, that my jokes are funny and not at all corny, but if they are surrounded by people who think I’m hilarious, they’ll be more likely to think so too.

Changing Dynamics

The problem with creating change throughout an organization is that any sufficiently large group of people will hold a variety of opinions about virtually any matter and these opinions tend to be widely dispersed. So the first step in creating large-scale change is to start thinking about where to target your efforts and there are two tools that can help you do that.

The first, called the Spectrum of Allies, helps you identify which people are active or passive supporters of the change you want to bring about, which are neutral and which actively or passively oppose it. Once you are able to identify these groups, you can start mobilizing the most enthusiastic supporters to start influencing the other groups to shift their opinions. You probably won’t ever convince the active opposition, but you can isolate and neutralize them.

The second tool, called the Pillars of Support, identifies stakeholder groups that can help bring change about. In a typical corporation, these might be business unit leaders, customer groups, industry associations, regulators and so on. These stakeholders are crucial for supporting the status quo, so if you want to drive change effectively, you will need to pull them in.

What is crucial is that every tactic mobilizes a specific constituency in the Spectrum of Allies to influence a specific stakeholder group in the Pillars of Support. For example, in 1984, Anti-Apartheid activists spray-painted “WHITES ONLY” and “BLACKS” above pairs of Barclays ATMs in British university towns to draw attention to the bank’s investments in South Africa.

This, of course, had little to no effect on public opinion in South Africa, but it meant a lot to the English university students the bank wanted to attract. Its share of student accounts quickly plummeted from 27% to 15%, and two years later Barclays pulled all of its investments out of the country, which greatly damaged the Apartheid regime.

Identifying A Keystone Change

Every change effort begins with a grievance: sales are down, customers are unhappy or perhaps a new technology threatens to disrupt a business model. Change starts when leaders are able to articulate a clear and affirmative “vision for tomorrow” that is empowering and points toward a better future.

However, the vision can rarely be achieved all at once. That’s why successful change efforts define a keystone change, which identifies a tangible goal, involves multiple stakeholders and paves the way for future change. A successful keystone change can supercharge your efforts to shift the Spectrum of Allies and pull in Pillars of Support.

For example, when Experian’s CIO, Barry Libenson, set out to shift his company to the cloud, he knew it would be an enormous undertaking. As one of the largest credit bureaus in the world, there were serious concerns that shifting its computing infrastructure would create vulnerabilities in its cybersecurity and its business model.

So rather than embarking on a multi-year death march to implement cloud technology throughout the company, he started with building internal APIs to build momentum. The move involved many of the same stakeholders he would need for the larger project, but involved far less risk and was able to show clear benefits that paved the way for future change.

In Cascades, I detail a number of cases, from major turnarounds at companies like IBM and Alcoa, to movements to gain independence in India and to secure LGBT rights in America. In each case, a keystone change played a major role in bringing change about.

Surviving Victory

As Saul Alinsky pointed out decades ago, every revolution inspires a counterrevolution. So many change efforts that show initial success ultimately fail because of backlash from key stakeholders. That’s why it is crucial to plan how you will survive victory by rooting your change effort in values, skills and capabilities, rather than in specific objectives or tactics.

For example, Blockbuster Video’s initial response to Netflix in 2004 was extremely successful and, by 2007, it was winning new subscribers faster than the upstart. Yet because it rooted its plan solely in terms of strategy and tactics, the changes were only skin deep. After the CEO left because of a compensation dispute, the strategy was quickly reversed. Blockbuster went bankrupt a few years later.

Compare that to the success at Experian. In both cases, large, successful enterprises needed to move against a disruptive threat. In both cases, legacy infrastructure and business models needed to be replaced. At Experian, however, the move was not rooted in a strategy imposed from above, but through empowering the organization with new skills and capabilities.

That made all the difference, because rather than having to convince the rank and file of the wisdom of moving to the cloud, Libenson was able to empower those already enthusiastic about the initiative. They then became advocates, brought others along and, before long, the enthusiasts soon outnumbered the skeptics.

The truth is you can’t overpower, bribe or coerce people to embrace change. By focusing on changing the dynamics upon which a transformation can take place, you can empower those within your organization to drive change themselves. The role of a leader is no longer to plan and direct action, but to inspire and empower belief.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

AI and the Productivity Paradox

GUEST POST from Greg Satell

In the 1970s and 80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely, though, productivity growth decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms began to see the possibilities to shift their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back office systems or, later, Macintoshes or PCs with some basic software to sit on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry and process specific software and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped that “you can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient, but least discussed aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. The fact that we use digital technology to execute machine learning algorithms is actually often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

Avoid These Four Myths While Networking Your Organization

GUEST POST from Greg Satell

In an age of disruption, everyone has to adapt eventually. However, the typical organization is ill-suited to change direction. Managers spend years—and sometimes decades—working to optimize their operations to deliver specific outcomes and that can make an organization rigid in the face of a change in the basis of competition.

So it shouldn’t be surprising that the idea of a networked organization has come into vogue. While hierarchies tend to be rigid, networks are highly adaptable and almost infinitely scalable. Unfortunately, popular organizational schemes such as matrixed management and Holacracy have had mixed results, at best.

The truth is that networks have little to do with an organization chart and much more to do with how informal connections form in your organization, especially among lower-level employees. In fact, coming up with a complex scheme is likely to do little more than cause a lot of needless confusion. Here are the myths you need to avoid.

Myth #1: You Need To Restructure Your Organization

In the early 20th century, the great sociologist Max Weber noted that the sweeping industrialization taking place would lead to a change in how organizations operated. As cottage industries were replaced by large enterprises, leadership would have to become less traditional and charismatic and more organized and rational.

He also foresaw that jobs would need to be broken down into small, specific tasks and be governed by a system of hierarchy, authority and responsibility. This would require a more formal mode of organization—a bureaucracy—in which roles and responsibilities were clearly defined. Later, executives such as Alfred Sloan at General Motors perfected the model.

Most enterprises are still set up this way because it remains the most efficient way to organize tasks. It aligns authority with accountability and optimizes information flow. Everybody knows where they stand and what they are responsible for. Organizational restructures are painful and time consuming because they disrupt and undermine the normal workflow.

In fact, reorganizations can backfire if they cut informal ties that don’t show up on the organization chart. So a better path is to facilitate informal ties so that people can coordinate work that falls in between organizational boundaries. In his book One Mission, McChrystal Group President Chris Fussell calls this a “hybrid organization.”

Myth #2: You Have To Break Down Silos

In 2005, researchers at Northwestern University took on the age-old question: “What makes a hit on Broadway?” They looked at all the factors you would expect to influence success, such as the production budget, the marketing budget and the track record of the director. What they found, however, was surprising.

As it turns out, the most important factor was how the informal networks of the cast and crew were structured. If nobody had ever worked together before, results were poor, but if too many people had previously worked together, results also suffered. It was the middle range, where there was both familiarity and disruption, that produced the best results.
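The kind of metric behind a finding like this can be sketched in a few lines. The function below is a hypothetical illustration (the name and data are mine, not the researchers’): it scores a team by the fraction of member pairs who have collaborated before, so 0.0 means total strangers and 1.0 means everyone already knows everyone — with the study suggesting the sweet spot lies somewhere in between.

```python
from itertools import combinations

def repeat_collaboration_fraction(team, past_teams):
    """Fraction of pairs in `team` that have worked together on a past team.

    0.0 = total strangers, 1.0 = every pair has collaborated before.
    """
    # Collect every pair that appeared together on any previous team.
    past_pairs = set()
    for t in past_teams:
        past_pairs.update(frozenset(p) for p in combinations(t, 2))

    pairs = [frozenset(p) for p in combinations(team, 2)]
    if not pairs:
        return 0.0
    repeats = sum(1 for p in pairs if p in past_pairs)
    return repeats / len(pairs)

# A cast of four where only two members share one prior production:
history = [["Ann", "Bob"]]
score = repeat_collaboration_fraction(["Ann", "Bob", "Cal", "Dee"], history)
print(score)  # one repeat pair out of six possible pairs
```

A real study would of course weight this by role, recency and network position, but even this toy version captures the intuition: the score distinguishes all-stranger teams (0.0) from all-veteran teams (1.0), and the middle range is where the study found the best results.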

Notice how the study doesn’t mention anything about the formal organization of the cast and crew. Broadway productions tend to have very basic structures, with a director leading the creative team, a producer managing the business side and others heading up things like music, choreography and so on. That makes it easy for a cast and crew to set up, because everyone knows their place.

The truth is that silos exist because they are centers of capability. Actors work with actors. Set designers work with set designers and so on. So instead of trying to break down silos, you need to start thinking about how to connect them. In the case of the Broadway plays, that was done through previous working relationships, but there are other ways to achieve the same goal.

Myth #3: You Need To Identify Influentials, Hubs And Bridges

In Malcolm Gladwell’s breakaway bestseller The Tipping Point, he wrote, “The success of any kind of social epidemic is heavily dependent on the involvement of people with a particular and rare set of social gifts,” which he called “The Law of the Few.” Before long, it seemed like everybody from marketers to organizational theorists was looking to identify a mysterious group of people called “influentials.”

Yet as I explain in Cascades, decades of empirical evidence shows that influentials are a myth. While it is true that some people are more influential than others, their influence is highly contextual and not significant enough to be worth the trouble of identifying them. Also, a study that analyzed the emails of 60,000 people found that information does not need to rely on hubs or bridges.

With that said, there are a number of ways to network your organization by optimizing organizational platforms for connection. For example, Facebook’s Engineering Bootcamp found that “bootcampers tend to form bonds with their classmates who joined near the same time and those bonds persist even after each has joined different teams.”

One of my favorite examples of how even small tweaks can improve connectivity is a project done at a bank’s call center. When it was found that a third of variation in productivity could be attributed to informal communication outside of meetings, the bank arranged for groups to go on coffee break together, increasing productivity by as much as 20% while improving employee satisfaction at the same time.

Myth #4: Networks Don’t Need Leadership

Perhaps the most damaging myth about networks is that they don’t need strong leadership. Many observers have postulated that because technology allows people to connect with greater efficiency, leaders are no longer critical to organizing work. The reality is that nothing could be further from the truth.

The fact is that it is small groups, loosely connected, but united by a shared purpose that drive change. While individuals can form loosely connected small groups, they can rarely form a shared purpose by themselves. So the function of leadership these days is less to plan and direct action than it is to empower and inspire belief.

So perhaps the biggest shift is not one of tactics, but of mindset. In traditional hierarchies, information flows up through the organization and orders flow down. That helps leaders maintain control, but it also makes the organization slow to adapt and vulnerable to disruption.

Leaders need to learn how to facilitate information flow through horizontal connections so people lower down in the organization can act on it without waiting for approval. That’s where shared purpose comes in. Without a common purpose and shared values, pushing decision making down will only result in chaos. It’s much easier to get people to do what you want if they already want what you want.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


The Malcolm Gladwell Trap


GUEST POST from Greg Satell

A few years ago I bought a book that I was really excited about. It’s one of those books that created a lot of buzz and it was highly recommended by someone I respect. The author’s pedigree included Harvard, Stanford, McKinsey and a career as a successful entrepreneur and CEO.

Yet about halfway in, I noticed that he was choosing facts to fit his story and ignoring critical truths that would indicate otherwise, much as Malcolm Gladwell often does in his books. Once I noticed a few of these glaring oversights, I found I couldn’t fully trust anything the author wrote and set the book aside.

Stories are important and facts matter. When we begin to believe in false stories, we begin to make decisions based on them. When these decisions go awry, we’re likely to blame other factors, such as ourselves, those around us or other elements of context and not the false story. That’s how many businesses fail. They make decisions based on the wrong stories.

Don’t Believe Everything You Think

Go to just about any innovation conference and you will find some pundit on stage telling a story about a famous failure, usually Blockbuster, Kodak or Xerox. In each case, the reason given for the failure is colossal incompetence by senior management: Blockbuster didn’t recognize the Netflix threat. Kodak invented, but then failed to market, a digital camera. Xerox PARC developed technology, but not products.

In each case, the main assertion is demonstrably untrue. Blockbuster did develop and successfully execute a digital strategy, but its CEO left the company due to a dispute and the strategy was reversed. Kodak’s EasyShare line of digital cameras was a top seller, but couldn’t replace the massive profits the company made developing film. The development of the laser printer at Xerox PARC actually saved the company.

None of this is very hard to uncover. Still, the author fell for two of these bogus myths (Kodak and Xerox), even after obviously doing significant research for the book. Most probably, he just saw something that fit with his narrative and never bothered to question whether it was true, because he was too busy validating what he already believed.

This type of behavior is so common that there is a name for it: confirmation bias. We naturally seek out information that confirms our existing beliefs. It takes significant effort to challenge our own assumptions, so we rarely do. To overcome that is hard enough. Yet that’s only part of the problem.

Majorities Don’t Just Rule, They Also Influence

In the 1950s, Solomon Asch undertook a pathbreaking series of conformity studies. What he found was that in small groups, people will conform to a majority opinion. The idea that people have a tendency toward conformity is nothing new, but that they would give obviously wrong answers to simple and unambiguous questions was indeed shocking.

Now think about how hard it is for a more complex idea to take hold across a broad spectrum of people, each with their own biases and opinions. The truth is that majorities don’t just rule, they also influence. More recent research suggests that the effect applies not only to people we know well, but that we are also influenced even by second and third degree relationships.

We tend to accept the beliefs of people around us as normal. So if everybody believes that the leaders of Blockbuster, Kodak and Xerox were simply dullards who were oblivious to what was going on around them, then we are very likely to accept that as the truth. Combine this group effect with confirmation bias and it becomes very hard to see things differently.

That’s why it’s important to step back and ask hard questions. Why did these companies fail? Did foolish and lazy people somehow rise to the top of successful organizations, or did smart people make bad decisions? Was there something else to the story? Given the same set of facts, would we act any differently?

The Inevitable Paradigm Shift

The use of the term “paradigm shift” has become so common that most people are unaware that it started out having a very specific meaning. The idea of a paradigm shift was first established by Thomas Kuhn in his book The Structure of Scientific Revolutions, to describe how scientific breakthroughs come to the fore.

It starts with an established model, the kind we learn in school or during initial training for a career. Models become established because they are effective and the more proficient we become at applying a good model, the better we perform. The leaders in any given field owe much of their success to these models.

Yet no model is perfect and eventually anomalies show up. Initially, these are regarded as “special cases” and are worked around. However, as the number of special cases proliferate, the model becomes increasingly untenable and a crisis ensues. At this point, a fundamental change in assumptions has to take place if things are to move forward.

The problem is that most people who are established in the field believe in the traditional model, because that’s what most people around them believe. So they seek out facts to confirm these beliefs. Few are willing to challenge what “everybody knows” and those who do are often put at great professional and reputational risk.

Why We Fail To Adapt

Now we can begin to see why not only businesses, but whole industries get disrupted. We tend to defend, rather than question, our existing beliefs and those around us often reinforce them. To make matters worse, by this time the idea has become so well established that we will often incur switching costs if we abandon it. That’s why we fail to adapt.

Yet not everybody shares our experiences. Others, who have not grown up with the conventional wisdom, often do not have the same assumptions. They also don’t have an existing peer group that will enforce those assumptions. So for them, the flaws are much easier to see, as are the opportunities to do things another way.

Of course, none of this has to happen. As I describe in Mapping Innovation, some companies, such as IBM and Procter & Gamble, have survived for over a century because they are always actively looking for new problems to solve, which forces them to look for new ideas and insights. It compels them to question what they think they know.

Getting stories right is hard work. You have to force yourself. However, we all have an obligation to get it right. For me, that means relentlessly checking every fact with experts, even for things that I know most people won’t notice. Inevitably, I get things wrong—sometimes terribly wrong—and need to be corrected. That’s always humbling.

I do it because I know stories are powerful. They take on a life of their own. Getting them right takes effort. As my friend Whitney Johnson points out, the best way to avoid disruption is to first disrupt yourself.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Where People Go Wrong with Minimum Viable Products


GUEST POST from Greg Satell

Ever since Eric Ries published his bestselling book, The Lean Startup, the idea of a minimum viable product (MVP) has captured the imagination of entrepreneurs and product developers everywhere. The idea of testing products faster and cheaper has an intuitive logic that simply can’t be denied.

Yet what is often missed is that a minimum viable product isn’t merely a stripped down version of a prototype. It is a method to test assumptions and that’s something very different. A single product often has multiple MVPs, because any product development effort is based on multiple assumptions.

Developing an MVP isn’t just about moving faster and cheaper, but also about minimizing risk. In order to test assumptions, you first need to identify them, and that’s a soul-searching process. You have to take a hard look at what you believe, why you believe it and how those ideas can be evaluated. Essentially, MVPs work because they force you to do the hard thinking early.

Every Idea Has Assumptions Built In

In 1999, Nick Swinmurn had an idea for a business. He intended to create a website to sell shoes much like Amazon did for books. This was at the height of the dotcom mania, when sites were popping up to sell everything from fashion to pet food to groceries, so the idea itself wasn’t all that original or unusual.

What Swinmurn did next, however, was. Rather than just assuming that people would be willing to buy shoes online or conducting expensive marketing research, he built a very basic site, went to a shoe store and took pictures of shoes, which he placed on the site. When he got an order, he bought the shoes retail and shipped them out. He lost money on every sale.

That’s a terrible way to run a business, but a great — and incredibly cheap — way to test a business idea. Once he knew that people were willing to buy shoes online, he began to build all the elements of a fully functioning business. Ten years later, the company he created, Zappos, was acquired by Amazon for $1.2 billion.

Notice how he didn’t just assume that his business idea was viable. He tested it and validated it. He also learned other things, such as what styles were most popular. Later, Zappos expanded to include handbags, eyewear, clothing, watches, and kids’ merchandise.

The Cautionary Tale Of Google Glass

Now compare how Swinmurn launched his business with Google’s Glass debacle. Instead of starting with an MVP, it announced a full-fledged prototype complete with a snazzy video. Through augmented reality projected onto the lenses, users could seamlessly navigate an urban landscape, send and receive messages and take photos and videos. It generated a lot of excitement and seemed like a revolutionary new way to interact with technology.

Yet criticism quickly erupted. Many were horrified that hordes of wandering techno-hipsters could be surreptitiously recording us. Others had safety concerns about everything from people being distracted while driving to the devices being vulnerable to hacking. Soon there was a brewing revolt against “Google Glassholes.”

Situations like the Google Glass launch are startlingly common. In fact, the vast majority of new product launches fail because there’s no real way to know whether you have the right product-market fit until customers actually get a chance to interact with the product. Unfortunately, most product development efforts start by seeking out the largest addressable market. That’s almost always a mistake.

If you are truly creating something new and different, you want to build for the few and not the many. That’s the mistake that Google made with its Glass prototype.

Identifying A Hair On Fire Use Case

The alternative to trying to address the largest addressable market is to identify a hair-on-fire use case. The idea is to find a potential customer that needs to solve a problem so badly that they almost literally have their hair on fire. These customers will be more willing to co-create with you and more likely to put up with the inevitable bugs and glitches that always come up.

For example, Tesla didn’t start out by trying to build an electric car for the masses. Instead, it created a $100,000 status symbol for Silicon Valley millionaires. Because these customers could afford multiple cars, range wasn’t as much of a concern. The high price tag also made a larger battery more feasible. The original Tesla Roadster had a range of 244 miles.

The Silicon Valley set were customers with their hair on fire. They wanted to be seen as stylish and eco-friendly, so were willing to put up with the inevitable limitations of electric cars. They didn’t have to depend on them for their commute or to pick the kids up at soccer practice. As long as the car was cool enough, they would buy it.

Interestingly, Google Glass made a comeback as an industrial product and had a nice run from 2019 to 2023 before being discontinued for good. For hipsters, an augmented reality product is far from a necessity, but for a business that needs to improve productivity it can be a true “hair-on-fire” use case. As the product improves and gains traction, it’s entirely possible that it eventually makes its way back to the consumer market in some form.

Using An MVP To Pursue A Grand Challenge

One of the criticisms of minimum viable products is that they are only suited for simple products and tweaks, rather than truly ambitious projects. Nothing could be further from the truth. The reality is that the higher your ambitions, the more important it is for you to start with a minimum viable product.

IBM is one company that has a long history of pursuing grand challenges, such as the Deep Blue project, which defeated world champion Garry Kasparov at chess, and the Blue Gene project, which created a new class of “massively parallel” supercomputers. More recent were the Jeopardy grand challenge, which led to the development of its Watson business, and the Debater project.

Notice that none of these were fully featured products. Rather, they were attempts to, as IBM’s Chief Innovation Officer Bernie Meyerson put it to me, invent something that “even experts in the field regard as an epiphany and changes assumptions about what’s possible.” That would be hard to do if you were trying to create a full-featured product for a demanding customer.

That’s the advantage of creating an MVP. It essentially acts as a research lab where you can safely test hypotheses and eliminate sources of uncertainty. Once you’ve done that, you can get started trying to build a real business.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Why Communication Skills Trump Coding for Our Kids’ Future


GUEST POST from Greg Satell

Many say that coding is the new literacy. Kids are encouraged to learn programming in school and take coding courses online. In that famous scene in The Graduate, Dustin Hoffman’s character was encouraged by a family friend to go into plastics. If it were shot today, the advice would probably have been computer code.

This isn’t actually that new. I remember first being taught how to code in middle school in the early 80s in BASIC (a mostly defunct language now). Yet even today, coding is far from an essential skill. In fact, with the rise of no-code platforms, there is a strong argument to be made that code is becoming less important.

Don’t get me wrong, there’s still plenty of coding to be done on the back end and programming is certainly a perfectly reasonable thing to learn. However, there’s no reason people need to learn it to have a successful, productive career. On the other hand, writing, as well as other communication skills, will only become more important in the decades to come.

The Future Is Not Digital

During the past few decades, digital technology has become largely synonymous with innovation. Every 18 months or so, a new generation of processors has come out that was faster, more powerful and cheaper than its predecessors. Entrepreneurs would leverage these new capabilities to create exciting new products and disrupt entire industries.

Yet now that’s all coming to an end. Every technology eventually hits theoretical limits and that’s where we are now with regard to digital processors. We have maybe one or two generations of advancement and then, with some clever workarounds, we may be able to stretch the technology for a decade or so, but it’s highly unlikely that it’ll last any longer than that.

That’s not so horrible. There’s no 11th Commandment that says, “Thou shalt compute in ones and zeroes,” and there are nascent architectures that are potentially far more powerful than digital computers, such as quantum and neuromorphic computing. Neither of these, however, is a digital technology. They operate on fundamentally different logic and will use different code.

So instead of learning to code, maybe our kids would be better served by learning about quantum mechanics or neurology. Those would seem to be far more relevant to their future.

The Shift From Bits To Atoms

Digital technology is largely virtual. Transistors on silicon wafers compute ones and zeroes so that images can flash across our screens. That can be very useful, because we can simulate things on a screen much more cheaply than in the physical world, but it’s also limited. We can’t eat, wear or live in a virtual world.

The important technologies of the next generation, however, will be based on atoms rather than bits. Advances in genomics have led to the new field of synthetic biology, and a revolution in materials science is transforming our ability to develop advanced materials for manufacturing, clean energy and space exploration. So maybe instead of learning how to code, kids should be studying genetics and chemistry.

As we develop new technologies, we will also need to design experiences so that we can use them more effectively. For example, we need linguists and conversational analysts to design better voice interfaces. Kids who study those things may be able to build great careers.

The rapid pace of technological advancement over the next generation will surely put stress on society. Digital technology has helped produce massive income inequality and a rise in extremism. We will need sociologists and political scientists to help us figure out how to cope with these new, much more powerful technologies.

Collaboration Is The New Competitive Advantage

When my generation was in school, we were preparing for a future that seemed pretty clear cut. We assumed we would become doctors, lawyers, executives and engineers and spend our entire lives working in our chosen fields. It didn’t turn out that way. These days a business model is unlikely to last a decade, much less a lifetime.

Kids today need to prepare to become lifelong learners because the pace of change will not slow down. In fact, it is likely to accelerate beyond anything we can imagine today. The one thing we can predict about the future is that collaboration will be critical for success. People like geneticists and quantum scientists will need to work closely with chemists, designers, sociologists and specialists in fields that haven’t even been invented yet.

These are, in fact, longstanding trends. The journal Nature recently noted that the average scientific paper today has four times as many authors as one did in 1950 and the work they are doing is far more interdisciplinary and done at greater distances than in the past. We can only expect these trends to become more prominent in the future.

In order to collaborate effectively, you need to communicate effectively and that’s where writing comes in. Being able to express thoughts and ideas clearly and cogently is absolutely essential to collaboration and innovation.

Writing Well Is Thinking Well

Probably the most overlooked aspect of writing is that it does more than communicate thoughts; it helps form them. As Fareed Zakaria has put it: “Thinking and writing are inextricably intertwined. When I begin to write, I realize that my ‘thoughts’ are usually a jumble of half-baked, incoherent impulses strung together with gaping logical holes between them.”

“Whether you’re a novelist, a businessman, a marketing consultant or a historian,” he continues, “writing forces you to make choices and it brings clarity and order to your ideas.” Zakaria also points to Jeff Bezos’ emphasis on memo writing as an example of how clarity of expression leads to innovation.

In fact, Amazon considers writing so essential to its ability to innovate that it has become a key part of its culture. It’s hard to make much of a career at Amazon if you cannot write well, because to create products and services that are technically sound, easy to use and efficiently executed, a diverse group of highly skilled people need to tightly coordinate their efforts.

Today, as the digital revolution comes to an end and we enter a new era of innovation, it’s easy to get overwhelmed by the rapid advancement of breakthrough technologies. However, the key to success in our uncertain future will be humans collaborating with other humans to design work for machines. That starts with writing effectively.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


The Robots Aren’t Really Going to Take Over


GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that doesn’t seem bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs to be done?

The truth is that automation doesn’t replace jobs, it replaces tasks and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t with automation itself, but what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms not led to mass unemployment, they have, as noted above, led to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine that anyone working in agriculture a hundred years ago would have been horrified to learn that those jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot, you could tell them that “work” would consist largely of talking to other people. They might have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have fewer farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Why Most Corporate Innovation Programs Fail

(And How To Make Them Succeed)

GUEST POST from Greg Satell

Today, everybody needs to innovate, so it shouldn’t be surprising that corporate innovation programs have become wildly popular. Because there is an inherent tradeoff between innovation and the type of optimization that operational executives excel at, creating a separate unit to address innovation just makes intuitive sense.

Yet corporate innovation programs often fail and it’s not hard to see why. Unlike other business functions, like marketing or finance, in a healthy organization everybody takes pride in their ability to innovate. Setting up a separate innovation unit can often seem like an affront to those who work hard to innovate in operational units.

Make no mistake, a corporate innovation program is no panacea. It doesn’t replace the need to innovate every day. Yet a well designed program can augment those efforts, take the business in new directions and create real value. The key to a successful innovation program is a clear mission, built on shared purpose, aimed at solving important problems.

A Good Innovation Program Extends, It Doesn’t Replace

It’s no secret that Alphabet is one of the most powerful companies in the world. Nevertheless, it has a vulnerability that is often overlooked. Much like Xerox and Kodak decades ago, it’s highly dependent on a single revenue stream. In 2018, 86% of its revenues came from advertising, mostly from its Google search business.

It is with this in mind that the company created its X division. Because the unit was set up to pursue opportunities outside of its core search business, it didn’t encounter significant resistance. In fact, the X division is widely seen as an extension of what made Alphabet so successful in the first place.

Another important aspect is that the X division provides a platform to incubate internal projects. For example, Google Brain started out as a “20% time project.” As it progressed and needed more resources, it was moved to the X division, where it was scaled up further. Eventually, it returned to the mothership and today is an integral part of the core business.

Notice how the vision of the X division was never to replace innovation efforts in the core business, but to extend them. That’s been a big part of its success and has led to exciting new businesses like Waymo autonomous vehicles and the Verily healthcare division.

Focus On Commonality, Not Difference

All too often, innovation programs thrive on difference. They are designed to put together a band of mavericks and disruptors who think differently than the rest of the organization. That may be great for instilling a strong esprit de corps among those involved with the innovation program, but it’s likely to alienate others.

As I explain in Cascades, any change effort must be built on shared purpose and shared values. That’s how you build trust and form the basis for effective collaboration between the innovation program and the rest of the organization. Without those bonds of trust, any innovation effort is bound to fail.

You can see how that works in Alphabet’s X division. It is not seen as fundamentally different from the core Google business, but rather as channeling the company’s strengths in new directions. The business opportunities it pursues may be different, but the core values are the same.

The key question to ask is why you need a corporate innovation program in the first place. If the answer is that you don’t feel your organization is innovative enough, then you need to address that problem first. A well designed innovation program can’t be a band-aid for larger issues within the core business.

Executive Sponsorship Isn’t Enough

Clearly, no corporate innovation program can be successful without strong executive sponsorship. Commitment has to come from the top. Yet just as clearly, executive sponsorship isn’t enough. Unless you can build support among key stakeholders inside and outside the organization, support from the top is bound to erode.

For example, when Eric Haller started Datalabs at Experian, he designed it to be focused on customers, rather than ideas developed internally. “We regularly sit down with our clients and try and figure out what’s causing them agita,” he told me, “because we know that solving problems is what opens up enormous business opportunities for us.”

Because the Datalabs unit works directly with customers to solve problems that are important to them, it has strong support from a key stakeholder group. Another important aspect at Datalabs is that once a project gets beyond the prototype stage, it goes to one of the operational units within the company to be scaled up into a real business. Over the past five years, businesses originated at Datalabs have added over $100 million in new revenues.

Perhaps most importantly, Haller is acutely aware of how innovation programs can cause resentment, so he works hard to reduce tensions by building collaborations across the organization. Datalabs is not where “innovation happens” at Experian. Rather, it serves to augment and expand capabilities that were already there.

Don’t Look For Ideas, Identify Meaningful Problems

Perhaps most importantly, an innovation program should not be seen as a place to generate ideas. The truth is that ideas can come from anywhere. So designating one particular program in which ideas are supposed to happen will not only alienate the rest of the organization, it is also likely to overlook important ideas generated elsewhere.

The truth is that innovation isn’t about ideas. It’s about solving problems. In researching my book, Mapping Innovation, I came across dozens of stories from every conceivable industry and field, and each one started with someone who came across a problem they wanted to solve. Sometimes it happened by chance, but in most cases I found that great innovators were actively looking for problems that interested them.

If you look at successful innovation programs like Alphabet’s X division and Experian’s Datalabs, the fundamental activity is exploration. X division explores domains outside of search, while Datalabs explores problems that its customers need solved. Once you identify a meaningful problem, the ideas will come.

That’s the real potential of innovation programs. They provide a space to explore areas that don’t fit with the current business, but may play an important role in its future. A good innovation program doesn’t replace capabilities in the core organization, but leverages them to create new opportunities.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
