Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Why Humans Fail to Plan for the Future

GUEST POST from Greg Satell

I was recently reading Michio Kaku’s wonderful book, The Future of Humanity, about colonizing space and was amazed how detailed some of the plans are. Plans for a Mars colony, for example, are already fairly advanced. In other cases, scientists are actively thinking about technologies that won’t be viable for a century or more.

Yet while we seem to be so good at planning for life in outer space, we are much less capable of thinking responsibly about the future here on earth, especially in the United States. Our federal government deficit recently rose to 4.6% of GDP, which is obviously unsustainable in an economy that’s growing at a meager 2.3%.

That’s just one data point, but everywhere you look we seem to be unable to plan for the future. Consumer debt in the US recently hit levels exceeding those before the crash in 2008. Our infrastructure is falling apart. Air quality is getting worse. The list goes on. We need to start thinking more seriously about the future, but we don’t seem to be able to. Why is that?

It’s Biology, Stupid

The simplest and most obvious explanation for why we fail to plan for the future is basic human biology. We have pleasure centers in our brains that release a neurotransmitter called dopamine, which gives us a feeling of well-being. So, it shouldn’t be surprising that we seek to maximize our dopamine fix in the present and neglect the future.

Yuval Noah Harari made this argument in his book Homo Deus, where he claims that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, Harari argues, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points to support this view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Yet this somehow doesn’t feel persuasive. Adults in even primitive societies are expected to overcome basic urges. Citizens of Ancient Rome were taxed to pay for roads that led to distant lands and took decades to build. Medieval communities built churches that stood for centuries. Why would we somehow lose our ability to think long-term in just the past generation or so?

The Profit Motive

Another explanation of why we neglect the future is the profit motive. Pressed by demanding shareholders to deliver quarterly profits, corporate executives focus on showing short-term profits instead of investing for the future. The result is increased returns to fund managers, but a hollowing out of corporate competitiveness.

A recent article in Harvard Business Review would appear to bear this out. When a team of researchers looked into the health of the innovation ecosystem in the US, they found that corporate America has largely checked out. They also observed that storied corporate research labs, such as Bell Labs and Xerox PARC, have diminished over time.

Yet take a closer look and the argument doesn’t hold up. In fact, the data from the National Science Foundation shows that corporate research has increased from roughly 40% of total investment in the 1950s and 60s to more than 60% today. At the same time, while some firms have closed research facilities, others, such as Microsoft, IBM and Google, have either opened new ones or greatly expanded previous efforts. Overall R&D spending has risen over time.

Take a look at how Google innovates and you’ll be able to see the source of some of the dissonance. Fifty years ago, the only real option for corporate investment in research was a corporate lab. Today, however, there are many other avenues, including partnerships with academic researchers, internal venture capital operations, incubators, accelerators and more.

The Free Rider Problem

A third reason we may fail to invest in the future is the free rider problem. In this view, the problem is not that we don’t plan for the future, but that we don’t want to spend money on others who are undeserving. For example, why should we pay higher taxes to educate kids from outside our communities? Or to fund infrastructure projects that are wasteful and corrupt?

This type of welfare queen argument can be quite powerful. Although actual welfare fraud has been shown to be incredibly rare, there are many who believe that the public sector is inherently wasteful and money would be more productively invested elsewhere. This belief doesn’t only apply to low-income people, but also to “elites” such as scientists.

Essentially, this is a form of kin selection. We are more willing to invest in the future of people who we see as similar to ourselves, because that is a form of self-survival. However, when we find ourselves asked to invest in the future of those we see as different from ourselves, whether that difference is of race, social class or even profession, we balk.

Yet here again, a closer look shows that the facts don’t quite fit the narrative. Charitable giving, for example, has risen almost every year since 1977. So, it’s strange that we’re increasingly generous in giving to those who are in need, but stingy when it comes to things like infrastructure and education.

A New Age of Superstition

What’s especially strange about our inability to plan for the future is that it’s relatively new. In fact, after World War II, we invested heavily in the future. We created new avenues for scientific investment at agencies like the National Science Foundation and the National Institutes of Health, rebuilt Europe with the Marshall Plan and educated an entire generation with the GI Bill.

It wasn’t until the 1980s that our willingness to plan for and invest in the future began to wane, mostly due to two ideas that warped decision making. The first, called the Laffer Curve, argued that by lowering taxes we could increase revenue and that tax cuts, essentially, pay for themselves. The second, shareholder value, argued that whatever was best for shareholders was also best for society.

Both ideas have been partially or thoroughly debunked. Over the past 40 years, lower tax rates have consistently led to lower revenues and higher deficits. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, recently denounced the concept of shareholder value. Yet strangely, many still use both to support anti-future decisions.

We seem to be living in a new era of superstition, where mere belief is enough to inspire action. So projects which easily capture the imagination, such as colonizing Mars, are able to garner fairly widespread support, while investments in basic things like infrastructure, debt reduction or the environment are neglected.

The problem, in other words, seems to be mostly in the realm of a collective narrative. We are more than capable of enduring privation today to benefit tomorrow, just as businesses routinely accept lower profits today to invest in tomorrow. We are even capable of giving altruistically to others in need. All we need is a story to believe in.

There is, however, the possibility that it is not the future we really have a problem with, but each other, and that our lack of a common story arises from a lack of shared values, which leads to major differences in how we view the same facts. In any case, the future suffers.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Hard Facts Are a Hard Thing

GUEST POST from Greg Satell

In 1977, Ken Olsen, the founder and CEO of Digital Equipment Corporation, reportedly said, “There is no reason for any individual to have a computer in his home.” It was an amazingly foolish thing to say and, ever since, observers have pointed to Olsen’s comment to show how supposed experts can be wildly wrong.

The problem is that Olsen was misquoted. In fact, his company was actually in the business of selling personal computers and he had one in his own home. This happens more often than you would think. Other famous quotes, such as IBM CEO Thomas Watson predicting that there would be a global market for only five computers, are similarly false.

There is great fun in bashing experts, which is why so many inaccurate quotes get repeated so often. If the experts are always getting it wrong, then we are liberated from the constraints of expertise and the burden of evidence. That’s the hard thing about hard facts. They can be so elusive that it’s easy to doubt their existence. Yet they do exist and they matter.

The Search for Absolute Truth

In the early 20th century, science and technology emerged as a rising force in western society. The new wonders of electricity, automobiles and telecommunication were quickly shaping how people lived, worked and thought. Empirical verification, rather than theoretical musing, became the standard by which ideas were measured.

It was against this backdrop that Moritz Schlick formed the Vienna Circle, which became the center of the logical positivist movement and aimed to bring a more scientific approach to human thought. Throughout the 1920s and 30s, the movement spread and became a symbol of the new technological age.

At the core of logical positivism was Ludwig Wittgenstein’s theory of atomic facts, the idea that the world could be reduced to a set of statements that could be verified as being true or false—no opinions or speculation allowed. Those statements, in turn, would be governed by a set of logical algorithms which would determine the validity of any argument.

It was, to the great thinkers of the day, both a grand vision and an exciting challenge. If all facts could be absolutely verified, then we could confirm ideas with absolute certainty. Unfortunately, the effort would fail so miserably that Wittgenstein himself would eventually disown it. Instead of building a world of verifiable objective reality, we would be plunged into uncertainty.

The Fall of Logic and the Rise of Uncertainty

Ironically, while the logical positivist movement was gaining steam, two seemingly obscure developments threatened to undermine it. The first was a hole at the center of logic called Russell’s Paradox, which suggested that some statements could be both true and false. The second was quantum mechanics, a strange new science in which even physical objects could defy measurement.

Yet the battle for absolute facts would not go down without a fight. David Hilbert, the most revered mathematician of the time, created a program to resolve Russell’s Paradox. Albert Einstein, for his part, argued passionately against the probabilistic quantum universe, declaring that “God does not play dice with the universe.”

Alas, it was all for naught. Kurt Gödel would prove that any consistent logical system powerful enough to describe arithmetic contains true statements that cannot be proved within it. Alan Turing would show that not all numbers are computable. The Einstein-Bohr debates would be resolved in Bohr’s favor, destroying Einstein’s vision of an objective physical reality and leaving us with an uncertain universe.

These developments weren’t all bad. In fact, they were what made modern computing possible. However, they left us with an uncomfortable uncertainty. Facts could no longer be absolutely verifiable, but would stand until they could be falsified. We could, after thorough testing, become highly confident in our facts, but never completely sure.

Science, Truth and Falsifiability

In Richard Feynman’s 1974 commencement speech at Caltech, he recounted going to a new-age resort where people were learning reflexology. A man was sitting in a hot tub rubbing a woman’s big toe and asking the instructor, “Is this the pituitary?” Unable to contain himself, the great physicist blurted out, “You’re a hell of a long way from the pituitary, man.”

His point was that it’s relatively easy to make something appear “scientific” by, for example, having people wear white coats or present charts and tables, but that doesn’t make it real science. True science is testable and falsifiable. You can’t merely state what you believe to be true, but must give others a means to test it and prove you wrong.

This is important because it’s very easy for things to look like the truth, but actually be false. That’s why we need to be careful, especially when we believe something to be true. The burden is even greater when it is something that “everybody knows.” That’s when we need to redouble our efforts, dig in and make sure we verify our facts.

“We’ve learned from experience that the truth will out,” Feynman said. “The first principle is that you must not fool yourself—and you are the easiest person to fool.” Truth doesn’t reveal itself so easily, but it’s out there and we can find it if we are willing to make the effort.

The Lie of a Post-Truth World

Writing a non-fiction book can be a grueling process. You not only need to gather hundreds of pages of facts and mold them into a coherent story that interests the reader, but also to verify that those facts are true. For both of my books, Mapping Innovation and Cascades, I spent countless hours consulting sources and sending out fact checks.

Still, I lived in fear knowing that whatever I put on the page would permanently be there for anyone to discredit. In fact, I would later find two minor inaccuracies in my first book (ironically, both had been checked with primary sources). These were not, to be sure, material errors, but they wounded me. I’m sure, in time, others will be uncovered as well.

Yet I don’t believe that those errors diminish the validity of the greater project. In fact, I think that those imperfections serve to underline the larger truth that the search for knowledge is always a journey, elusive and just out of reach. We can struggle for a lifetime to grasp even a small part of it, but to shake free even a few seemingly insignificant nuggets can be a gift.

Yet all too often people value belief more than facts. That’s why they repeat things that aren’t factual, because they believe those things point to some deeper truth that defies facts in evidence. Yet that is not truth. It is just a way of fooling yourself and, if you’re persuasive, fooling others as well. Still, as Feynman pointed out long ago, “We’ve learned from experience that the truth will out.”

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Software Isn’t Going to Eat the World

GUEST POST from Greg Satell

In 2011, technology pioneer Marc Andreessen declared that software is eating the world. “With lower start-up costs and a vastly expanded market for online services,” he wrote, “the result is a global economy that for the first time will be fully digitally wired — the dream of every cyber-visionary of the early 1990s, finally delivered, a full generation later.”

Yet as Derek Thompson recently pointed out in The Atlantic, the euphoria of Andreessen and his Silicon Valley brethren seems to have been misplaced. Former unicorns like Uber, Lyft, and Peloton have seen their value crash, while WeWork saw its IPO self-destruct. Hardly “the dream of every cyber-visionary.”

The truth is that we still live in a world of atoms, not bits, and most of the value is created by making things we live in, wear, eat and ride in. For all of the tech world’s astounding success, it still makes up only a small fraction of the overall economy. So, taking a software-centric view, while it has served Silicon Valley well in the past, may be its Achilles heel in the future.

The Silicon Valley Myth

The Silicon Valley way of doing business got its start in 1968, when an investor named Arthur Rock backed executives from Fairchild Semiconductor to start a new company, which would become known as Intel. Unlike back east, where businesses depended on stodgy banks for finance, on the west coast venture capitalists, many of whom were former engineers themselves, would decide which technology companies got funded.

Over the years, a virtuous cycle ensued. Successful tech companies created fabulously wealthy entrepreneurs and executives, who would in turn invest in new ventures. Things shifted into hyperdrive when the company Andreessen founded, Netscape, quadrupled its value on its first day of trading, kicking off the dotcom boom.

While the dotcom bubble would crash in 2000, it wasn’t all based on pixie dust. As the economist W. Brian Arthur explained in Harvard Business Review, while traditional industrial companies were subject to diminishing returns, software companies with negligible marginal costs could achieve increasing returns powered by network effects.
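
To make Arthur’s distinction concrete, here is a minimal, purely illustrative sketch in Python. All of the numbers are hypothetical, and the squared term is a Metcalfe-style stand-in for network effects rather than anything Arthur specified; the point is the shape of the curves, not the values.

```python
# Hypothetical illustration of diminishing vs. increasing returns.
# Prices, costs and the n^2 network term are invented for this sketch.

def industrial_profit(units, price=100.0, marginal_cost=70.0, fixed_cost=5e6):
    # Each additional unit adds roughly the same margin: linear returns at best.
    return units * (price - marginal_cost) - fixed_cost

def software_profit(users, fee=10.0, marginal_cost=0.10, fixed_cost=5e6, k=1e-4):
    # Marginal cost is negligible, and the k * users**2 term stands in for
    # network effects: each new user makes the service worth more to everyone else.
    return users * (fee - marginal_cost) + k * users**2 - fixed_cost

for n in (100_000, 1_000_000, 10_000_000):
    print(f"{n:>12,}  industrial: {industrial_profit(n):>18,.0f}  "
          f"software: {software_profit(n):>18,.0f}")
```

Scale up the first business and profit grows in a straight line; scale up the second and returns accelerate, which is exactly the dynamic that made the dotcom-era excitement seem plausible.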

Yet even as real value was being created and fabulous new technology businesses prospered, an underlying myth began to take hold. Rather than treating software businesses as a special case, many came to believe that the Silicon Valley model could be applied to any business. In other words, that software would eat the world.

The Productivity Paradox (Redux)

One reason that so many outside of Silicon Valley were skeptical of the technology boom for a long time was a longstanding productivity paradox. Although throughout the 1970s and 80s, business investment in computer technology was increasing by more than 20% per year, productivity growth had diminished during the same period.

In the late 90s, however, this trend reversed itself and productivity began to soar. It seemed that Andreessen and his fellow “cyber-visionaries” were redeemed. No longer considered outcasts, they became the darlings of corporate America. It appeared that a new day was dawning and the Silicon Valley ethos took hold.

While the dotcom crash deflated the bubble in 2000, the Silicon Valley machine was soon rolling again. Web 2.0 unleashed the social web, smartphones initiated the mobile era and then IBM Watson’s defeat of human champions on the game show Jeopardy! heralded a new age of artificial intelligence.

Yet still, we find ourselves in a new productivity paradox. By 2005, productivity growth had disappeared once again and has remained diminished ever since. To paraphrase economist Robert Solow, we see software everywhere except in the productivity statistics.

The Platform Fallacy

Today, pundits are touting a new rosy scenario. They point out that Uber, the world’s largest taxi company, owns no vehicles. Airbnb, the largest accommodation provider, owns no real estate. Facebook, the most popular media owner, creates no content and so on. The implicit assumption is that it is better to build software that makes matches than to invest in assets.

Yet platform-based businesses have three inherent weaknesses that aren’t always immediately obvious. First, they lack barriers to entry, which makes it difficult to create a sustainable competitive advantage. Second, they tend to create “winner-take-all” markets, so for every fabulous success like Facebook, you can have thousands of failures. Finally, rabid competition leads to high costs.

The most important thing to understand about platforms is that they give us access to ecosystems of talent, technology and information and it is in those ecosystems where the greatest potential for value creation lies. That’s why, to become profitable, platform businesses eventually need to invest in real assets.

Consider Amazon: Almost two thirds of Amazon’s profits come from its cloud computing unit, AWS, which provides computing infrastructure for other organizations. More recently, it bought Whole Foods and began opening Amazon Go retail stores. The more you look, the less Amazon looks like a platform and the more it looks like a traditional pipeline business.

Reimagining Innovation for a World of Atoms

The truth is that the digital revolution, for all of the excitement and nifty gadgets it has produced, has been somewhat of a disappointment. Since personal computers first became available in the 1970s, we’ve had less than ten years of elevated productivity growth. Compare that to the 50-year boom in productivity created in the wake of electricity and internal combustion and it’s clear that digital technology falls short.

In a sense though, the lack of impact shouldn’t be that surprising. Even at this late stage, information and communication technologies make up only about 6% of GDP in advanced economies. Clearly, that’s not enough to swallow the world. As we have seen, it’s barely enough to make a dent.

Yet still, there is great potential in the other 94% of the economy and there may be brighter days ahead in using computing technology to drive advancement in the physical world. Exciting new fields, such as synthetic biology and materials science may very well revolutionize industries like manufacturing, healthcare, energy and agriculture.

So, we are now likely embarking on a new era of innovation that will be very different from the digital age. Rather than being focused on one technology, concentrated in one geographical area and dominated by a handful of industry giants, it will be widely dispersed and made up of a diverse group of interlocking ecosystems of talent, technology and information.

Make no mistake. The future will not be digital. Instead, we will need to learn how to integrate a diverse set of technologies to reimagine atoms in the physical world.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Technology Pushing Us into a New Ethical Universe

GUEST POST from Greg Satell

We take it for granted that we’re supposed to act ethically and, usually, that seems pretty simple. Don’t lie, cheat or steal, don’t hurt anybody on purpose and act with good intentions. In some professions, like law or medicine, the issues are somewhat more complex, and practitioners are trained to make good decisions.

Yet ethics in the more classical sense isn’t so much about doing what you know is right, but thinking seriously about what the right thing is. Unlike the classic “ten commandments” type of morality, there are many situations that arise in which determining the right action to take is far from obvious.

Today, as our technology becomes vastly more powerful and complex, ethical issues are increasingly rising to the fore. Over the next decade we will have to build some consensus on issues like what accountability a machine should have and to what extent we should alter the nature of life. The answers are far from clear-cut, but we desperately need to find them.

The Responsibility of Agency

For decades intellectuals have pondered an ethical dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.
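
To see what “explicitly encoded” means in practice, consider the following deliberately oversimplified sketch in Python. The outcome model and the harm weights are hypothetical, invented for illustration, and are not drawn from any real vehicle’s software; the point is that whatever weights actually ship become the ethical policy.

```python
# Hypothetical sketch: an ethical tradeoff written down as code.
# The Outcome model and the harm weights are invented for illustration only.

from dataclasses import dataclass

@dataclass
class Outcome:
    action: str               # e.g. "stay_course" or "swerve"
    passengers_at_risk: int
    pedestrians_at_risk: int

def choose_action(outcomes, passenger_weight=1.0, pedestrian_weight=1.0):
    # Someone has to pick these weights; whoever does is making the ethical call.
    def expected_harm(o):
        return (passenger_weight * o.passengers_at_risk
                + pedestrian_weight * o.pedestrians_at_risk)
    return min(outcomes, key=expected_harm)

candidates = [Outcome("stay_course", 0, 2), Outcome("swerve", 1, 0)]
print(choose_action(candidates))                        # swerve: risk one passenger, spare two pedestrians
print(choose_action(candidates, passenger_weight=3.0))  # weight passengers more heavily and the choice flips
```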

That’s just the start. It’s become increasingly clear that data bias can vastly distort decisions about everything from whether we are admitted to a school, get a job or even go to jail. Still, we’ve yet to achieve any real clarity about who should be held accountable for decisions an algorithm makes.

As we move forward, we need to give serious thought to the responsibility of agency. Who’s responsible for the decisions a machine makes? What should guide those decisions? What recourse should those affected by a machine’s decision have? These are no longer theoretical debates, but practical problems that need to be solved.

Evaluating Tradeoffs

“Now I am become Death, the destroyer of worlds,” said J. Robert Oppenheimer, quoting the Bhagavad Gita, upon witnessing the world’s first nuclear explosion as it shook the plains of New Mexico. It was clear that we had crossed a Rubicon. There was no turning back and Oppenheimer, as the leader of the project, felt an enormous sense of responsibility.

Yet the specter of nuclear Armageddon was only part of the story. In the decades that followed, nuclear medicine saved thousands, if not millions of lives. Mildly radioactive isotopes, which allow us to track molecules as they travel through a biological system, have also been a boon for medical research.

The truth is that every significant advancement has the potential for both harm and good. Consider CRISPR, the gene editing technology that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and Multiple Sclerosis, but also raises troubling issues such as biohacking and designer babies.

In the case of nuclear technology many scientists, including Oppenheimer, became activists. They actively engaged with the wider public, including politicians, intellectuals and the media to raise awareness about the very real dangers of nuclear technology and work towards practical solutions.

Today, we need similar engagement between people who create technology and the public square to explore the implications of technologies like AI and CRISPR, but it has scarcely begun. That’s a real problem.

Building A Consensus Based on Transparency

It’s easy to paint pictures of technology going haywire. However, when you take a closer look, the problem isn’t so much with technological advancement, but ourselves. For example, the recent scandals involving Facebook were not about issues inherent to social media websites, but had more to do with an appalling breach of trust and lack of transparency. The company has paid dearly for it and those costs will most likely continue to pile up.

It doesn’t have to be that way. Consider the case of Paul Berg, a pioneer in the creation of recombinant DNA, for which he won the Nobel Prize. Unlike Zuckerberg, he recognized the gravity of the Pandora’s box he had opened and convened the Asilomar Conference to discuss the dangers, which resulted in the Berg Letter that called for a moratorium on the riskiest experiments until the implications were better understood.

In her book, A Crack in Creation, Jennifer Doudna, who made the pivotal discovery for CRISPR gene editing, points out that a key aspect of the Asilomar conference was that it included not only scientists, but also lawyers, government officials and media. It was the dialogue between a diverse set of stakeholders, and the sense of transparency it produced, that helped the field advance.

The philosopher Martin Heidegger argued that technological advancement is a process of revealing and building. We can’t control what we reveal through exploration and discovery, but we can—and should—be wise about what we build. If you just “move fast and break things,” don’t be surprised if you break something important.

Meeting New Standards

In Homo Deus, Yuval Noah Harari writes that the best reason to learn history is “not in order to predict, but to free yourself of the past and imagine alternative destinies.” As we have already seen, when we rush into technologies like nuclear power, we create problems like Chernobyl and Fukushima and reduce technology’s potential.

The issues we will have to grasp over the next few decades will be far more complex and consequential than anything we have faced before. Nuclear technology, while horrifying in its potential for destruction, requires a tremendous amount of scientific expertise to produce. Even today, it remains confined to governments and large institutions.

New technologies, such as artificial intelligence and gene editing are far more accessible. Anybody with a modicum of expertise can go online and download powerful algorithms for free. High school kids can order CRISPR kits for a few hundred dollars and modify genes. We need to employ far better judgment than organizations like Facebook and Google have shown in the recent past.

Some seem to grasp this. Most of the major tech companies have joined with the ACLU, UNICEF and other stakeholders to form the Partnership on AI to create a forum that can develop sensible standards for artificial intelligence. Salesforce recently hired a Chief Ethical and Humane Use Officer. Jennifer Doudna has begun a similar process for CRISPR at the Innovative Genomics Institute.

These are important developments, but they are little more than first steps. We need a more public dialogue about the technologies we are building to achieve some kind of consensus on what the risks are and what we as a society are willing to accept. If not, the consequences, financial and otherwise, may be catastrophic.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Creating Change That Lasts

GUEST POST from Greg Satell

When Lou Gerstner took over at IBM in 1993, the century-old tech giant was in dire straits. Overtaken by nimbler upstarts, like Microsoft in software, Compaq in hardware and Intel in microprocessors, it was hemorrhaging money. Many believed that it needed to be broken up into smaller, more focused units in order to compete.

Yet Gerstner saw it differently and kept the company intact, which led to one of the most dramatic turnarounds in corporate history. Today, more than a quarter century later, while many of its former rivals have long since disappeared, IBM is still profitable and on the cutting edge of many of the most exciting technologies.

That success was no accident. In researching my book, Cascades, I studied not only business transformations, but many social and political movements as well. What I found is that while most change efforts fail, the relatively few that succeed follow a pattern that is amazingly consistent. If you want to create change that lasts, here’s what you need to do.

Build Trust Through Shared Values

When Mahatma Gandhi returned to India, he began to implement a strategy of civil disobedience similar to the one he had used so successfully in his campaigns in South Africa. He would later call this his Himalayan miscalculation. “Before a people could be fit for offering civil disobedience,” he later wrote, “they should thoroughly understand its deeper implications.”

One of the key tenets of transformation is that you can’t change fundamental behaviors without changing fundamental beliefs. So Gerstner, like Gandhi, first set out to change the culture within his organization. He saw that IBM had lost sight of its values. For example, the company had always valued competitiveness, but by the time he arrived much of that competitive energy was directed at fighting internal battles rather than in the marketplace.

“We needed to integrate as a team inside the company so that we could integrate for the customers on their premises,” Gerstner would later say. “It flew in the face of what everybody did in their careers before I arrived there. It meant that we would share technical plans, we would move toward common technical standards and plans, we would not have individual transfer pricing between every product so that everybody could get their little piece of the customers’ money.”

He pushed these values constantly, through personal conversations, company emails, in the press and at company meetings. As Irving Wladawsky-Berger, one of Gerstner’s chief lieutenants, told me, “Lou refocused us all on customers and listening to what they wanted and he did it by example. We started listening to customers more because he listened to customers.”

Create a Clear Vision for the Future

At his very first press conference, Gerstner declared: “the last thing IBM needs right now is a vision.” So it was ironic that he developed a vision for the company just months into his tenure. What he noticed was that the culture within IBM had degraded to such an extent that it was hard to align its business units around a coherent strategy.

Every change effort begins with a list of grievances. Sales are down, your industry is being disrupted or technology is passing you by. But until you are able to articulate a clear vision for how you want things to look in the future, any change is bound to be fleeting. For Gerstner at IBM, that vision was to put customers, rather than technology, at the center.

He started with a single keystone change, shifting IBM’s focus from its own “proprietary stack of technologies” to its customers’ “stack of business processes.” That focus on the customer was much more clear and tangible than simply “changing the culture.” It also would require multiple stakeholders to work together and pave the way for future change.

In my research, I found that every successful transformation, whether it was a political movement, a social movement or a business transformation, was able to identify a keystone change that paved the way for a larger vision. So if you want to bring about lasting transformation, that’s a great place to start.

Identify Support — And Opposition

Once Gerstner decided to focus his transformation strategy on IBM’s customers, he found that they were terrified at the prospect of the company failing or being broken up. They depended on IBM’s products to manage mission critical processes. They also needed a partner who could help them transition legacy technology to the Internet.

He also found that he could create new allies to support his mission. For example, IBM had a history of competing with application developers, but wasn’t making much money in the application business. So he started treating the application developers as true partners and gained their support.

Yet every significant change effort is bound to attract opposition as well. There will always be a certain faction that is so tied to the old ways of doing things that they will do whatever they can to undermine the transformation and IBM was no different. Some executives, for example, enjoyed the infighting and turf battles that had become the norm. Gerstner took a zero tolerance policy and even fired some senior executives who didn’t get with the program.

Compare that to Blockbuster Video. As I’ve noted before, the company actually devised a viable strategy to meet the Netflix threat but was unable to align internal stakeholders around that strategy.

Treat Transformation as a Journey, Not A Destination

Probably the most impressive thing about IBM’s turnaround in the 90s is how it has endured. Gerstner left the firm in 2002, and it has had its share of ups and downs since then, but it still rakes in billions in profit every year and continues to innovate in cutting edge areas such as blockchain and quantum computing.

“The Gerstner revolution wasn’t about technology or strategy, it was about transforming our values and our culture to be in greater harmony with the market,” Wladawsky-Berger told me. “Because the transformation was about values first and technology second, we were able to continue to embrace those values as the technology and marketplace continued to evolve.”

That’s what separates those who succeed from those who fail. You can’t bet your future on a particular strategy, program or tactic, because the future will always surprise us. It is how you align people behind a strategy, through forging shared values and building trust, that will determine whether change endures.

Perhaps most of all, you need to understand that transformation is always a journey, never a destination. Success is never a straight line. There will be ups and downs. But if you keep fighting for a better tomorrow, you will not only be able to bring about the change you seek, but the next ones after that as well.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

3 Things Politicians Can Do to Create Innovation

GUEST POST from Greg Satell

In the 1960s, the federal government accounted for more than 60% of all research funding, yet by 2016 that had fallen to just over 20%. During the same time, businesses’ share of R&D investment more than doubled from about 30% to almost 70%. Government’s role in US innovation, it seems, has greatly diminished.

Yet new research suggests that the opposite is actually true. Analyzing all patents since 1926, researchers found that the number of patents that relied on government support has risen from 12% in the 1980s to almost 30% today. Interestingly, the same research found that startups benefitted the most from government research.

As we struggle to improve productivity from historical lows, we need the public sector to play a part. The truth is that the government has a unique role to play in driving innovation and research is only part of it. In addition to funding labs and scientists, it can help bring new ideas to market, act as a convening force and offer crucial expertise to private businesses.

1. Treat Knowledge As A Public Good

By 1941, it had become clear that the war raging in Europe would soon envelop the US. With this in mind, Vannevar Bush went to President Roosevelt with a visionary idea — to mobilize the nation’s growing scientific prowess for the war effort. Roosevelt agreed and signed an executive order that would create the Office of Scientific Research and Development (OSRD).

With little time to build labs, the OSRD focused on awarding grants to private organizations such as universities. It was, by all accounts, an enormous success and led to important breakthroughs such as the atomic bomb, proximity fuze and radar. As the war was winding down, Roosevelt asked Bush to write a report on how to continue OSRD’s success in peacetime.

That report, titled Science, The Endless Frontier, was delivered to President Truman and would set the stage for America’s lasting technological dominance. It set forth a new vision in which scientific advancement would be treated as a public good, financed by the government, but made available for private industry. As Bush explained:

Basic research leads to new knowledge. It provides scientific capital. It creates the fund from which the practical applications of knowledge must be drawn. New products and new processes do not appear full-grown. They are founded on new principles and new conceptions, which in turn are painstakingly developed by research in the purest realms of science.

The influence of Bush’s idea cannot be overstated. It led to the creation of new government agencies, such as the National Science Foundation (NSF), the National Institutes of Health (NIH) and, later, the Defense Advanced Research Projects Agency (DARPA). These helped to create a scientific infrastructure that has no equal anywhere in the world.

2. Help to Overcome the Valley of Death

Government has a unique role to play in basic research. Because fundamental discoveries are, almost by definition, widely applicable, they are much more valuable if they are published openly. At the same time, because private firms have relatively narrow interests, they are less able to fully leverage basic discoveries.

However, many assume that because basic research is a primary role for public investment, it is its only relevant function. Clearly, that’s not the case. Another important role government has to play is helping to overcome the gap between the discovery of a new technology and its commercialization, which is so fraught with peril that it’s often called the “Valley of Death.”

The oldest and best known of these initiatives is the SBIR/STTR program, which is designed to help startups commercialize cutting-edge research. Grants are given in two phases. In the first, a proof-of-concept phase, grants are capped at $150,000. If that’s successful, up to $1 million more can be awarded. Some SBIR/STTR companies, such as Qualcomm, iRobot and Symantec, have become industry leaders.

Other more focused programs have also been established. ARPA-E focuses exclusively on advanced energy technologies. Lab Embedded Entrepreneurship Programs (LEEP) give entrepreneurs access to the facilities and expertise of the National Labs in addition to a small grant. The Manufacturing Extension Partnership (MEP) helps smaller companies build the skills they need to be globally competitive.

3. Act As a Convening Force

A third role government can play is that of a convening force. For example, in 1987 a non-profit consortium made up of government labs, research universities and private sector companies, called SEMATECH, was created to regain competitiveness in the semiconductor industry. America soon regained its lead, which continues even today.

The reason that SEMATECH was so successful was that it combined the scientific expertise of the country’s top labs with the private sector’s experience in solving real world problems. It also sent a strong signal that the federal government saw the technology as important, which encouraged private companies to step up their investment as well.

Today, a number of new initiatives have been launched that follow a similar model. The most wide-ranging is the Manufacturing USA Institutes, which are helping drive advancement in everything from robotics and photonics to biofabrication and composite materials. Others, such as JCESR and the Critical Materials Institute, are more narrowly focused.

Much like its role in supporting basic science and helping new technologies get through the “Valley of Death,” acting as a convening force is something that, for the most part, only the federal government can do.

Make No Mistake: This Is Our New Sputnik Moment

In the 20th century, three key technologies drove economic advancement: electricity, internal combustion and computing. The United States led each one. That is why it is often called the “American Century.” No country, perhaps since the Roman Empire, has ever so thoroughly dominated the known world.

Yet the 21st century will be different. The most important technologies will be things like synthetic biology, materials science and artificial intelligence. These are largely nascent and it’s still not clear who, if anybody, will emerge as a clear leader. It is very possible that we will compete economically and technologically with China, much like we used to compete politically and militarily with the Soviet Union.

Yet back in the Cold War, it was obvious that the public sector had an important role to play. When Kennedy vowed to go to the moon, nobody argued that the effort should be privatized. It was clear that such an enormous undertaking needed government leadership at the highest levels. We pulled together and we won.

Today, by all indications, we are at a new Sputnik moment in which our global scientific and technological leadership is being seriously challenged. We can respond with imagination, creating novel ways to, as Bush put it, “turn the wheels of private and public enterprise,” or we can let the moment pass us by and let the next generation face the consequences.

One thing is clear. We will be remembered for what we choose to do.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

4 Key Aspects of Robots Taking Our Jobs

GUEST POST from Greg Satell

A 2019 study by the Brookings Institution found that over 61% of jobs will be affected by automation. That comes on the heels of a 2017 report from the McKinsey Global Institute that found that 51% of total working hours and $2.7 trillion in wages are highly susceptible to automation and a 2013 Oxford study that found 47% of jobs will be replaced.

The future looks pretty grim indeed until you start looking at jobs that have already been automated. Fly-by-wire was introduced in 1968, but today we’re facing a massive pilot shortage. The number of bank tellers has doubled since ATMs were introduced. Overall, the US is facing a massive labor shortage.

In fact, although the workforce has doubled since 1970, labor participation rates have risen by more than 10% since then. Everywhere you look, as automation increases, so does the demand for skilled humans. So the challenge ahead isn’t so much finding work for humans as preparing humans to do the types of work that will be in demand in the years to come.

1. Automation Doesn’t Replace Jobs, It Replaces Tasks

To understand the disconnect between all the studies that seem to be predicting the elimination of jobs and the increasingly dire labor shortage, it helps to look a little deeper at what those studies are actually measuring. The truth is that they don’t actually look at the rate of jobs being created or lost, but tasks that are being automated. That’s something very different.

To understand why, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So, it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. Yet the number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

In a similar way we don’t expect bank tellers to process transactions anymore, but to help us with things that we can’t do at an ATM. As the retail sector becomes more automated, demand for e-commerce workers is booming. Go to a highly automated Apple Store and you’ll find far more workers than at a traditional store, but we expect them to do more than just ring us up.

2. When Tasks Become Automated, They Become Commoditized

Let’s think back to what a traditional bank looked like before ATMs or the Internet. In a typical branch, you would see a long row of tellers there to process deposits and withdrawals. Often, especially on Fridays when workers typically got paid, you would expect to see long lines of people waiting to be served.

In those days, tellers needed to process transactions quickly or the people waiting in line would get annoyed. Good service was fast service. If a bank had slow tellers, people would leave and go to one where the lines moved faster. So training tellers to process transactions efficiently was a key competitive trait.

Today, however, nobody waits in line at the bank because processing transactions is highly automated. Our paychecks are usually sent electronically. We can pay bills online and get cash from an ATM. What’s more, these aren’t considered competitive traits, but commodity services. We expect them as a basic requisite of doing business.

In the same way, we don’t expect real estate agents to find us a house or travel agents to book us a flight or find us a hotel room. These are things that we used to happily pay for, but today we expect something more.

3. When Things Become Commodities, Value Shifts Elsewhere

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Still, the twentieth century became an era of unprecedented prosperity.

We’re in the midst of a similar transformation today. Just as our ancestors toiled in the fields, many of us today spend much of our time doing rote, routine tasks. However, as two economists from MIT explain in a paper, the jobs of the future are not white collar or blue collar, but those focused on non-routine tasks, especially those that involve other humans.

Consider the case of bookstores. Clearly, by automating the book buying process, Amazon disrupted superstore book retailers like Barnes & Noble and Borders. Borders filed for bankruptcy in 2011 and was liquidated later that same year. Barnes & Noble managed to survive but has been declining for years.

Yet a study at Harvard Business School found that small independent bookstores are thriving by adding value elsewhere, such as providing community events, curating titles and offering personal recommendations to customers. These are things that are hard to do well at a big box retailer and virtually impossible to do online.

4. Value Is Shifting from Cognitive Skills to Social Skills

Twenty or thirty years ago, the world was very different. High value work generally involved retaining information and manipulating numbers. Perhaps not surprisingly, education and corporate training programs were focused on teaching those skills and people would build their careers on performing well on knowledge and quantitative tasks.

Today, however, an average teenager has more access to information and computing power than a typical large enterprise had a generation ago, so knowledge retention and quantitative ability have largely been automated and devalued. High value work has shifted from cognitive skills to social skills.

Consider that the journal Nature has found that the average scientific paper today has four times as many authors as one did in 1950, and the work they are doing is far more interdisciplinary and done at greater distances than in the past. So even in highly technical areas, the ability to communicate and collaborate effectively is becoming an important skill.

There are some things that a machine will never do. Machines will never strike out at a Little League game, have their hearts broken or see their children born. That makes it difficult, if not impossible, for machines to relate to humans as well as a human can. The future of work is humans collaborating with other humans to design work for machines.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

How Transformational Leaders Learn to Conquer Failure

GUEST POST from Greg Satell

When we think of great leaders their great successes usually come to mind. We picture Washington crossing the Delaware or Gandhi leading massive throngs or Steve Jobs standing triumphantly on stage. It is moments of triumph such as these that make indelible marks on history’s consciousness.

While researching my book, Cascades, however, what struck me most is how often successful change movements began with failure. It seems that those later, more triumphant moments can blind us to the struggles that come before. That can give us a mistaken view of what it takes to drive transformational change.

To be clear, these early and sometimes tragic failures are not simply the result of bad luck. Rather they happen because most new leaders are not ready to lead and make novice mistakes. The difference, I have found, between truly transformational leaders and those who fail isn’t so much innate talent or even ambition, but their ability to learn along the way.

A Himalayan Miscalculation

Today, we remember Mohandas Gandhi as the “Mahatma,” an iconic figure, superlatively wise and saintly in demeanor. His greatest triumph, the Salt March, remains an enduring symbol of the power of nonviolent activism, which has inspired generations to work constructively toward positive change in the world.

What many overlook, however, is that ten years before that historic event Gandhi embarked on a similar effort that would fail so tragically he would come to regard it as his Himalayan miscalculation. It was, in fact, what he learned from the earlier failure that helped make the Salt March such a remarkable success.

In 1919, he called for a nationwide series of strikes and boycotts to protest against unjust laws, called the Rowlatt Acts, passed by the British Raj. These protests were successful at first, but soon spun wildly out of control and eventually led to the massacre at Amritsar, in which British soldiers left hundreds dead and more than a thousand wounded.

Most people would have simply concluded that the British were far too cruel and brutal to be dealt with peacefully. Yet Gandhi realized that he had not sufficiently indoctrinated the protestors in his philosophy of Satyagraha. So he spent the next decade creating a dedicated cadre of devoted and disciplined followers.

When the opportunity arose again in 1930 Gandhi would not call for nationwide protests, but set out on the Salt March with 70 or 80 of his closest disciples. Their nonviolent discipline inspired the nation and the world. That’s what led to Gandhi’s ultimate victory, Indian independence, in 1947.

Learning To Overthrow a Dictator

If you looked at Serbia in 1999, you probably wouldn’t have noticed anything amiss. The country was ruled, as it had been for a decade, by Slobodan Milošević, whose power was nearly absolute. There was no meaningful political opposition or even an active protest movement. Milošević, it seemed, would be ruler for life.

Yet just a year later he was voted out of power. When he tried to steal the election, massive protests broke out and, when he lost the support of the military and security services, he was forced to concede. Two years later, he was put on trial at The Hague for crimes against humanity. He would die in his prison cell in 2006, before a verdict was reached.

However, the success of these protests was the product of earlier failures. There were student protests in 1992 that, much like the “Occupy” protests later in the US, quickly dissipated with little to show for the effort. Later the Zajedno (together) opposition coalition had some initial success, but then fell apart into disunity.

In 1998, veterans of both protests met in a coffee shop. They reflected on past failures and were determined not to repeat the same mistakes. Instead of looking for immediate results, they would use what they learned about organizing protests to build a massive networked organization, called Otpor, that would transcend political factions.

They had learned that if they could mobilize the public they could beat Milošević at the polls and that, just like in 1996, he would deny the results. However, this time they would be prepared. Instead of disorganized protests, the regime faced an organization of 70,000 trained activists who inspired the nation and brought down a dictator.

A Wunderkind’s Fall from Grace

There is probably no business leader in history more iconic than Steve Jobs. We remember him not only for the incredible products he created, but the mastery with which he marketed them. Apple’s product launches became vastly more than mere business events, but almost cultural celebrations of expanding the limits of possibility.

What most people fail to realize about Steve Jobs, however, is how much he changed over the course of his career. Getting fired from Apple, the company he founded, was an excruciatingly traumatic experience. It forced him to come to terms with some of the more destructive parts of his personality.

While the Macintosh is rightfully seen today as a pathbreaking product, most people forget that, initially at least, it wasn’t profitable. After leaving Apple he started NeXT Computer which, although hailed for its design, also flopped. Along the way he bought Pixar, which struggled for years before finally becoming successful.

When Jobs returned to Apple in 1997, he was a very different leader, more open to taking in the ideas of others. Although he became enamored with iMovie, his team convinced him that digital music was a better bet, and the iPod became the new Apple's first big hit. Later, even though he was dead set against allowing outside developers to create software for the iPhone, he eventually relented and created the App Store.

Before You Can Change the World, You First Must Change Yourself

We tend to look back at transformational leaders and see greatness in them from the start. The truth is that lots of people have elements of greatness in them, but never amount to much. It is the ability to overcome our tragic flaws that makes the difference between outsized achievement and mediocrity.

When Gandhi began his career as a lawyer, he was so shy that he couldn't speak up in court. Before the founders of Otpor became leaders of a massive movement, they were just kids who wanted to party and listen to rock and roll. Steve Jobs was always talented, but he was so difficult to deal with that even his allies on Apple's board knew he needed to go.

Most people never overcome their flaws. Instead, they make accommodations for them. It would have been easy for Gandhi to blame the British for his "Himalayan Miscalculation," just as it would have been easy for the Otpor founders to blame Milošević for their struggles and for Jobs to keep tilting at windmills, but they didn't. They found the capacity to change.

We all have our talents, but innate ability will only take you so far. In the final analysis, what makes transformational leaders different is their ability to transform themselves to suit the needs of their mission.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


We Must Prepare for Future Crises Like We Prepare for War

We Must Prepare for Future Crises Like We Prepare for War

GUEST POST from Greg Satell

In a 2015 TED talk, Bill Gates warned that “if anything kills ten million people in the next few decades, it’s most likely to be a highly infectious virus rather than a war. Not missiles, but microbes.” He went on to point out that we have invested enormous amounts of money in nuclear deterrents, but relatively little to battle epidemics.

It’s an apt point. In the US, we enthusiastically spend nearly $700 billion on our military, but cut corners on nearly everything else. Major breakthroughs, such as GPS satellites, the Internet and transistors, are merely offshoots of budgets intended to help us fight wars more effectively. At the same time, politicians gleefully propose budget cuts to the NIH.

A crisis, in one sense, is like anything else. It eventually ends and, when it does, we hope to be wiser for it. No one knows how long this epidemic will last or what the impact will be, but one thing is for sure — it will not be our last crisis. We should treat this as a new Sputnik moment and prepare for the next crisis with the same vigor with which we prepare for war.

Getting Artificial Intelligence Under Control

In the Terminator series, an automated defense system called Skynet becomes "self-aware" and launches a nuclear attack to end humanity. Machines called "cyborgs" are created to hunt down the survivors. Clearly, it is an apocalyptic vision: not completely out of the realm of possibility, but very unlikely.

The dangers of artificial intelligence, however, are very real, although not nearly so dramatic. Four years ago, in 2016, I published an article in Harvard Business Review outlining the ethical issues we need to address, ranging from long-standing thought experiments like the trolley problem to issues surrounding accountability for automated decisions.

Unlike the Terminator scenario, these issues are clear and present. Consider the problem of data bias. Increasingly, algorithms determine what college we attend, whether we get hired for a job and even who goes to prison and for how long. Unlike human decisions, these mathematical models are rarely questioned, yet they materially affect people's lives.

The truth is that we need our algorithms to be explainable, auditable and transparent. Just because the possibility of our machines turning on us is fairly remote doesn't mean we don't need to address more subtle, but all too real, dangers. We should build our systems to serve humanity, not the other way around.
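
To make "auditable" a bit more concrete, here is a minimal, purely illustrative sketch of one common check: comparing a model's selection rates across groups and flagging disparate impact with the well-known four-fifths heuristic. The data, group labels and threshold below are hypothetical; a real audit would be far more involved and would also cover explainability and data provenance.

```python
from collections import defaultdict

def selection_rates(decisions):
    """Share of positive outcomes (hired, admitted, approved) per group."""
    totals, positives = defaultdict(int), defaultdict(int)
    for group, outcome in decisions:
        totals[group] += 1
        if outcome:
            positives[group] += 1
    return {group: positives[group] / totals[group] for group in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag groups whose selection rate falls below `threshold` times the
    highest group's rate (the common "four-fifths" heuristic)."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {group: rate / best < threshold for group, rate in rates.items()}

# Hypothetical audit log of (group, decision) pairs taken from a model's output
audit_log = [("A", True), ("A", True), ("A", False),
             ("B", True), ("B", False), ("B", False)]

print(selection_rates(audit_log))         # group A ~0.67, group B ~0.33
print(disparate_impact_flags(audit_log))  # group B flagged for review
```

The point is not this particular metric, but that the decisions an automated system logs can, and should, be routinely examined in exactly this way.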

The Slow-Moving Climate Crisis

Climate change is an issue that seems distant and political. To most people, everyday needs like driving to work, heating their homes and doing household chores are much more top of mind than the abstract dangers of a warming planet. Yet the perils of climate change are, in fact, very clear and present.

Consider that the National Oceanic and Atmospheric Administration has found that, since 1980, the United States has experienced at least 258 weather and climate disasters in which overall damages reached or exceeded $1 billion, and that the total cost of these events has been more than $1.7 trillion. That's an enormous amount of money.

Yet it pales in comparison to what we can expect in the future. A 2018 climate assessment published by the US government warned that we can expect climate change to “increasingly affect our trade and economy, including import and export prices and U.S. businesses with overseas operations and supply chains,” and had similar concerns with regard to our health, safety and quality of life.

There have been, of course, some efforts to slow the buildup of carbon in our atmosphere that causes climate change, such as the Paris Climate Agreement. However, these efforts are merely down payments on stemming the crisis and, in any case, few countries are actually meeting their Paris targets. The US pulled out of the accord entirely.

The Debt Time Bomb

The US national debt today stands at about $23.5 trillion, or roughly 110% of GDP. That's a very large, but not catastrophic, number. The deficit in 2020 was expected to be roughly $1 trillion, or about four percent of GDP, but with the impact of the Coronavirus, we can expect it to be at least two to three times that now.

Considering that the economy of the United States grows at about two percent a year on average, any deficit above that level is unsustainable. Clearly, we are far beyond that now and, with baby boomers beginning to retire in massive numbers, Medicare spending is set to explode. At some point, these bills will have to be paid.
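
As a rough sketch of the arithmetic behind that claim, using only the figures above (a debt ratio of about 110% of GDP, roughly 2% average growth and a pre-pandemic deficit of about 4% of GDP) and the standard back-of-the-envelope approximation for debt dynamics, which ignores inflation and interest-rate effects:

\[
\Delta\left(\frac{\text{Debt}}{\text{GDP}}\right) \approx \frac{\text{Deficit}}{\text{GDP}} - g \times \frac{\text{Debt}}{\text{GDP}} \approx 0.04 - 0.02 \times 1.10 \approx 0.018
\]

In other words, even the pre-Coronavirus deficit would push the debt ratio up by nearly two percentage points of GDP every year, and the pandemic-era deficits would push it up several times faster.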

Yet focusing solely on financial debt misses a big part of the picture. Not only have we been overspending and under-taxing, we've also been massively underinvesting. Consider that the American Society of Civil Engineers has estimated that we need to spend $4.5 trillion to repair our broken infrastructure. Add that infrastructure debt to our financial and environmental debt and it likely adds up to $30-$40 trillion, or roughly 150%-200% of GDP.

Much like the dangers of artificial intelligence and the climate crisis, not to mention the other inevitable crises like the new pandemics that are sure to come, we will eventually have to pay our debts. The only question is how long we want to allow the interest to pile up.

The Visceral Abstract

Some years ago, I wrote about a concept I called the visceral abstract. We often fail to realize how obscure concepts affect our daily lives. The strange theories of quantum mechanics, for example, make modern electronics possible. Einstein’s relativity helps calibrate our GPS satellites. Darwin’s natural selection helps us understand diseases like the Coronavirus.

In much the same way, we find it easy to ignore dangers that don't seem clear and present. The prospect of Terminator machines hunting us down in the streets is terrifying, but the very real dangers of data bias in our artificial intelligence systems are easy to dismiss. We worry about how to pay the mortgage next month, while the other debts mounting around us fade into the background.

The news isn’t all bad, of course. Clearly, the Internet has made it far easier to cope with social distancing. Technologies such as gene sequencing and supercomputing simulations make it more likely that we will find a cure or a vaccine. We have the capacity for both petty foolishness and extreme brilliance.

The future is not inevitable. It is what we make it. We can choose, as we have in the past, to invest in our ability to withstand crises and mitigate their effects, or we can choose to sit idly by and give ourselves up to the whims of fate. We pay the price either way. How we pay it is up to us.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Leading Your Way Through Crisis

Leading Your Way Through Crisis

GUEST POST from Greg Satell

There’s a passage in Ernest Hemingway’s 1926 novel, The Sun Also Rises, in which a character is asked how he went bankrupt. “Two ways,” he answers. “Gradually, then suddenly.” The quote has since become emblematic of how a crisis takes shape: first with small signs you hardly notice, and then with shocking impact.

That’s certainly how it felt to me in November 2008, when I was leading a media company in Kyiv. By that time, the financial crisis was going full throttle, although things had been relatively calm in our market. Ukraine had been growing briskly in recent years and, while we expected a slowdown, we didn’t expect a crash.

Those illusions were soon shattered. Ad sales in Ukraine would eventually fall by a catastrophic 85%, while overall GDP would be down 14%. It was, to say the least, the worst business crisis I had ever encountered. In many ways, our business never really recovered, but the lessons I learned while managing through it will last a lifetime.

Build Trust Through Candor and Transparency

Our October revenues had come through fairly strong, so we were reasonably confident in our ability to weather the crisis. That all changed in November though, when ad sales, our primary source of revenue, dropped precipitously. By mid-November it had become clear that we were going to have to take drastic measures.

One of the first things that happens in a crisis is that the rumor mill goes into high gear. As if the real news weren’t bad enough, unimaginably crazy stories start getting passed around. To make matters worse, the facts were moving so fast that I didn’t have a clear picture of what the reality actually was, so I couldn’t offer much in the way of consolation.

Yet what I could do was offer clarity and transparency. I called my senior team into an emergency meeting and told them, “This is bad. Really bad. And to be honest I’m not sure where we stand right now. One thing that I can assure you all of though is this: Like everything else, eventually this crisis will end and, when it does, you are going to want to look back at how you acted and you are going to want to be proud.”

A good number of those in the room that day have since told me how much that meeting meant to them. I wasn’t able to offer much in the way of substance, or even consolation for that matter. What I was able to do was establish a standard of candor and transparency, which made trust possible. That became an essential asset moving forward.

Create An Imperfect Plan

Creating an atmosphere of transparency and trust is essential, but you also have to move quickly to action. In our case, that meant restructuring the entire company over the next 36 hours in order to bring our costs somewhat back in line with revenues. We weren’t even close to having a plan for the long term; this was about survival.

We still, however, wanted to limit the damage. Although we were eliminating some businesses entirely, we recognized that some of our best talent worked in those businesses. So to lay people off indiscriminately would be a mistake. We wanted to keep our top performers and place them where they could have the most impact.

Over the next day and a half, we had a seemingly never-ending and excruciating series of meetings in which we decided who would stay and who would go, and where we could increase efficiency by combining functions and leveraging our scale. Our goal was to do more than just survive; it was to position ourselves to be more competitive in the future.

The plan we created in that short period of time was by no means perfect. I had to make decisions based on poor information in a very compressed time frame. Certainly mistakes were made. But within 36 hours we had a plan to move forward and a committed team that, in many ways, welcomed the distraction of focusing on the task in front of them.

Look for Dead Sea Markets

In their 2005 book, W. Chan Kim and Renée Mauborgne popularized the notion of a Blue Ocean Strategy, which focuses on new markets rather than fighting it out in a “red ocean” filled with rabid competition. As MIT Professor David Robertson has described, however, sometimes markets are neither a red nor a blue ocean, but more like a dead sea, which kills off existing life but provides a new ecosystem in which different organisms can thrive.

He gave the example of LEGO’s Discovery Centers, which have capitalized on the abrupt shift in the economics of mall space. A typical location is set up in an empty department store and features miniature versions of some of the same attractions that can be found at the toy giant’s amusement parks. The strategy leverages the fact that many mall owners desperately need to fill the space.

We found something similar during the Ukraine economic collapse of 2009. Because the country was a major outsourcing center for web developers, demand for those with technical talent actually increased. Many of our weaker competitors were unable to retain their staff, which gave us an opportunity to launch several niche digital brands even while we were cutting back in other parts of our business.

Every crisis changes economic relationships and throws pricing out of whack. In some cases that turns cheap commodities, such as Lysol and hand sanitizer amid the Coronavirus pandemic, into products in high demand. In other cases, however, it makes both assets and market share surprisingly affordable. That can create great opportunities.

Prepare for the Next Crisis

By the fall of 2009, our company was financially stable and things were returning to some form of normalcy. We had a strong management team, a portfolio of leading products and our survival was no longer seriously in question. However, I was exhausted and decided to leave to pursue other opportunities.

The founder, who had started the company almost 15 years before, was as exhausted as I was and was ready to sell. Given our highly politically sensitive portfolio of news brands, I urged him to seek a deal with a multinational firm. However, for various reasons, he decided to go with a local group led by Petro Poroshenko and Boris Lozhkin.

In my book Cascades, I describe what happened next. Due to the hard-hitting coverage of our news journalists, the company came under pressure from the oppressive Yanukovych regime. In 2013, the new owners were forced to sell the company to an ally of the Ukrainian President. A few months later, the Euromaidan protests broke out and Yanukovych was driven from power. Later, Poroshenko was elected President and named Lozhkin as his Chief of Staff.

I still keep in touch with a core group of my former colleagues. Many have started families or new businesses. Quite a few have moved to different countries. Yet we all share the bond of working through the crucible of crisis together, some pride in what we achieved and the satisfaction that, when it was called for, we gave it our honest best.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
