Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Driving Change Forward Requires a Shared Purpose

GUEST POST from Greg Satell

On September 12, 1962, President Kennedy addressed the nation from Rice University. “We choose to go to the moon,” he said. “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills.”

The speech galvanized the country into one of the largest collective efforts in history, involving politicians, scientists, engineers and the general public in achieving that goal. Perhaps even more importantly, it imbued the country with a sense of shared purpose that carried over into our business, personal and community life.

Today, that sense of shared purpose is much harder to achieve. Our societies are more diverse and we no longer expect to spend an entire career at a single company, or even in a single industry. That’s why the most essential element of a leader’s job today isn’t so much to plan and direct action as to inspire and empower belief in a common mission.

Start with Shared Identity

When Lou Gerstner first arrived at IBM, the company was going bankrupt. He quickly identified the root of the problem: infighting. “Units competed with each other, hid things from each other,” he would later write. “Huge staffs spent countless hours debating and managing transfer pricing terms between IBM units instead of facilitating a seamless transfer of products to customers.”

The problem is a common one. General Stanley McChrystal experienced something similar in Iraq. As he described in Team of Teams, his forces were split into competing tribes, such as Navy SEALS, Army Special Forces, Night Stalker helicopter pilots, and others, each competing with everyone else for resources.

We naturally tend to form groups based on identity. For example, in one study in which adults were randomly assigned to groups of “leopards” and “tigers,” fMRI scans showed hostility toward outgroup members. Similar results were found in a study involving five-year-old children and even in infants. So, to a certain extent, tribalism is unavoidable.

It can also be positive. Under Gerstner, IBM employees continued to take pride in their units, just as McChrystal’s commando teams continued to build an esprit de corps. Yet those leaders, and President Kennedy as well, expanded those tribes to include a second, larger identity: as IBMers, as warriors in the fight against terrorism and as Americans, respectively.

Anchor Shared Identity with Shared Values

Shared identity is the first step to building a true sense of shared purpose, but without shared values, shared identity is meaningless. We can, as in the study mentioned above, designate ourselves “leopards” or “tigers,” but that is a fairly arbitrary distinction. It may be enough to generate hostility toward outsiders, but not enough to create a genuine team dynamic.

In the 1950s there were a number of groups opposed to Apartheid in South Africa. Even though they shared common goals, they were unable to work together effectively. That began to change with the Congress of the People, a multi-racial gathering which produced a statement of shared values that came to be known as the Freedom Charter.

Nelson Mandela would later say that the Freedom Charter would have been very different if his organization, the African National Congress (ANC), had written it by itself, but it wouldn’t have been nearly as powerful. It not only gave anti-Apartheid groups a basis for collective action; by making its values explicit, it also gave those outside of South Africa who shared those values a foundation for joining in the anti-Apartheid purpose.

Perhaps most importantly, the Freedom Charter imposed costs and constraints on the anti-Apartheid movement. By committing itself to a multi-racial movement the African National Congress lost some freedom of action. However, constraining itself in that way was in itself a powerful argument for the viability of a multi-racial society in South Africa.

One of the most powerful moments in our Transformation and Change Workshops is when people make the shift from differentiating values, such as the black nationalism that Mandela favored as a young man, to shared values, such as equal rights under the law that the Freedom Charter called for. Of course, you can be a black nationalist and also support equal rights, but it is through shared values that your change effort will grow.

Engaging in Shared Action

Shared identity and shared values are both essential elements of shared purpose, but they are still not sufficient. To create a true sense of a common mission, you need to instill bonds of trust, and that can only be done through engaging in shared action. Consider a study done in the 1950s, called the Robbers Cave Experiment, which involved 22 boys of similar religious, racial and economic backgrounds invited to spend a few weeks at a summer camp.

In the first phase, they were separated into two groups, the “Rattlers” and the “Eagles,” that had little contact with each other. As each group formed its own identity, they began to display hostility on the rare occasions when they were together. During the second phase, the two groups were given competitive tasks and tensions boiled over, with the groups name-calling, sabotaging each other’s efforts and violently attacking one another.

In the third phase, the researchers attempted to reduce tensions. At first, they merely brought the boys into friendly contact, with little effect. The boys just sneered at each other. However, when they were tricked into challenging tasks where they were forced to work together in order to succeed, the tenor changed quickly. By the end of the camp, the two groups had fallen into a friendly camaraderie.

In much the same way, President Kennedy’s Moonshot wasn’t some obscure project undertaken in a secret lab, but involved 400,000 people and was followed on TV by millions more. The Congress of the People wasn’t important just for the document that it produced, but because of the bonds forged in the process. General McChrystal didn’t just preach collaboration, but made it necessary by embedding his personnel in each other’s units.

Becoming a Transformational Leader

Times like these strain any organization. The Covid-19 crisis alone forces enterprises to change. Put racial and political tensions on top and you can quickly have a powder keg waiting to explode. On the other hand, much like the boys in the “Robbers Cave” experiment, common struggle can serve to build common bonds.

When President Kennedy gave his famous speech in 1962, the outlook didn’t look very bright. The launch of the Soviet satellite Sputnik in 1957 had put America on its heels. Kennedy’s disastrous failure at the Bay of Pigs was only compounded by his humiliation at the hands of Khrushchev in Vienna.

Yet instead of buckling under the pressure, Kennedy had the grit and imagination to conceive a new project that would “serve to organize and measure the best of our energies and skills.” He pledged that we would go to the moon before the decade was out and we did, putting America back on top of the world and imbuing the country with a sense of pride and ambition.

We can do the same. The Covid pandemic, while tragic, gives us the opportunity to reimagine healthcare and fix a broken system. The racial tensions that George Floyd’s murder exposed have the potential to help us build a new racial consciousness. Revolutions do not begin with a slogan, they begin with a cause.

That’s what makes transformational leaders different. Where others see calamity, they see potential for change.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash


Proof Innovation Takes More Than Genius

GUEST POST from Greg Satell

It’s easy to look at someone like Steve Jobs or Elon Musk and imagine that their success was inevitable. Their accomplishments are so out of the ordinary that it just seems impossible that they could have ever been anything other than successful. You get the sense that whatever obstacles they encountered, they would overcome.

Yet it isn’t that hard to imagine a different path. If, for example, Jobs had remained in Homs, Syria, where he was conceived, it’s hard to see how he would have ever become a technology entrepreneur at all, much less a global icon. If Apartheid had never ended, Musk’s path to Silicon Valley would have been much less likely as well.

The truth is that genius can be exceptionally fragile. Making a breakthrough takes more than talent. It requires a mixture of talent, luck and an ecosystem of support to mold an idea into something transformative. In fact, in my research on great innovators, what’s amazed me the most is how often they almost drifted into obscurity. Who knows how many we have lost?

The One That Nearly Slipped Away

On a January morning in 1913, the eminent mathematician G.H. Hardy opened his mail to find a letter written in almost indecipherable scrawl from a destitute young man in India named Srinivasa Ramanujan. It began inauspiciously:

I beg to introduce myself to you as a clerk in the Accounts Department of the Port Trust Office at Madras on a salary of £ 20 per annum. I am now about 23 years of age. I have had no university education but I have undergone the ordinary school. I have been employing the spare time at my disposal to work at Mathematics.

Inside he found what looked like mathematical nonsense, using strange notation and advancing theorems that “scarcely seemed possible.” It was almost impossible to understand, except for a small section that refuted one of Hardy’s own conjectures made just months before. Assuming some sort of strange prank, he threw it in the wastebasket.

Throughout the day, however, Hardy found the ideas in the paper gnawing at him and he retrieved the letter. That night, he took it over to the home of his longtime collaborator, J.E. Littlewood. By midnight, they realized that they had just discovered one of the greatest mathematical talents the world had ever seen.

They invited him to Cambridge, where together they revolutionized number theory. Although Ramanujan’s work was abstract, it has made serious contributions to fields ranging from crystallography to string theory. Even now, almost a century later, his notebooks continue to be widely studied by mathematicians looking to glean new insights.

A Distraught Young Graduate

Near the turn of the 20th century, the son of a well-to-do industrialist, recently graduated from university, found himself poorly married with a young child and unemployed. He fell into a deep depression, became nearly suicidal and wrote to his sister in a letter:

What depresses me most is the misfortune of my poor parents who have not had a happy moment for so many years. What further hurts me deeply is that as an adult man, I have to look on without being able to do anything. I am nothing but a burden to my family…It would be better off if I were not alive at all.

His father would pass away a few years later. By that time, the young Albert Einstein did find work as a lowly government clerk. Soon after, in 1905, he unleashed four papers in quick succession that would change the world. It was an accomplishment so remarkable that it is now referred to as his miracle year.

It would still be another seven years before Einstein finally got a job as a university professor. It wasn’t until after 1919, when a solar eclipse confirmed his oddball theory, that he became the world-famous icon we know today.

The Medical Breakthrough That Almost Never Happened

Jim Allison spent most of his life as a fairly ordinary bench scientist and that’s all he really wanted to be. He told me once that he “just liked figuring things out” and by doing so, he gained some level of prominence in the field of immunology, making discoveries that were primarily of interest to other immunologists.

His path diverged when he began to research the ability of our immune system to fight cancer. Using a novel approach, he was able to show amazing results in mice. “The tumors just melted away,” he told me. Excited, he practically ran to tell pharmaceutical companies about his idea and get them to invest in his research.

Unfortunately, they were not impressed. The problem wasn’t that they didn’t understand Jim’s idea, but that they had already invested — and lost — billions of dollars on similar ideas. Hundreds of trials had been undertaken on immunological approaches to cancer and there hadn’t been one real success.

Nonetheless, Jim persevered. He collected more data, pounded the pavement and made his case. It took three years, but he eventually got a small biotech company to invest in his idea and cancer immunotherapy is now considered to be a miracle cure. Tens of thousands of people are alive today because Jim had the courage and grit to stick it out.

Genius Can Come From Anywhere

These are all, in the end, mostly happy stories. Ramanujan did not die in obscurity, but is recognized as one of the great mathematical minds in history. Einstein did not succumb to despair, but became almost synonymous with genius. Jim Allison won the Nobel Prize for his work in 2018.

Yet it is easy to see how it all could have turned out differently. Ramanujan sent out letters to three mathematicians in England. The other two ignored him (and Hardy almost did). Einstein’s job at the patent office was almost uniquely suited to his mode of thinking, giving him time to daydream and pursue thought experiments. Dozens of firms passed on Allison’s idea before he found one that would back him.

We’d like to think that today, with all of our digital connectivity and search capability, that we’d be much better at finding and nurturing genius, but there are indications the opposite may be true. It’s easy to imagine the next Ramanujan pulled from his parents at a border camp. With increased rates of depression and suicide in America, the next Einstein is probably more likely to succumb.

The most important thing to understand about innovation is that it is something that people do. The truth is that a mind is a fragile thing. It needs to be nurtured and supported. That’s just as true for an extraordinary mind as it is for a normal, everyday mind capable of normal, everyday accomplishments. When we talk about innovation and how to improve it, that seems to me to be a good place to start.

— Article courtesy of the Digital Tonto blog
— Image credit: MisterInnovation.com (Pixabay)


Four Lessons Learned from the Digital Revolution

GUEST POST from Greg Satell

When Steve Jobs was trying to lure John Sculley from Pepsi to Apple in 1982, he asked him, “Do you want to sell sugar water for the rest of your life, or do you want to come with me and change the world?” The ploy worked and Sculley became the first major CEO of a conventional company to join a hot Silicon Valley startup.

It seems so quaint today, in the midst of a global pandemic, that a young entrepreneur selling what was essentially a glorified word processor thought he was changing the world. The truth is that the digital revolution, despite all the hype, has been something of a disappointment. Certainly it failed to usher in the “new economy” that many expected.

Yet what is also becoming clear is that the shortcomings have less to do with the technology itself (in fact, the Covid-19 crisis has shown just how amazingly useful digital technology can be) than with ourselves. We expected technology and markets to do all the work for us. Today, as we embark on a new era of innovation, we need to reflect on what we have learned.

1. We Live In a World of Atoms, Not Bits

In 1996, as the dotcom boom was heating up, the economist W. Brian Arthur published an article in Harvard Business Review that signaled a massive shift in how we view the economy. While traditionally markets are made up of firms that faced diminishing returns, Arthur explained that information-based businesses can enjoy increasing returns.

More specifically, Arthur spelled out that if a business had high up-front costs, network effects and the ability to lock in customers it could enjoy increasing returns. That, in turn, would mean that information-based businesses would compete in winner-take-all markets, management would need to become less hierarchical and that investing heavily to win market share early could become a winning strategy.
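
To make “increasing returns” concrete, here is a minimal sketch in Python (a hypothetical toy model, not Arthur’s own formulation): each new customer chooses between two platforms with a preference that strengthens as adoption grows, and small, essentially random early leads tend to harden into winner-take-all outcomes.

```python
import random

def simulate_market(steps=10_000, feedback=2.0, seed=0):
    """Toy urn-style simulation of increasing returns.

    Each new customer picks between two platforms with probability
    proportional to (current adopters ** feedback). With feedback > 1,
    the appeal of joining grows faster than linearly in adoption, a crude
    stand-in for network effects plus lock-in, so one platform tends to
    run away with the market.
    """
    random.seed(seed)
    adopters = {"A": 1.0, "B": 1.0}  # both platforms start with one adopter
    for _ in range(steps):
        weight_a = adopters["A"] ** feedback
        weight_b = adopters["B"] ** feedback
        pick = "A" if random.random() < weight_a / (weight_a + weight_b) else "B"
        adopters[pick] += 1
    total = adopters["A"] + adopters["B"]
    return {name: count / total for name, count in adopters.items()}

# Re-run the same toy market with different random histories: tiny early
# leads usually snowball into near-total dominance for one side or the other.
for seed in range(5):
    print(simulate_market(seed=seed))
```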

Arthur’s article was, in many ways, prescient, and before long investors were committing enormous amounts of money to companies without real businesses in the hopes that just a few of these bets would hit it big. In 2011, Marc Andreessen predicted that software would eat the world.

He was wrong. As the recent debacle at WeWork, as well as massive devaluations at firms like Uber, Lyft and Peloton, shows, there is a limit to increasing returns for the simple reason that we live in a world of atoms, not bits. Even today, information and communication technologies make up only 6% of GDP in OECD countries. Obviously, most of our fate rests with the other 94%.

The Covid-19 crisis bears this out. Sure, being able to binge watch on Netflix and attend meetings on Zoom is enormously helpful, but to solve the crisis we need a vaccine. To do that, digital technology isn’t enough. We need to combine it with synthetic biology to make a real world impact.

2. Businesses Do Not Self Regulate

The case Steve Jobs made to John Sculley was predicated on the assumption that digital technology was fundamentally different from the sugar-water sellers of the world. The Silicon Valley ethos (or conceit as the case may be), was that while traditional businesses were motivated purely by greed, technology businesses answered to a higher calling.

This was no accident. As Arthur pointed out in his 1996 article, while atom-based businesses thrived on predictability and control, knowledge-based businesses facing winner-take-all markets are constantly in search of the “next big thing.” So teams that could operate like mission-oriented “commando units” on a holy quest would have a competitive advantage.

Companies like Google, which vowed not to “be evil,” could attract exactly the type of technology “commandos” that Arthur described. They would, as Mark Zuckerberg put it, “move fast and break things,” but they would also be more likely to hit on that unpredictable piece of code that would lead to massively increasing returns.

Unfortunately, as we have seen, businesses do not self-regulate. Knowledge-based businesses like Google and Facebook have proven to be every bit as greedy as their atom-based brethren. Privacy legislation, such as GDPR, is a good first step, but we will need far more than that, especially as we move into post-digital technologies that are far more powerful.

Still, we’re not powerless. Consider the work of Stop Hate For Profit, a broad coalition that includes the Anti-Defamation League and the NAACP, which has led to an advertiser boycott of Facebook. We can demand that corporations behave as we want them to, not merely accept whatever the market will bear.

3. As Our Technology Becomes More Powerful, Ethics Matter More Than Ever

Over the past several years some of the sense of wonder and possibility surrounding digital technology gave way to no small amount of fear and loathing. Scandals like the one involving Facebook and Cambridge Analytica not only alerted us to how our privacy is being violated, but also to how our democracy has been put at risk.

Yet privacy breaches are just the beginning of our problems. Consider artificial intelligence, which exposes us to a number of ethical challenges, ranging from inherent bias to life-and-death dilemmas such as the trolley problem. It is imperative that we learn to create algorithms that are auditable, explainable and transparent.

Or consider CRISPR, the gene editing technology, available for just a few hundred dollars, that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies. Worried about some hacker cooking up a harmful computer virus? What about a terrorist cooking up a real one?

That’s just the start. As quantum and neuromorphic computing become commercially available, most likely within a decade or so, our technology will become exponentially more powerful and the risks will increase accordingly. Clearly, we can no longer just “move fast and break things,” or we’re bound to break something important.

4. We Need a New Way to Evaluate Success

By some measures, we’ve been doing fairly well over the past ten years. GDP has hovered around the historical growth rate of 2.3%. Job growth has been consistent and solid. The stock market has been strong, reflecting robust corporate profits. It has, in fact, been the longest US economic expansion on record.

Yet those figures were masking some very troubling signs, even before the pandemic. Life expectancy in the US has been declining, largely due to drug overdoses, alcohol abuse and suicides. Consumer debt hit record highs in 2019 and bankruptcy rates were already rising. Food insecurity has been an epidemic on college campuses for years.

So, while top-line economic figures painted a rosy picture, there was rising evidence that something troubling was afoot. The Business Roundtable partly acknowledged this fact with its statement discarding the notion that creating shareholder value is the sole purpose of a business. There are also a number of initiatives designed to replace GDP with broader measures.

The truth is that our well-being can’t be reduced to a few tidy metrics, and we need more meaning in our lives than more likes on social media. Probably the most important thing that the digital revolution has to teach us is that technology should serve people and not the other way around. If we really want to change the world for the better, that’s what we need to keep in mind.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels


Unlocking the Power of Cause and Effect

GUEST POST from Greg Satell

In 2011, IBM’s Watson system beat the best human players on the game show Jeopardy! Since then, machines have shown that they can outperform skilled professionals in everything from basic legal work to diagnosing breast cancer. It seems that machines just get smarter and smarter all the time.

Yet that is largely an illusion. While even a very young human child understands the basic concept of cause and effect, computers rely on correlations. In effect, while a computer can associate the sun rising with the day breaking, it doesn’t understand that one causes the other, which limits how helpful computers can be.

That’s beginning to change. A group of researchers, led by artificial intelligence pioneer Judea Pearl, are working to help computers understand cause and effect based on a new causal calculus. The effort is still in its nascent stages, but if they’re successful we could be entering a new era in which machines not only answer questions, but help us pose new ones.

Observation and Association

Most of what we know comes from inductive reasoning. We make some observations and associate those observations with specific outcomes. For example, if we see animals going to drink at a watering hole every morning, we would expect to see them at the same watering hole in the future. Many animals share this type of low-level reasoning and use it for hunting.

Over time, humans learned how to store these observations as data and that’s helped us make associations on a much larger scale. In the early years of data mining, data was used to make very basic types of predictions, such as the likelihood that somebody buying beer at a grocery store will also want to buy something else, like potato chips or diapers.

The big advance of the last decade or so is that improvements in algorithms, such as neural networks, have allowed us to make much more complex associations. To take one example, systems that have observed thousands of mammograms have learned to identify the ones that show a tumor with a very high degree of accuracy.

However, and this is a crucial point, the system that detects cancer doesn’t “know” it’s cancer. It doesn’t associate the mammogram with an underlying cause, such as a gene mutation or lifestyle choice, nor can it suggest a specific intervention, such as chemotherapy. Perhaps most importantly, it can’t imagine other possibilities and suggest alternative tests.

Confounding Intervention

The reason that correlation is often very different from causality is the presence of something called a confounding factor. For example, we might find a correlation between high readings on a thermometer and ice cream sales and conclude that if we put the thermometer next to a heater, we can raise sales of ice cream.
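
To make the thermometer example concrete, the sketch below (an illustrative Python simulation with made-up numbers, not Pearl’s formal calculus) lets temperature drive both the thermometer reading and ice cream sales. The two are strongly correlated, yet an intervention on the reading leaves sales unchanged.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Temperature is the hidden common cause (the confounder).
temperature = rng.normal(25.0, 5.0, n)

# Both variables are driven by temperature; neither causes the other.
thermometer = temperature + rng.normal(0.0, 0.5, n)               # what we read
ice_cream_sales = 100 + 8 * temperature + rng.normal(0.0, 10.0, n)  # what we sell

# Observationally, readings and sales are strongly correlated.
print("corr(reading, sales):", round(np.corrcoef(thermometer, ice_cream_sales)[0, 1], 3))

# Intervention: put the thermometer next to a heater, i.e. do(reading = 60).
# Sales are generated from temperature, not from the reading, so forcing the
# reading leaves the distribution of sales untouched.
sales_under_intervention = 100 + 8 * temperature + rng.normal(0.0, 10.0, n)

print("mean sales, observed:        ", round(ice_cream_sales.mean(), 1))
print("mean sales, do(reading = 60):", round(sales_under_intervention.mean(), 1))
```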

I know that seems silly, but problems with confounding factors arise in the real world all the time. Data bias is especially problematic. If we find a correlation between certain teachers and low test scores, we might assume that those teachers are causing the low test scores when, in actuality, they may be great teachers who work with problematic students.

Another example is the high degree of correlation between criminal activity and certain geographical areas, where poverty is a confounding factor. If we use zip codes to predict recidivism rates, we are likely to give longer sentences and deny parole to people because they are poor, while those with more privileged backgrounds get off easy.

These are not at all theoretical examples. In fact, they happen all the time, which is why caring, competent teachers can, and do, get fired for those particular qualities and people from disadvantaged backgrounds get mistreated by the justice system. Even worse, as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it’s so important that we design our systems to be auditable, explainable and transparent.

Imagining A Counterfactual

Another confusing thing about causation is that not all causes are the same. Some causes are sufficient in themselves to produce an effect, while others are necessary, but not sufficient. Obviously, if we intend to make some progress we need to figure out what type of cause we’re dealing with. The way to do that is by imagining a different set of facts.

Let’s return to the example of teachers and test scores. Once we have controlled for problematic students, we can begin to ask if lousy teachers are enough to produce poor test scores or if there are other necessary causes, such as poor materials, decrepit facilities, incompetent administrators and so on. We do this by imagining counterfactuals, such as “What if there were better materials, facilities and administrators?”
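
As a concrete (and entirely hypothetical) illustration of what “controlling for problematic students” means, the sketch below simulates data in which class composition confounds the teacher effect. Comparing raw averages exaggerates the “weak teacher” gap; comparing within strata of the confounder and then averaging recovers something close to the true effect. The column names and effect sizes are invented for illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)
n = 20_000

# A binary confounder (1 = class with many struggling students) influences
# both which teachers get assigned and the resulting test scores.
struggling_class = rng.binomial(1, 0.5, n)
# "Weak teacher" assignment is more common in struggling classes.
weak_teacher = rng.binomial(1, 0.2 + 0.4 * struggling_class)
# Scores depend on both the class composition and the teacher.
scores = 75 - 15 * struggling_class - 5 * weak_teacher + rng.normal(0, 5, n)

df = pd.DataFrame({"struggling": struggling_class,
                   "weak_teacher": weak_teacher,
                   "score": scores})

# Naive comparison: mixes the teacher effect with the class-composition effect.
naive = df.groupby("weak_teacher")["score"].mean()
print("naive gap:", round(naive[1] - naive[0], 2))

# Adjusted comparison: compare teachers within each stratum of the confounder,
# then average across strata (a simple backdoor adjustment).
by_stratum = df.groupby(["struggling", "weak_teacher"])["score"].mean().unstack()
adjusted_gap = (by_stratum[1] - by_stratum[0]).mean()
print("adjusted gap:", round(adjusted_gap, 2))  # close to the true effect of -5
```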

Humans naturally imagine counterfactuals all the time. We wonder what would be different if we took another job, moved to a better neighborhood or ordered something else for lunch. Machines, however, have great difficulty with things like counterfactuals, confounders and other elements of causality because there’s been no standard way to express them mathematically.

That, in a nutshell, is what Judea Pearl and his colleagues have been working on over the past 25 years, and many believe that the project is finally ready to bear fruit. Combining humans’ innate ability to imagine counterfactuals with machines’ ability to crunch almost limitless amounts of data could really be a game changer.

Moving Towards Smarter Machines

Make no mistake, AI systems’ ability to detect patterns has proven to be amazingly useful. In fields ranging from genomics to materials science, researchers can scour massive databases and identify associations that a human would be unlikely to detect manually. Those associations can then be studied further to validate whether they are useful or not.

Still, the fact that our machines don’t understand concepts like the fact that thermometers don’t increase ice cream sales limits their effectiveness. As we learn how to design our systems to detect confounders and imagine counterfactuals, we’ll be able to evaluate not only the effectiveness of interventions that have been tried, but also those that haven’t, which will help us come up with better solutions to important problems.

For example, in a 2019 study the Congressional Budget Office estimated that raising the national minimum wage to $15 per hour would result in a decrease in employment from zero to four million workers, based on a number of observational studies. That’s an enormous range. However, if we were able to identify and mitigate confounders, we could narrow down the possibilities and make better decisions.

While still nascent, the causal revolution in AI is already underway. McKinsey recently announced the launch of CausalNex, an open source library designed to identify cause and effect relationships in organizations, such as what makes salespeople more productive. Causal approaches to AI are also being deployed in healthcare to understand the causes of complex diseases such as cancer and evaluate which interventions may be the most effective.

Some look at the growing excitement around causal AI and scoff that it is just common sense. But that is exactly the point. Our historic inability to encode a basic understanding of cause and effect relationships into our algorithms has been a serious impediment to making machines truly smart. Clearly, we need to do better than merely fitting curves to data.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Designing Your Organization for Transformation

GUEST POST from Greg Satell

The March on Washington, in which Martin Luther King Jr. delivered his famous “I Have a Dream” speech, is one of the most iconic events in American history. So it shouldn’t be surprising that when anybody wants to drive change in the United States, they often begin with trying to duplicate that success.

Yet that’s a gross misunderstanding of why the march was successful. As I explain in Cascades, the civil rights movement didn’t become powerful because of the March on Washington, the March on Washington took place because the civil rights movement became powerful. It was part of the end game, not an opening shot.

Unfortunately, many corporate transformations make the same mistake. They try to drive change without preparing the ground first. So it shouldn’t be surprising that McKinsey has found that only about a quarter of transformational efforts succeed. Make no mistake, transformation is a journey, not a destination, and it begins with preparing the ground.

Start with a Keystone Change

Every successful transformation starts out with a vision, such as racial equality in the case of the civil rights movement. Yet to be inspiring, a vision needs to be aspirational, which means it is rarely achievable in any practical time frame. A good vision is more of a beacon than it is a landmark.

That’s probably why every successful transformation I found in my research first had to identify a keystone change which had a tangible and concrete objective, involved multiple stakeholders and paved the way for future change. In some cases, there are multiple keystone changes being pursued at once seeking to influence different institutions.

For example, King and his organization, the Southern Christian Leadership Conference (SCLC), mobilized southern blacks, largely through religious organizations, to influence the media and politicians. At the same time, through their work at the NAACP, Charles Hamilton Houston and Thurgood Marshall worked to influence the judicial system to eliminate segregation.

The same principle holds for corporate transformations. When Paul O’Neill set out to turn around Alcoa in the 1980s, he started by improving workplace safety. More recently, when Experian CIO Barry Libenson set out to move his company to the cloud, he started with internal APIs. In both cases, the stakeholders won over in achieving the keystone change also played a part in bringing about the larger vision.

Lead with Values

Throughout his career, Nelson Mandela was accused of being a communist, an anarchist and worse. Yet when confronted with these, he would always point out that nobody needed to guess what he believed, because it was all written down in the Freedom Charter way back in 1955. Those values signaled to everybody, both inside and outside of the anti-apartheid movement, what they were fighting for.

In a similar vein, when Lou Gerstner arrived at IBM in the early 90s, he saw that the once-great company had lost sight of its values. For example, its salespeople were famous for dressing formally, but that was merely an early manifestation of a value. The original idea was to be close to customers and, since most of IBM’s early customers were bankers, salespeople dressed formally. Yet if customers were now wearing khakis, it was okay for IBMers to do so as well.

Another long-held value at IBM was a competitive spirit, but IBM executives had started to compete with each other internally rather than working to beat the competition. So Gerstner worked to put a stop to the bickering, even firing some high-placed executives who were known for infighting. He made it clear, through personal conversations, emails and other channels, that in the new IBM the customer would come first.

What’s important to remember about values is, if they are to be anything more than platitudes, you have to be willing to incur costs to live up to them. When Nelson Mandela rose to power, he couldn’t oppress white South Africans and live up to the values in the Freedom Charter. At IBM, Gerstner was willing to give up potential revenue on some sales to make his commitment to the customer credible.

Build a Network of Small Groups

With attendance at its weekend services exceeding 20,000, Rick Warren’s Saddleback Church is one of the largest congregations in the world. Yet much like the March on Washington, the mass of people obscures the networks that underlie the church and are the source of its power.

The heart of Saddleback Church is the prayer groups of six to eight people that meet each week, build strong ties and support each other in matters of faith, family and career. It is the loose connections between these small groups that give Saddleback its combination of massive reach and internal coherence, much like the networks of small groups convened in front of the Lincoln Memorial during the civil rights movement.

One of the key findings of my research into social and political movements is that they are driven by small groups, loosely connected, but united by a common purpose. Perhaps not surprisingly, research has also shown that the structure of networks plays a major role in organizational performance.

That’s why it’s so important to network your organization by building bonds that supersede formal relationships. Experian, for example, has built a robust network of clubs, where employees can share a passion such as bike riding, and employee resource groups, which are more focused on identity. While these activities are unrelated to work, the company has found that they help employees span boundaries in the organization and collaborate more effectively.

All too often, we try to break down silos to improve information flow. That’s almost always a mistake. To drive a true transformation, you need to connect silos so that they can coordinate action.

Make the Shift from Hierarchies to Networks

In an earlier age, organizations were far more hierarchical. Power rested at the top. Orders went down, information flowed up and decisions were made by a select priesthood of vaunted executives. In today’s highly connected marketplace, that’s untenable. The world has become fast and hierarchies are simply too slow.

That’s especially true when it comes to transformation. It doesn’t matter if the order comes from the top. If the organization itself isn’t prepared, any significant transformation is unlikely to succeed. That’s why you need to lead with vision, establish a keystone change that involves multiple stakeholders and work deliberately to network your organization.

Yet perhaps most importantly, you need to understand that in a networked world, power no longer resides at the top of hierarchies, but emanates from the center of networks. You move to center by continually widening and deepening connections. That’s how you drive a true transformation.

None of this happens overnight. It takes some time. That’s why the desire for change is not nearly as important as the will to prepare for it.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Strategy for a Post-Digital World

GUEST POST from Greg Satell

For decades, the dominant view of strategy was based on Michael Porter’s ideas about competitive advantage. In essence, he argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods.

Yet digital technology blew apart old assumptions. As technology cycles began to outpace planning cycles, traditional firms were often outfoxed by smaller competitors that were faster and more agile. Risk-averse corporate cultures needed to learn how to “fail fast” or simply couldn’t compete.

Today, as the digital revolution is coming to an end, we will need to rethink strategy once again. Increasingly, we can no longer just move fast and break things, but will have to learn how to prepare, rather than just adapt, build deep collaborations and drive skills-based transformations. Make no mistake, those who fail to make the shift will struggle to survive.

Learning to Prepare Rather Than Racing to Adapt

The digital age was driven, in large part, by Moore’s law. Every 18 months or so, a new generation of chips would come out of fabs that was twice as powerful as what came before. Firms would race to leverage these new capabilities and transform them into actual products and services.

That’s what made agility and adaptation key competitive attributes over the past few decades. When the world changes every 18 months, you need to move quickly to leverage new possibilities. Today, however, Moore’s Law is ending and we’ll have to shift to new architectures, such as quantum, neuromorphic and, possibly, biological computers.

Yet the shift to this new era of heterogeneous computing will not be seamless. Instead of one fairly simple technology based on transistors, we will have multiple architectures that involve very different logical principles. These will need new programming languages and will be applied to solve very different problems than digital computers have been.

Another shift will be from bits to atoms, as fields such as synthetic biology and materials science advance exponentially. As our technology becomes infinitely more powerful, there are also increasingly serious ethical concerns. We will have to come to some consensus on issues like what accountability a machine should have and to what extent we should alter the nature of life.

If there is one thing the Covid-19 crisis has shown, it is that if you don’t prepare, no amount of agility will save you.

Treating Collaboration as a New Competitive Advantage

In 1980, IBM was at an impasse. Having already missed the market for minicomputers, a new market for personal computers was emerging. So, the company’s leadership authorized a team to set up a skunk works in Boca Raton, FL. A year later, the company would bring the PC to market and change computer history.

So, it’s notable that IBM is taking a very different approach to quantum computing. Rather than working in secret, it has set up its Q Network of government agencies, academic labs, customers and start-ups to develop the technology. The reason? Quantum computing is far too complex for any one enterprise to pursue on its own.

“When we were developing the PC, the challenge was to build a different kind of computer based on the same technology that had been around for decades,” Bob Sutor, who heads up IBM’s Quantum effort, told me. “In the case of quantum computing, the technology is completely different and most of it was, until fairly recently, theoretical,” he continued. “Only a small number of people understand how to build it. That requires a more collaborative innovation model to drive it forward.”

It’s not just IBM either. We’re seeing similar platforms for collaboration at places like the Manufacturing Institutes, JCESR and the Critical Materials Institute. Large corporations, rather than trying to crush startups, are creating venture funds to invest in them. The truth is that the problems we need to solve in the post-digital age are far too complex to go it alone. That’s why today it’s not enough to have a market strategy; you need to have an ecosystem strategy.

Again, the Covid-19 crisis is instructive, with unprecedented collaborative efforts driving breakthroughs.

Drive Skills-Based Transformations

In the digital era, incumbent organizations needed to learn new skills. Organizations that mastered these skills, such as lean manufacturing, design thinking, user centered design and agile development, enjoyed a significant competitive advantage. Unfortunately, many firms still struggle to deploy critical skills at scale.

As digital technology enters an accelerated implementation phase, the need to deploy these skills at scale will only increase. You can’t expect to leverage technology without empowering your people to use it effectively. That’s why skills-based transformations have become every bit as important as strategic or technology-driven transformations.

As we enter the new post-digital era the need for skills-based transformations will only increase. Digital skills, such as basic coding and design, are relatively simple. A reasonably bright high school student can become proficient in a few months. As noted above, however, the skills needed for this new era will be far more varied and complex.

To be clear, I am not suggesting that everybody will need to have deep knowledge about things like quantum mechanics, neurology or genomics a decade from now any more than everybody needs to write code today. However, we will increasingly have to collaborate with experts in those fields and have some sort of basic understanding.

Making the Shift from Disrupting Markets to Pursuing Grand Challenges

The digital economy was largely built on disruption. As computer chips became exponentially faster and cheaper, innovative firms could develop products and services that could displace incumbent industries. Consider that a basic smartphone today can replace a bundle of technologies, such as video recorders, GPS navigators and digital music players, that would have cost hundreds of thousands of dollars when they were first introduced.

This displacement process has been highly disruptive, but there are serious questions about whether it’s been productive. In fact, for all the hype around digital technology “changing the world,” productivity has been mostly depressed since the 1970s. In some ways, such as mental health and income inequality, we are considerably worse off than we were 40 or 50 years ago.

Yet the post-digital era offers us a much greater opportunity to pursue grand challenges. Over the next few decades, we’ll be able to deploy far more powerful technologies to solve problems like cancer, aging and climate change. It is, in the final analysis, these physical world applications that can not only change our lives for the better, but open up massive new markets.

The truth is that the future tends to surprise us and nobody can say for sure what the next few decades will look like. Strategy, therefore, can’t depend on prediction. However, what we can do is prepare for this new era by widening and deepening connections throughout relevant ecosystems, acquiring new skills and focusing on solving meaningful problems.

In the face of uncertainty, the best way to survive is to make yourself useful.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


How Consensus Kills Innovation

GUEST POST from Greg Satell

“I hate consensus,” legendary Silicon Valley coach Bill Campbell used to growl. The problem, as the authors explain in the book, Trillion Dollar Coach, wasn’t that he didn’t want people to get along, but that an easy consensus often leads to groupthink and inferior decisions. It’s often just easier to fall in line than to engage in vigorous debate.

Research bears this out. In one study where college students were asked to solve a murder mystery, homogenous groups that formed an easy consensus felt more successful, but actually performed worse than more diverse teams that argued and voiced different viewpoints. When everybody agrees, nobody questions.

Make no mistake. If an idea is big enough, some people aren’t going to like it. Some will argue against it passionately and others may even try to actively undermine it. Yet rather than working to silence those voices, we need to learn to bring them to the fore. That’s how we can test our assumptions, consider other alternatives and, ultimately, come up with better ideas.

The Dangers of Consensus

Whenever the Harlem Globetrotters play the Washington Generals, there’s no doubt what the outcome will be, because the point isn’t to have a genuine contest. The games are essentially theatre set up to entertain the audience. All too often, we set up meetings in very much the same way — designed to reach a particular conclusion for the sake of expediency.

Unfortunately, leaders have strong incentives to drive quickly toward a consensus. Listening to dissenting views takes time and energy and we want to get things done quickly and move forward. So, it’s tempting to stock the room with people who are already on board and present the idea as a fait accompli.

Even if a leader isn’t consciously designing meetings for consensus, dissenting views can get squelched. In a famous series of conformity studies done by Solomon Asch in the 1950s, it was shown that we have a strong tendency to agree with the majority opinion even if it is obviously wrong. Subsequent research has generally confirmed the findings.

The truth is that majorities don’t just rule, they also influence. We can’t count on one or two lone voices having the courage to speak up. That’s why it’s not enough to simply listen to dissenting views, we must actively seek them out.

Uncovering Dissent

The biggest mistake a leader can make is to assume that they have somehow built a culture that is so unique, and that people feel so secure that they will voice their true views. We have to design for debate—it won’t just happen on its own—and there are several techniques that can help us do that.

The first is changing meeting structure. If the most senior person in the meeting voices an opinion, others will tend to fall in line. So, starting with the most junior person and then working up will encourage more debate. Another option is to require everyone to voice an opinion, either through a document or a conversation with a senior leader, before the meeting starts.

Another strategy that is often effective is called a pre-mortem analysis. Similar to a post-mortem analysis in which you try to figure out what went wrong, in a pre-mortem you assume a project has failed in the future and try to guess what killed it. It’s a great way to surface stuff you might have missed.

A third option is to set up a red team. This is an independent group whose sole purpose is to poke holes in a plan or a project. For example, while planning the Osama bin Laden Raid, a red team was set up to look at the same evidence and try to come up with different conclusions. They were able to identify a few key weaknesses in the plan that were then corrected.

Overcoming Opposition

While opening up a healthy discussion around dissenting views helps drive innovation forward, ignoring opposition can lead to its demise. Every significant innovation represents change, which creates winners and losers. There will always be some who will be so vehemently opposed that they will try to undermine an innovation moving forward.

Since my book Cascades was published, I’ve had the opportunity to work with a number of organizations working to drive transformation, and I’ve been amazed at how reluctant many are to identify entrenched opposition and build a strategy to overcome it. Often, they aren’t willing to admit that opposition is relevant or even that it exists at all.

Unlike those who merely have dissenting views, but share objectives and values with the transformation team, entrenched opposition wants to stop change in its tracks. For example, as I have previously noted, it was internal opposition, chiefly from franchisees and shareholders, not a lack of strategy or imagination, that killed Blockbuster Video.

That’s why, much like dissenting views, it’s important to bring opposition to the fore. In Blockbuster’s case, there were various actions that management could have taken to mollify the opposition and address some of the concerns. That wouldn’t have guaranteed success, but it would have made it far more likely.

Innovation Must Be Led

Steve Jobs was, by all accounts, a mediocre engineer. It was his passion and vision that made Apple the most valuable company on the planet. In a similar vein, there were plenty of electric car companies before Tesla, but Elon Musk was the first who showed that the technology can succeed in the marketplace.

Can you imagine what would have happened if Jobs had the iPhone designed by a committee? Or if Musk had put Tesla’s business plan to a vote? It’s hard to see either having had much success. What we would have ended up with is a watered-down version of the initial idea.

Yet all too often, managers seek out consensus because it’s easy and comfortable. It’s much harder to build a culture of trust that can handle vigorous debate, where people are willing to voice their opinions and listen to those of others without it getting personal. That, however, is what innovative cultures do.

Big ideas are never easy. Almost by definition, they are unlikely, fraught with risk and often counterintuitive. They need champions to inspire and empower beliefs around them. That’s why leadership drives innovation and consensus often kills it.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


America Needs to Innovate Its Innovation Ecosystem

GUEST POST from Greg Satell

The world today just seems to move faster and faster all the time. From artificial intelligence and self-driving cars to gene editing and blockchain, it seems like every time you turn around, there’s some newfangled thing that promises to transform our lives and disrupt our businesses.

Yet a paper published by a team of researchers in Harvard Business Review argues that things aren’t as they appear. They point out that total factor productivity growth has been depressed since 1970 and that recent innovations, despite all the hype surrounding them, haven’t produced nearly the impact of those earlier in the 20th century.

The truth is that the digital revolution has been a big disappointment and, more broadly, technology and globalization have failed us. However, the answer won’t be found in snazzier gadgets or some fabulous “Golden Era” of innovation of years long past. Rather we need to continually innovate how we innovate to solve problems that are relevant to our future.

The Productivity Paradox, Then and Now

In the 1970s and 80s, business investment in computer technology was increasing by more than 20% per year. Strangely though, productivity growth had decreased during the same period. Economists found this turn of events so bizarre that they called it the “productivity paradox” to underline their confusion.

Yet by the late 1990s, increased computing power combined with the Internet to create a new productivity boom. Many economists hailed the digital age as a “new economy” of increasing returns, in which the old rules no longer applied and a small initial advantage, a first mover advantage, would lead to market dominance. The mystery of the productivity paradox, it seemed, had been solved. We just needed to wait for technology to hit critical mass.

Yet by 2004, productivity growth had fallen once again, and it has not recovered since. Today, more than a decade later, we’re in the midst of a second productivity paradox, just as mysterious as the first one. New technologies like mobile computing and artificial intelligence are there for everyone to see, but they have done little, if anything, to boost productivity.

Considering the rhetoric of many of the techno-enthusiasts, this is fairly shocking. Compare the meager eight years of elevated productivity that digital technology produced with the 50-year boom in productivity created in the wake of electricity and internal combustion and it’s clear that the digital economy, for all the hype, hasn’t achieved as much as many would like to think.

Are Corporations to Blame?

One explanation that the researchers give for the low productivity growth is that large firms are cutting back on investment in science. They explain that since the 1980s, a “combination of shareholder pressure, heightened competition, and public failures led firms to cut back investments in science” and point to the decline of Bell Labs and Xerox PARC as key examples.

Yet a broader analysis tells a different story. Yes, Bell Labs and Xerox PARC, while they still exist, are but a shadow of their former selves. Others, however, such as IBM Research, have expanded their efforts. Microsoft Research, established in 1991, does cutting-edge science. Google runs a highly innovative science program that partners with researchers in the academic world.

So anecdotally speaking, the idea that corporations haven’t been investing in science seems off base. However, the numbers tell an even stronger story. Data from the National Science Foundation shows that corporate research has increased from roughly 40% of total investment in the 1950s and 60s to more than 60% today. Overall R&D spending has risen over time.

Also, even where corporations have cut back, new initiatives often emerge. Consider the DuPont Experimental Station, which, in an earlier era, gave birth to innovations such as nylon, Teflon and neoprene. In recent years, DuPont has cut back on its own research, but the facility, which still employs 2,000 researchers, is also home to the Delaware Incubation Space, which incubates new entrepreneurial businesses.

The Rise of Physical Technologies

One theory about the productivity paradox is that investment in digital technology, while significant, is simply not big enough to move the needle. Even today, at the height of the digital revolution, information and communication technologies only make up about 6% of GDP in advanced economies.

The truth is that we still live in a world largely made up of atoms, not bits, and we continue to spend most of our money on what we live in, ride in, eat and wear. If we expect to improve productivity growth significantly, we will have to do it in the physical world. Fortunately, there are two technologies that have the potential to seriously move the needle.

The first is synthetic biology, driven largely by advances in gene editing such as CRISPR, which have dramatically lowered costs while improving accuracy. In fact, over the last decade efficiency in gene sequencing has far outpaced Moore’s Law. These advances have the potential to drive important productivity gains in healthcare, agriculture and, to a lesser extent, manufacturing.

The second nascent technology is a revolution in materials science. It has traditionally been a slow-moving field, but over the past decade improved simulation techniques and machine learning have dramatically accelerated materials discovery, which may have a tremendous impact on manufacturing, construction and renewable energy.

Yet none of these gains are assured. To finally break free of the productivity paradox, we need to look to the future, not the past.

Collaboration is the New Competitive Advantage

In 1900, General Electric established the first corporate research facility in Schenectady, New York. Later came similar facilities at leading firms such as Kodak, AT&T and IBM. At the time, these were some of the premier scientific institutions in the world, but they would not remain so.

In the decades that followed, new academic institutions, such as the Institute for Advanced Study, as well as the increasing quality of American universities, became important drivers of innovation. Later, in the 1940s, 50s and 60s, federal government agencies, such as DARPA, NIH and the national labs, became hotbeds of research. More recently, the Silicon Valley model of venture-funded entrepreneurship has risen to prominence.

Each of these did not replace what came before, but added to it. As noted above, we still have excellent corporate research programs, academic labs and public scientific institutions, as well as an entrepreneurial investment ecosystem that is the envy of the world. Yet none of these will be sufficient for the challenges ahead.

The model that seems to be taking hold now is that of consortia, such as JCESR in energy storage, Partnership on AI for cognitive technologies and the Manufacturing USA Institutes, that bring together diverse stakeholders to drive advancement in key areas. Perhaps most conspicuously, unprecedented collaboration sparked by the Covid-19 crisis has allowed us to develop therapies and vaccines faster than previously thought possible.

Most of all, we need to come to terms with the fact that the answers to the challenges of the future will not be found in the past. The truth is that we need to continually innovate how we innovate if we expect to ever return to an era of renewed productivity growth.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Four Keys to Decision Making Every Leader Should Know

Four Keys to Decision Making Every Leader Should Know

GUEST POST from Greg Satell

A leader’s primary responsibility is to make tough decisions. If the issues are unimportant and the choices are clear, someone lower down in the organization usually deals with them. The stuff that comes to you is mostly what others are unable, or unwilling, to decide themselves. That leaves you with the close calls.

All too often, we buy into the Hollywood version of leadership in which everything boils down to a single moment when the chips are down. That’s when the hero of the story has a moment of epiphany, makes a decision and sets things going in a completely new direction. Everyone is dazzled by the sudden stroke of genius.

In real life, it’s rare that things boil down to a single moment. It’s more of a continuum. In fact, the most consequential decisions you make often don’t seem that important at the time and ones that seemed pivotal can turn out to be trivial. What is true, however, is that the decisions you make will define you as a leader. You need to learn to make them wisely.

1. Your Job Is to Make Tough Decisions, With Incomplete Information, in a Compressed Time Frame

Some years ago, a young woman who worked for me went to run the digital division of another company. It was a big step for her, especially since her new employer’s digital effort was still in its infancy and her boss would be depending on her insight and expertise to drive the strategy. To a great extent, she would make the big decisions herself.

She called me one day, depressed and apprehensive. “How were you already so confident in your decisions?” she asked. Having constantly agonized over all the decisions I had to make as a CEO, I was flabbergasted and asked her why she thought making decisions was easy for me. “Well, you always seemed confident and that made us confident.”

Finally, I understood what she was getting at. I wasn’t confident I was making the right decision, I was confident a decision had to be made, which isn’t the same thing. The truth was that I made at least a hundred decisions a day. If I wavered over each and every one, I wouldn’t ever get anything done. I wasn’t confident. I was busy.

The truth is that decision making is a skill that you acquire over time. You get better at it by doing it. You decide, see what happens and learn a little bit each time. If you shirk that responsibility, you not only fail to gain the skill, you lose the respect of those you lead. They have less confidence in the decisions you do make and they are less likely to turn out well.

2. Your Brain Works Against You

Leaders have different styles. Some are instinctive. They like to fly by the seat of their pants and go with their gut. Others are more deliberate and process driven. They like to pore over data, get input from a number of different perspectives and make decisions in a cool, rational way. Most people are a blend of the two.

Whatever your leadership style, however, you probably vastly underestimate how your brain can trick you into making a bad decision. For example, we tend to zero in on information that’s easiest to access (known as availability bias). We also tend to lock onto the first information we see (called priming), and that affects how we see subsequent data (framing).

So even while we think we’ve examined an issue carefully, our interpretation of that data may be highly slanted in one direction or another. Even highly trained specialists are susceptible. One study of radiologists found that they contradict themselves 20% of the time, and another study of auditors found similar results. Daniel Kahneman’s new book Noise documents similar variance in just about every field and type of decision you can imagine.

There are several things we can do to make better decisions. First, for decisions we make regularly, research has shown that devising strict criteria can improve accuracy. For bigger, more complex decisions, formal processes like pre-mortems and red teams, which help surface opposing perspectives, can help overcome biases.

3. Not Every Decision Needs to be Made Right Away

There are always more things to do than there is time in the day. Making decisions quickly can certainly help clear your desk. Besides, when subordinates are pestering you for a decision it makes them go away and gives you some peace. Leaders who are seen as decisive instill confidence in those around them.

Still, there are often times when you’re much better off waiting. Sometimes an issue arises and you simply don’t have a good fix on what to do about it. You outline some options, but none look particularly appealing. It feels like there should be better choices out there, but none are readily apparent. Put simply, you are at a loss.

If the matter isn’t urgent, you can take a time out. Simply put it on the shelf for a week or two. Agree to convene a meeting at that time, review the options and make a final determination. I’ve been amazed how often a perfect solution just seems to present itself in the interim and it’s rare that something can’t wait a few weeks.

The key to this decision hack is to not let it devolve into an excuse for dilly-dallying or “paralysis by analysis.” If the agreed amount of time goes by and nothing fortuitous comes your way, you simply have to bite the bullet and decide among less appealing options.

4. Your Position Gives You Power, But Your Decisions Make You a Leader

When you have a position of power, either because you founded your own organization or because you were promoted to a senior position, you have the ability to influence the actions of others. You can, through hard power, coerce others to do what you want through combinations of threats and incentives. Unfortunately, exercising hard power has a corrosive effect on a relationship.

Much better is to be able to wield soft power, which Joseph Nye, who coined the term, defined as the ability to influence others without coercion. To do that requires that you build up confidence and admiration, which is no easy task. You can’t simply bully or bribe people into trusting you.

Being in a position of responsibility means that you have to make decisions without all the facts, in a rapidly changing context, often in a compressed time frame. You do so in the full knowledge that if you are wrong, you will bear the blame and no one else. You can never be certain of your decision, only that it is you who has to make one.

That’s a hard bridge to cross and many, if not most, are never quite able to get there. Yet that’s what makes the difference between a leader and someone who merely wields authority, the ability and willingness to bear the burden of your decisions, often and repeatedly, and remain focused on the mission of the enterprise.

That’s why we admire great leaders so much. True authority doesn’t come from a job title or even from great success, it comes from strength of character so inherent that it inspires others to surrender themselves to a greater cause.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Why Stupid Questions Are Important to Innovation

Why Stupid Questions Are Important to Innovation

GUEST POST from Greg Satell

Sixteen-year-old Gracie Cunningham created a firestorm recently when she posted a video to TikTok asking “is math real?” More specifically, she wanted to know why ancient mathematicians came up with algebraic concepts such as “y=mx+b.” “What would you need it for?” she asked, when they didn’t even have plumbing.

The video went viral on Twitter, gathering millions of views, and the social media universe immediately pounced, with many ridiculing how stupid it was. Mathematicians and scientists, however, felt otherwise and remarked how profound her questions were. Cornell’s Steve Strogatz even sent her a thoughtful answer to her question.

We often overlook the value of simple questions, because we think intelligence has something to do with the ability to recite rote facts. Yet intellect is not about knowing all the answers, but about asking better questions. That’s how we expand knowledge and gain deeper understanding. In fact, the most profound answers often come from seemingly silly questions.

What Would It Be Like to Ride on a Bolt of Lightning?

Over a century ago, a teenage boy not unlike Gracie Cunningham asked a question that was seemingly just as silly as hers. He wanted to know what it would be like to ride on a bolt of lightning shining a lantern forward. Yet much like Gracie’s, his question held a deceptive profundity. You see, a generation earlier, the great physicist James Clerk Maxwell had published his famous equations, which established that the speed of light was constant.

To understand why the question was so important, think about riding on a train that’s traveling at 40 miles an hour and tossing a ball forward at 40 miles an hour. To you, the ball appears to be traveling at 40 miles an hour, but to someone standing still outside the train the ball would appear to be going 80 miles an hour (40+40).

So now you can see the problem with riding on a bolt of lightning with a lantern. According to the principle by which the ball on the train appears to be traveling at 80 miles an hour, the light from the lantern should be traveling at twice the speed of light. But according to Maxwell’s equations, the speed of light is fixed.

It took Albert Einstein 10 years to work it all out, but in 1905, he published his theory of special relativity, which stated that, while the speed of light is indeed constant, time and space are relative. As crazy as that sounds, you only need to take a drive in your car to prove it’s true. GPS satellites are calibrated according to Einstein’s equations, so if you get to where you want to go you have, in a certain sense, proved the special theory of relativity.
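For readers who want to see the arithmetic, here is a minimal sketch of how special relativity dissolves the contradiction, using the standard relativistic velocity-addition formula (the specific numbers below are illustrative and not from the original article):

$$ w = \frac{u + v}{1 + \dfrac{uv}{c^{2}}} $$

For the train, $u = v \approx 18$ m/s (about 40 miles an hour), so $uv/c^{2} \approx 4 \times 10^{-15}$ and $w$ is indistinguishable from the everyday answer of 80 miles an hour. For the lantern, $v = c$, and the formula gives $w = \dfrac{u + c}{1 + u/c} = c$. However fast the bolt of lightning travels, the light from the lantern still moves at exactly the speed of light.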

A bit later Einstein asked another seemingly silly question about what it would be like to travel in an elevator in space, which led him to his general theory of relativity.

Who Shaves the Barber’s Beard?

Around the time young Albert Einstein was thinking about riding on a bolt of lightning, others were pondering an obscure paradox about a barber, which went something like this:

If the barber shaves every man who does not shave himself, who shaves the barber?

If he shaves himself, he violates the statement, and if he doesn’t shave himself, he also violates it.

Again, like Gracie’s question, the barber’s paradox seems a bit silly and childish. In reality it is a more colloquial version of Russell’s paradox, about the set of all sets that are not members of themselves, which shook the foundations of mathematics a century ago. Statements, such as 2+2=4, are supposed to be either true or false. If contradictions could exist, it would represent a massive hole at the center of logic.
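For those who like to see it written out, a standard set-theoretic formulation of Russell’s paradox (not taken from the original article) runs as follows:

$$ R = \{\, x \mid x \notin x \,\} \quad \Longrightarrow \quad R \in R \iff R \notin R $$

Substitute “the set of all sets that do not contain themselves” for “the barber who shaves every man who does not shave himself” and the two puzzles are identical: either answer contradicts the definition.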

Eventually, the crisis came to a head and David Hilbert, the greatest mathematician of the age, created a program of questions that, if answered in the affirmative, would resolve the dilemma. To everyone’s surprise, in short order, a young scholar named Kurt Gödel would publish his incompleteness theorems, which showed that a logical system could be either complete or consistent, but not both.

Put more simply, Gödel proved that every logical system would always crash. It was only a matter of time. Logic would remain broken forever. However, there was a silver lining to it all. A few years later, Alan Turing would build on Gödel’s work in his paper on computability, which itself would usher in the new era of modern computing.

Why Can’t Our Immune System Kill Cancer Cells?

The idea that our immune system could attack cancer cells doesn’t seem that silly on the surface. After all, it not only regularly kills other pathogens, such as bacteria and viruses, but in some cases, as with autoimmune disorders like multiple sclerosis, lupus and rheumatoid arthritis, it even attacks our own cells. Why would it ignore tumors?

Yet as Charles Graeber explains in his recent book, The Breakthrough, for decades most of the medical world dismissed the notion. Yes, there had been a few scattered cases in which cancer patients who had a severe infection had seen their tumors disappear, but every time researchers tried to design an actual cancer therapy based on immune response, it failed miserably.

The mystery was eventually solved by a scientist named Jim Allison who, in 1995, had an epiphany. Maybe, he thought, the problem wasn’t that our immune system can’t identify and attack cancer cells, but rather that the immune response is impeded somehow. He figured that if he could block whatever was impeding that response, it would revolutionize cancer care.

Today, cancer immunotherapy is considered to be the fourth pillar of cancer treatment, and nobody questions whether our immune system can be deployed to fight cancer. Jim Allison won the Nobel Prize for his work in 2018.

The Power of a Question

Answers are easy. They resolve matters. Questions are harder. They point out gaps in our knowledge and inadequacies in our understanding. They make us uncomfortable. That’s why we are so apt to dismiss them altogether, so we can go about our business unhindered.

So it shouldn’t be surprising that young Gracie Cunningham’s TikTok garnered such strong reactions. It’s much easier to dismiss questions as silly than to take them on. That’s why Einstein was reduced to working in a patent office rather than at a university, why so many dismissed Russell’s paradox as meaningless and why Jim Allison had doors shut in his face for three years before he found a company willing to invest in his idea.

Yet what should also be obvious by now is that there is enormous value in raising questions that challenge things that we think we already know. Before questions were raised, it seemed obvious that time and space are absolute, that logical statements are either true or false and that our immune system can’t fight cancer.

The truth is that great innovators are not necessarily smarter, harder working or more ambitious than anyone else. Rather, they are the ones who are constantly looking for new questions to ask and new problems to solve.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
