Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

Triggering Radical Transformational Change


GUEST POST from Greg Satell

There’s an old adage that says we should never let a crisis go to waste. The point is that during a crisis there is a visceral sense of urgency, and resistance often falls by the wayside. We certainly saw that during the COVID-19 pandemic, when digital technologies such as video conferencing, online grocery shopping and telehealth went from fringe to mainstream in record time.

Seasoned leaders learn how to make good use of a crisis. Consider Bill Gates and his “Internet Tidal Wave” memo, which turned what could have been a mortal threat to Microsoft into a springboard to even greater dominance. Or how Steve Jobs used Apple’s near-death experience to reshape the ailing company into a powerhouse.

But what if we could prepare for a trigger before it happens? The truth is that indications of trouble are often clear long before the crisis arrives. Clearly, there were a number of warning signs that a pandemic was possible, if not likely. As every good leader knows, there’s never a shortage of looming threats. If we learn to plan ahead, we can make a crisis work for us.

The Plan Hatched in a Belgrade Cafe

In the fall of 1998, five young activists met in a coffee shop in Belgrade, Serbia. Although still in their twenties, they were already grizzled veterans. In 1992, they took part in student protests against the war in Bosnia. In 1996, they helped organize a series of rallies in response to Slobodan Milošević’s attempt to steal local elections.

Up to that point, their results had been decidedly mixed. The student protests were fun, but when the semester ended, everyone went home for the summer and that was the end of that. The 1996 protests were more successful, overturning the fraudulent results, but the opposition coalition, called “Zajedno,” soon devolved into infighting.

So they met in the coffee shop to discuss their options for the upcoming presidential election to be held in 2000. They knew from experience that they could organize rallies effectively and get people to the polls. They also knew that when they got people to the polls and won, Milošević would use his power and position to steal the election.

That would be their trigger.

The next day, six friends joined them and they called their new organization Otpor. Things began slowly, with mostly street theatre and pranks, but within two years their ranks had swelled to more than 70,000. When Milošević tried to steal the election they were ready, and what is now known as the Bulldozer Revolution erupted.

The Serbian strongman was forced to concede. The next year, Milošević was arrested and sent to The Hague to face charges of crimes against humanity. He would die in his prison cell in 2006, before his trial concluded.

Opportunity From the Ashes

In 2014, in the wake of the Euromaidan protests that swept the thoroughly corrupt autocrat Viktor Yanukovych from power, Ukraine was in shambles. Having been looted of roughly $100 billion (roughly the amount of the country’s entire GDP) and invaded by Russia, things looked bleak. Without western aid, the proud nation’s very survival was in doubt.

Yet for Vitaliy Shabunin and the Anti-Corruption Action Center, it was a moment he had been waiting for. He had established the organization with his friend Dasha Kaleniuk a few years earlier. Since then, they and a small staff had been working with international NGOs to document corruption and develop effective legislation to fight it.

With Ukraine’s history of endemic graft, which had greatly worsened under Yanukovych, progress had been negligible. Yet now, with the IMF and other international institutions demanding reform, Shabunin and Kaleniuk were instantly in demand to advise the government on instituting a comprehensive anti-corruption program, which passed in record time.

Yet they didn’t stop there either. “Our long-term strategy is to create a situation in which it will be impossible not to do anti-corruption reforms,” Shabunin would later tell me. “We are working to ensure that these reforms will be done, either by these politicians or by another, because they will lose their office if they don’t do these reforms.”

Vitaliy, Dasha and the Anti-Corruption Action Center continue to prepare for future triggers.

The Genius of Xerox PARC

One story that Silicon Valley folks love to tell involves Steve Jobs and Xerox. After the copier giant made an investment in Apple, which was then a fledgling company, it gave Jobs access to its Palo Alto Research Center (PARC). He then used the technology he saw there to create the Macintosh. Jobs built an empire based on Xerox’s oversight.

Yet the story misses the point. By the late 1960s, Xerox CEO Peter McColough knew that the copier business, while still incredibly profitable, was bound to be disrupted eventually. At the same time, it was becoming clear that computer technology was advancing quickly and, someday, would revolutionize how we worked. PARC was created to prepare for that trigger.

The number of groundbreaking technologies created at PARC is astounding. The graphical user interface, networked computing, object-oriented programming: the list goes on. Virtually everything that we came to know as “personal computing” had its roots in the work done at PARC in the 1970s.

Most of all, PARC saved Xerox. The laser printer invented there would bring in billions and, eventually, largely replace the copier business. Some technologies were spun off into new companies, such as Adobe and 3Com, with an equity stake going to Xerox. And, of course, the company even made a tidy profit off the Macintosh, because of the equity stake that gave Jobs access to the technology in the first place.

Transforming an Obstacle Into a Design Constraint

The hardest thing about change is that, typically, most people don’t want it. If they did, it would already be the normal state of affairs. That can make transformation a lonely business. The status quo has inertia on its side and never yields its power gracefully. The path for an aspiring changemaker can be heartbreaking and soul-crushing.

Many would see the near-certainty that Milošević would try to steal the election as an excuse to do nothing. Most people would look at the almost impossibly corrupt Yanukovych regime and see the idea of devoting your life to anti-corruption reforms as quixotic folly. It is extremely rare for a CEO whose firm dominates an industry to ask, “What comes after?”

Yet anything can happen and often does. Circumstances conspire. Events converge. Round-hole businesses meet their square-peg world. We can’t predict exactly when or where or how or what will happen, but we know that everybody and everything gets disrupted eventually. It’s all just a matter of time.

When that happens, resistance to change temporarily abates. So there’s lots to do and no time to waste. We need to empower our allies, as well as listen to our adversaries. We need to build out a network to connect with others who are sympathetic to our cause. Transformational change is always driven by small groups, loosely connected, but united by a common purpose.

Most of all, we need to prepare. A trigger always comes and, when it does, it brings great opportunity with it.

— Article courtesy of the Digital Tonto blog
— Image credits: Unsplash


We Need to Solve the Productivity Crisis


GUEST POST from Greg Satell

When politicians and pundits talk about the economy, they usually do so in terms of numbers. Unemployment is too high or GDP is too low. Inflation should be at this level or at that. You get the feeling that somebody somewhere is turning knobs and flicking levers in order to get the machine humming at just the right speed.

Yet the economy is really about our well-being. It is, at its core, our capacity to produce the goods and services that we want and need, such as the food that sustains us, the homes that shelter us and the medicines that cure us, not to mention all of the little niceties and guilty pleasures that we love to enjoy.

Our capacity to generate these things is determined by our productive capacity. Despite all the hype about digital technology creating a “new economy,” productivity growth over the past 50 years has been tremendously sluggish. If we are going to revive it and improve our lives, we need to renew our commitment to scientific capital, human capital and free markets.

Restoring Scientific Capital

In 1945, Vannevar Bush delivered a report, Science, The Endless Frontier, which argued that the US government needed to invest in “scientific capital” through basic research and scientific education. It set in motion a number of programs that laid the groundwork for America’s technological dominance during the second half of the century.

Bush’s report led to the development of America’s scientific infrastructure, including agencies such as the National Science Foundation (NSF), National Institutes of Health (NIH) and DARPA. Others, such as the National Labs and science programs at the Department of Agriculture, also contribute significantly to our scientific capital.

The results speak for themselves and returns on public research investment have been shown to surpass those in private industry. To take just one example, it has been estimated that the $3.8 billion invested in the Human Genome Project resulted in nearly $800 billion in economic impact and created over 300,000 jobs in just the first decade.

Unfortunately, we forgot those lessons. Government investment in research as a percentage of GDP has been declining for decades, limiting our ability to produce the kinds of breakthrough discoveries that lead to exciting new industries. What passes for innovation these days displaces workers, but does not lead to significant productivity gains.

So the first step to solving the productivity puzzle would be to renew our commitment to investing in the type of scientific knowledge that, as Bush put it, can “turn the wheels of private and public enterprise.” There was a bill before Congress to do exactly that, but unfortunately it got bogged down in the Senate due to infighting.

Investing In Human Capital

Innovation, at its core, is something that people do, which is why education was every bit as important to Bush’s vision as investment was. “If ability, and not the circumstance of family fortune, is made to determine who shall receive higher education in science, then we shall be assured of constantly improving quality at every level of scientific activity,” he wrote.

Programs like the GI Bill delivered on that promise. We made what is perhaps the biggest investment ever in human capital, sending millions to college and creating a new middle class. American universities, considered far behind their European counterparts earlier in the century, especially in the sciences, came to be seen as the best in the world by far.

Today, however, things have gone horribly wrong. A recent study found that about half of all college students struggle with food insecurity, which is probably why only 60% of students at 4-year institutions, and even fewer at community colleges, ever earn a degree. The ones who do graduate are saddled with decades of debt.

So the bright young people we don’t starve, we condemn to decades of what is essentially indentured servitude. That’s no way to run an entrepreneurial economy. In fact, a study by the Federal Reserve Bank of Philadelphia found that student debt has a measurable negative impact on new business creation.

Recommitting Ourselves To Free and Competitive Markets

There is no principle more basic to capitalism than that of free markets, which provide the “invisible hand” to efficiently allocate resources. When market signals get corrupted, we get less of what we need and more of what we don’t. Without vigorous competition, firms feel less of a need to invest and innovate, and become less productive.

There is abundant evidence that this is exactly what has happened. Since the late 1970s, antitrust enforcement has become lax, ushering in a new gilded age. While digital technology was hyped as a democratizing force, over 75% of industries have seen a rise in concentration levels since the late 1990s, which has led to a decline in business dynamism.

The problem isn’t just monopoly power dominating consumers, either, but also monopsony, the domination of suppliers by buyers, especially in labor markets. There is increasing evidence of collusion among employers designed to keep wages low, as well as an astonishing abuse of non-compete agreements, which have affected more than a third of the workforce.

In a sense, this is nothing new. Adam Smith himself observed in The Wealth of Nations that “Our merchants and master-manufacturers complain much of the bad effects of high wages in raising the price, and thereby lessening the sale of their goods both at home and abroad. They say nothing concerning the bad effects of high profits. They are silent with regard to the pernicious effects of their own gains. They complain only of those of other people.”

Getting Back On Track

In the final analysis, solving the productivity puzzle shouldn’t be that complicated. It seems that everything we need to do we’ve done before. We built a scientific architecture that remains unparalleled even today. We led the world in educating our people. American markets were the most competitive on the planet.

Yet somewhere we lost our way. Beginning in the early 1970s, we started reducing our investment in scientific research and public education. In the early 1980s, the Chicago school of competition law started to gain traction and antitrust enforcement began to wane. Since 2000, competitive markets in the United States have been in serious decline.

None of this was inevitable. We made choices and those choices had consequences. We can make other ones. We can choose to invest in discovering new knowledge, to educate our children without impoverishing them, to demand that our industries compete and to hold our institutions to account. We’ve done these things before and can do so again.

All that’s left is the will and the understanding that the economy doesn’t exist in the financial press, on the floor of the stock markets or in the boardrooms of large corporations, but in our own welfare as well as in our ability to actualize our potential and realize our dreams. Our economy should be there to serve our needs, not the other way around.

— Article courtesy of the Digital Tonto blog
— Image credits: Unsplash


Disinformation Economics


GUEST POST from Greg Satell

Marshall McLuhan, one of the most influential thinkers of the 20th century, described media as “extensions of man” and predicted that electronic media would eventually lead to a global village. Communities, he argued, would no longer be tied to a single, isolated physical space but would connect and interact with others on a world stage.

What often goes untold is that McLuhan did not see the global village as a peaceful place. In fact, he predicted it would lead to a new form of tribalism and result in a “release of human power and aggressive violence” greater than ever in human history, as long-separated, emotionally charged cultural norms would now constantly intermingle, clash and explode.

Today, the world looks a whole lot like the dystopia McLuhan described. Fringe groups, nation states and profit-seeking corporations have essentially weaponized information and we are all caught in the crossfire. While the situation is increasingly dire it is by no means hopeless. What we need isn’t more fact checking, but to renew institutions and rebuild trust.

How Tribes Emerge

We tend to think of the world we live in as the result of some grand scheme. In the Middle Ages, the cosmological argument posited the existence of an “unmoved mover” that set events in motion. James Bond movies always feature an evil genius. No conspiracy theory would be complete without an international cabal pulling the strings.

Yet small decisions, spread out over enough people, can create the illusion of a deliberate order. In his classic Micromotives and Macrobehavior, economist Thomas Schelling showed how even small and seemingly innocuous choices, when combined with those of others, can lead to outcomes no one intended or preferred.

Consider the decision to live in a particular neighborhood. Imagine a young couple who prefer to live in a mixed-race neighborhood but don’t want to be outnumbered. Schelling showed, mathematically, how, if everybody shares those same inclinations, the result is extreme segregation, even though that is exactly the opposite of what anyone intended.

This segregation model is an example of a Nash equilibrium, in which individual decisions eventually settle into a stable group dynamic. No one in the system has an incentive to change his or her decision. Yet just because an equilibrium is stable doesn’t mean it’s optimal or even preferable. In fact, some Nash equilibria, such as the famous prisoner’s dilemma and the tragedy of the commons, make everyone worse off.
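
To make the dynamic concrete, here is a minimal Python sketch of a Schelling-style model (my own illustration, with invented parameters, not Schelling’s original formulation): each agent is content so long as roughly a third of its neighbors are of its own type, yet the grid still sorts itself into segregated clusters.

```python
import random

SIZE = 20          # the world is a SIZE x SIZE grid (wrapping at the edges)
EMPTY = 0.10       # fraction of vacant cells agents can move into
THRESHOLD = 0.34   # an agent is content if >= 34% of its neighbors match it

def make_grid():
    cells = [None] * int(SIZE * SIZE * EMPTY)
    rest = SIZE * SIZE - len(cells)
    cells += ["red"] * (rest // 2) + ["blue"] * (rest - rest // 2)
    random.shuffle(cells)
    return [cells[r * SIZE:(r + 1) * SIZE] for r in range(SIZE)]

def neighbors(grid, r, c):
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            if (dr, dc) != (0, 0):
                yield grid[(r + dr) % SIZE][(c + dc) % SIZE]

def unhappy(grid, r, c):
    agent = grid[r][c]
    if agent is None:
        return False
    occupied = [n for n in neighbors(grid, r, c) if n is not None]
    return bool(occupied) and sum(n == agent for n in occupied) / len(occupied) < THRESHOLD

def step(grid):
    # every unhappy agent moves to a random vacant cell; returns how many moved
    movers = [(r, c) for r in range(SIZE) for c in range(SIZE) if unhappy(grid, r, c)]
    empties = [(r, c) for r in range(SIZE) for c in range(SIZE) if grid[r][c] is None]
    for r, c in movers:
        er, ec = empties.pop(random.randrange(len(empties)))
        grid[er][ec], grid[r][c] = grid[r][c], None
        empties.append((r, c))
    return len(movers)

def similarity(grid):
    # average share of same-type neighbors: a crude segregation index
    scores = []
    for r in range(SIZE):
        for c in range(SIZE):
            if grid[r][c] is None:
                continue
            occupied = [n for n in neighbors(grid, r, c) if n is not None]
            if occupied:
                scores.append(sum(n == grid[r][c] for n in occupied) / len(occupied))
    return sum(scores) / len(scores)

grid = make_grid()
print(f"before: {similarity(grid):.0%} of neighbors match, on average")
for _ in range(200):               # cap iterations; the model usually settles quickly
    if step(grid) == 0:
        break
print(f"after:  {similarity(grid):.0%} of neighbors match, on average")
```

On typical runs the average share of same-type neighbors climbs from about half to well over 70 percent, even though every agent would have been perfectly content in an integrated neighborhood.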

That, in essence, is what appears to have happened in today’s media environment with respect to disinformation.

The Power Of Local Majorities

A big part of our everyday experience is seen through the prism of the people who surround us. Our social circles have a major influence on what we perceive and how we think. In fact, a series of famous experiments done at Swarthmore College in the 1950s showed that we will conform to the opinions of those around us even if they are obviously wrong.

It isn’t particularly surprising that those closest to us influence our thinking, but more recent research has found that the effect extends to three degrees of social distance. So it is not only those we know well: even the friends of our friends’ friends have a deep and pervasive effect on how we think and behave.

This effect is then multiplied by our tendency to be tribal, even when the source of division is arbitrary. For example, in a study where young children were randomly assigned to a red or a blue group, they preferred pictures of other kids who wore t-shirts of their own group’s color. In another study, in which adults were randomly assigned to “leopards” and “tigers,” fMRI scans showed hostility toward out-group members regardless of their race.

The simple truth is that majorities don’t just rule, they also influence, especially local majorities. Combine that with the mathematical and psychological forces that lead us to separate ourselves from each other and we end up living in a series of social islands rather than the large, integrated society we often like to imagine.

Filter Bubbles And Echo Chambers

Clearly, the way we self-sort into homophilic, homogeneous groups will shape how we perceive what we see and hear, but it will also affect how we access information. Recently, a team of researchers at MIT looked into how we share information—and misinformation—with those around us. What they found was troubling.

When we’re surrounded by people who think like us, we share information more freely because we don’t expect to be rebuked. We’re also less likely to check our facts, because we know that those we are sharing the item with will be less likely to inspect it themselves. So when we’re in a filter bubble, we not only share more, we’re also more likely to share things that are not true. Greater polarization leads to greater misinformation.

Let’s combine this insight with the profit incentives of social media companies. Obviously, they want their platforms to be more engaging than their competition. So naturally, they want people to share as much as possible and the best way to do that is to separate people into groups that think alike, which will increase the amount of disinformation produced.
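
Here is a minimal sketch of that dynamic in Python (my own toy model with invented parameters, not the MIT team’s methodology): the more of an agent’s audience that already agrees with it, the more freely it shares and the less it fact-checks, so sorting people into like-minded groups raises both the volume of sharing and the share of falsehoods.

```python
import random

def run(like_minded, agents=10_000, items=10, false_rate=0.5):
    """Simulate sharing when `like_minded` is the fraction of an agent's
    audience that agrees with it. Returns (shares per agent, fraction of
    shares that are false). All parameters here are invented."""
    false_shared = total_shared = 0
    for _ in range(agents):
        p_share = 0.2 + 0.6 * like_minded   # freer sharing among the like-minded
        p_check = 0.5 * (1 - like_minded)   # and less fact-checking
        for _ in range(items):
            is_false = random.random() < false_rate
            if random.random() >= p_share:
                continue                     # not shared at all
            if is_false and random.random() < p_check:
                continue                     # caught by a fact-check
            total_shared += 1
            false_shared += is_false
    return total_shared / agents, false_shared / total_shared

for label, h in [("mixed feeds  (30% like-minded)", 0.3),
                 ("sorted feeds (90% like-minded)", 0.9)]:
    shares, false_frac = run(h)
    print(f"{label}: {shares:.1f} shares per agent, {false_frac:.0%} false")
```

No agent in the model wants to spread falsehoods; sharing more freely and checking less is enough to shift the mix toward misinformation.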

Notice that none of this requires any malicious intent. The people in Schelling’s segregation model actually wanted to live in an integrated neighborhood. In much the same way, the subjects in the fMRI studies showed hostility to members of other groups regardless of race. Social media companies don’t necessarily want to promote untruths; they merely need to tune their algorithms for maximum engagement and the same effect is produced.

Nevertheless, we have blundered into a situation in which we increasingly see—and believe—things that aren’t true. We have created a global village at war with itself.

Rebuilding Trust

At its core, the solution to the problem of disinformation has less to do with information than with trust. Living in a connected world demands that we transcend our own context and invite in the perspectives and experiences of others. That is what McLuhan meant when he argued that electronic media would create a global village.

Inevitably, we don’t like much of what we see. When we are confronted with the strange and unusual we must decide whether to assimilate and adopt the views of others, or to assert the primacy of our own. The desire for recognition can result in clashes and confrontation, which lead us to seek out those who look, think and act in ways that reinforce our sense of self. We build echo chambers that deny external reality to satisfy these tribal instincts.

Yet as Francis Fukuyama pointed out in Identity, there is another option. We can seek to create a larger sense of self by building communities rooted in shared values. When viewed through the prism of a common undertaking rather than tribe, diverse perspectives can be integrated and contribute to a common cause.

What’s missing in our public discourse today isn’t more or better information. We already have far more access to knowledge than at any time in human history. What we lack is a shared sense of mission and purpose. We need a shared endeavor to which we can contribute the best of our energies and for which we can welcome the contributions of others.

Without shared purpose, we are left only with identity, solipsism and the myth-making we require to make ourselves feel worthwhile.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Unsplash


Building Competence Often More Important Than a Vision


GUEST POST from Greg Satell

In 1993, when asked about his vision for the failing company he was chosen to lead, Lou Gerstner famously said, “The last thing IBM needs right now is a vision.” What he meant was that if IBM couldn’t figure out how to improve operations to the point where it could start making money again, no vision would matter.

Plenty of people have visions. Elizabeth Holmes had one for Theranos, but its product was a fraud and the company failed. Many still believe in Uber’s vision of “gig economy” taxis, but even after more than 10 years and $25 billion invested, it still loses billions. WeWork’s proven business model became a failure when warped by a vision.

The truth is that anyone can have a vision. Look at any successful organization, distill its approach down to a vision statement and you will easily be able to find an equal or greater success that does things very differently. There is no silver bullet. Successful leaders are not the ones with the most compelling vision, but those who build the skills to make it a reality.

Gandhi’s “Himalayan Miscalculation”

When Mahatma Gandhi returned to India in 1915, after more than two decades spent fighting for Indian rights in South Africa, he had a vision for the future of his country. His view, which he laid out in his book Hind Swaraj, was that the British were only able to rule because of Indian cooperation. If that cooperation were withheld, the British Raj would fall.

In 1919, when the British passed the repressive Rowlatt Acts, which gave the police the power to arrest anyone for any reason whatsoever, he saw an opportunity to make his vision a reality. He called for a nationwide campaign of civil disobedience, called a hartal, in which Indians would refuse to work or do business.

At first, it was a huge success and the country came to a standstill. But soon things spun wildly out of control and eventually led to the massacre at Amritsar, in which British soldiers left hundreds dead and more than a thousand wounded. He would later call the series of events his Himalayan Miscalculation and vowed never to repeat his mistake.

What Gandhi realized was that his vision was worthless without people trained in his Satyagraha philosophy and capable of implementing his methods. He began focusing his efforts on indoctrinating his followers and, a decade later, set out on the Salt March with only about 70 of his most disciplined disciples.

This time, he triumphed in what is remembered as his greatest victory. In the end, it wasn’t Gandhi’s vision, but what he learned along the way that made him a historic icon.

The Real Magic Behind Amazon’s 6-Page Memo

We tend to fetishize the habits of successful people. We probe for anomalies and, when we find something out of the ordinary, we not only praise it for its originality, but consider it to be the source of success. There is no better example of this delusion than Jeff Bezos’s insistence on using six-page memos rather than PowerPoint in meetings at Amazon.

There are two parts to this myth. First is the aversion to PowerPoint, which most corporate professionals use, but few use well. Second is the novelty of a memo, structured in a particular way, serving as the basis for a meeting. Put them together and you have a unique ritual which, given Amazon’s incredible success, has taken on legendary status.

But delve a little deeper and you find it’s not the memos themselves, but Amazon’s writing culture that makes the difference. When you look at the company, which thrives in such a variety of industries, there is a dizzying array of skills that need to be integrated to make it work smoothly. That doesn’t just happen by itself.

What Jeff Bezos has done is put an emphasis on communication skills in general, and writing in particular. Amazon executives, from the time they are hired, learn that the best way to get ahead in the company is to learn how to write with clarity and power. They hone that skill over the course of their careers and, if they are to succeed, must learn to excel at it.

Anyone can ban PowerPoint and mandate memos. Building top-notch communication skills across a massive enterprise, on the other hand, is not so easy.

The Real Genius Of Elon Musk

In 2007, an ambitious entrepreneur launched a new company with a compelling vision. Determined to drive the shift from fossil fuels to renewables, he would create an enterprise to bring electric cars to the masses. A master salesman, he was able to raise hundreds of millions of dollars as well as the endorsement of celebrities and famous politicians.

Yet the entrepreneur wasn’t Elon Musk and the company wasn’t Tesla. The young man’s name was Shai Agassi and his company, Better Place, failed miserably within a few years. Despite all of the glitz and glamour he was able to generate, the basic fact was that Agassi knew nothing about building cars or the economics of lithium-ion batteries.

Musk, on the other hand, did the opposite. He did not attempt to build a car for the masses, but rather for Silicon Valley millionaires who didn’t need to rely on a Tesla to bring the kids to soccer practice, but could use it to zoom around and show off to their friends. That gave Musk the opportunity to learn how to manufacture cars efficiently and effectively. In other words, to build competency.

When we have a big vision, we tend to want to search out the largest addressable market. Unfortunately, that is where you’ll find stiff competition and customers who are already fairly well-served. That’s why it’s almost always better to identify a hair-on-fire use case—something that a small subset of customers want or need so badly they almost literally have their hair on fire—and scale up from there.

As Steve Blank likes to put it, “no business plan survives first contact with a customer.” Every vision is wrong. Some are off by a little and some are off by a lot. But they’re all wrong in some way. The key to executing on a vision is to identify vulnerabilities early on and then build the competencies to overcome them.

Why So Many Visions Become Delusions

When you look at the truly colossal business failures of the last 20 years, going back to Enron and LTCM at the beginning of the century to the “unicorns” of today, a common theme is the inability to make basic distinctions between visions and delusions. Delusions, like myths, always contain some kernel of truth, but dissipate when confronted with real world problems.

Also underlying these delusions is a mistrust of experts and the establishment. After all, if a fledgling venture has the right idea then, almost by definition, the establishment must have the wrong idea. As Sam Arbesman pointed out in The Half Life of Facts, what we know to be true changes all the time.

Yet that’s why we need experts. Not to give us answers, but to help us ask better questions. That’s how we can find flaws in our ideas and learn to ask better questions ourselves. Unfortunately, recent evidence suggests that “founder culture” in Silicon Valley has gotten so out of hand that investors no longer ask hard questions for fear of getting cut out of deals.

The time has come for us to retrench, much as Gerstner did a generation ago, and recommit ourselves to competence. Of course, every enterprise needs a vision, but a vision is meaningless without the ability to achieve it. That takes more than fancy talk; it requires the guts to see the world as it really is and still have the courage to try to change it.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pexels


AI Requires Conversational Intelligence


GUEST POST from Greg Satell

Historically, building technology had been about capabilities and features. Engineers and product designers would come up with new things that they thought people wanted, figure out how to make them work and ship “new and improved” products. The result was often things that were maddeningly difficult to use.

That began to change when Don Norman published his classic, The Design of Everyday Things and introduced concepts like dominant design, affordances and natural mapping into industrial design. The book is largely seen as pioneering the user-centered design movement. Today, UX has become a thriving field.

Yet artificial intelligence poses new challenges. We speak or type into an interface and expect machines to respond appropriately. Often they do not. With the popularity of smart speakers like Amazon Alexa and Google Home, we have a dire need for clear principles for human-AI interactions. A few years ago, two researchers at IBM embarked on a journey to do just that.

The Science Of Conversations

Bob Moore first came across conversation analysis as an undergraduate in the late 1980s, became intensely interested and later earned a PhD based on his work in the field. The central problems are well known to anybody who has ever watched Seinfeld or Curb Your Enthusiasm: our conversations are riddled with complex, unwritten rules that aren’t always obvious.

For example, every conversation has an unstated goal, whether it is just to pass the time, to exchange information or to inspire an emotion. Yet our conversations are also shaped by context. The unwritten rules would be different for a conversation between a pair of friends, between a boss and a subordinate, in a courtroom or in a doctor’s office.

“What conversation analysis basically tries to reveal are the unwritten rules people follow, bend and break when engaging in conversations,” Moore told me. He soon found that the tech industry was beginning to ask similar questions, so he took a position at Xerox PARC and then Yahoo! before landing at IBM in 2012.

As the company was working to integrate its Watson system with applications from other industries, he began to work with Raphael Arar, an award-winning visual designer and user experience expert. The two began to see that their interests were strangely intertwined and formed a partnership to design better conversations for machines.

Establishing The Rules Of Engagement

Typically, we use natural language interfaces, both voice and text, like a search box. We announce our intention to seek information by saying, “Hey Siri” or “Hey Alexa,” followed by a simple query, like “Where is the nearest Starbucks?” This can be useful, especially when driving or walking down the street, but it is also fairly limited, especially for more complex tasks.

What’s far more interesting — and potentially far more useful — is being able to use natural language interfaces in conjunction with other interfaces, like a screen. That’s where the marriage of conversational analysis and user experience becomes important, because it will help us build conventions for more complex human-computer interactions.

“We wanted to come up with a clear set of principles for how the various aspects of the interface would relate to each other,” Arar told me. “What happens in the conversation when someone clicks on a button to initiate an action?” What makes this so complex is that different conversations will necessarily have different contexts.

For example, when we search for a restaurant on our phone, should the screen bring up a map, information about pricing, pictures of food, user ratings or some combination? How should the rules change when we are looking for a doctor, a plumber or a travel destination?

Deriving Meaning Through Preserving Context

Another aspect of conversations is that they are highly dependent on context, which can shift and evolve over time. For example, if we ask someone for a restaurant nearby, it would be natural for them to ask a question to narrow down the options, such as “what kind of food are you looking for?” If we answer, “Mexican,” we would expect that person to know we are still interested in restaurants, not, say, the Mexican economy or culture.

Another issue is that when we follow a particular logical chain, we often find some disqualifying factor. For instance, a doctor might be looking for a clinical trial for her patient, find one that looks promising but then see that that particular study is closed. Typically, she would have to retrace her steps to go back to find other options.
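
As a minimal sketch of what preserving context might look like (purely illustrative; this is not IBM’s implementation, and the class and slot names are invented), imagine a stack of query “frames”: each refinement pushes a new state, so a one-word answer like “Mexican” is read against the current frame, and a dead end pops back to the previous frame instead of forcing the user to start over.

```python
# Illustrative only: a tiny conversational "frame stack" for preserving
# context across turns. The class and slot names here are invented.

class Conversation:
    def __init__(self):
        self.frames = []                  # stack of query states

    def refine(self, **slots):
        # a new turn inherits everything from the current frame
        frame = dict(self.frames[-1]) if self.frames else {}
        frame.update(slots)
        self.frames.append(frame)
        return frame

    def backtrack(self):
        # a dead end returns us to the previous state, context intact
        if self.frames:
            self.frames.pop()
        return self.frames[-1] if self.frames else {}

convo = Conversation()
convo.refine(intent="find_restaurant", near="user_location")
# The user answers a follow-up question with a single word...
print(convo.refine(cuisine="Mexican"))
# {'intent': 'find_restaurant', 'near': 'user_location', 'cuisine': 'Mexican'}
# ...and when that option turns out to be a dead end, the context survives:
print(convo.backtrack())
# {'intent': 'find_restaurant', 'near': 'user_location'}
```

Real systems would layer language understanding on top of this, but the core idea, interpreting each turn against accumulated state rather than in isolation, is the same.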

“A true conversational interface allows us to preserve context across the multiple turns in the interaction,” Moore says. “If we’re successful, the machine will be able to adapt to the user’s level of competence, serving the expert efficiently but also walking the novice through the system, explaining itself as needed.”

And that’s the true potential of more natural conversations with computers. Much like working with humans, the better we are able to communicate, the more value we can get out of our relationships.

Making The Interface Disappear

In the early days of web usability, there was a constant tension between user experience and design. Media designers were striving to be original. User experience engineers, on the other hand, were trying to build conventions. Putting a search box in the upper right hand corner of a web page might not be creative, but that’s where users look to find it.

Yet eventually a productive partnership formed and today most websites seem fairly intuitive. We mostly know where things are supposed to be and can navigate things easily. The challenge now is to build that same type of experience for artificial intelligence, so that our relationships with the technology become more natural and more useful.

“Much like we started to do with user experience for conventional websites two decades ago, we want the user interface to disappear,” Arar says. Because when we aren’t wrestling with the interface and constantly having to repeat ourselves or figuring out how to rephrase our questions, we can make our interactions much more efficient and productive.

As Moore put it to me, “Much of the value of systems today is locked in the data and, as we add exabytes to that every year, the potential is truly enormous. However, our ability to derive value from that data is limited by the effectiveness of the user interface. The more we can make the interface become intelligent and largely disappear, the more value we will be able to unlock.”

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Three Reasons Nobody Cares About Your Ideas


GUEST POST from Greg Satell

“Build a better mousetrap and the world will beat a path to your door,” Ralph Waldo Emerson is said to have written (he didn’t) and since that time thousands of mousetraps have been patented. Still, despite all that creative energy and all those ideas, the original “snap trap,” invented by William Hooker in 1894, remains the most popular.

We’ve come to glorify ideas, thinking that more of them will lead to better results. This cult of ideas has led to a large cottage industry of consultants that offer workshops to exercise our creative capabilities with tools like brainstorming and SWOT analysis. We are, to a large extent, still chasing better mousetraps.

Still, one thing I constantly hear from executives I work with is that no one wants to hear about their ideas. The truth is that, just like all those mousetrap patents, most ideas are useless, very few are original and many have been tried before. So if you’re frustrated that nobody listens to your ideas, here’s why that happens and what you can do to fix it.

1. Your Ideas Aren’t Original

Having a new idea is thrilling, because it takes us to new places. Once we get an idea, it leads to other ideas and, as we follow the logical chain, we can see important real-world implications. The process of connecting the dots is so exhilarating — and so personal — that it seems unlikely, impossible even, that someone else had the same thoughts at the same time.

Yet history clearly shows that’s exactly what happens. Newton and Leibniz simultaneously invented calculus. Darwin and Wallace discovered the principles of evolution at about the same time. Alexander Graham Bell just narrowly beat Elisha Gray to the patent office to receive credit for inventing the telephone. Einstein beat David Hilbert to general relativity by a matter of weeks.

In fact, in a landmark study published in 1922, sociologists William Ogburn and Dorothy Thomas identified 148 major inventions or discoveries that at least two different people, working independently, arrived at around the same time. And those are historic, well-documented successes. Just imagine how often it happens with normal, everyday ideas.

The truth is that ideas don’t simply arise out of some mysterious ether. We get them by making connections between existing ideas and new things we observe ourselves. So it shouldn’t be surprising that others have seen similar things and drawn the same conclusions that we have.

2. Others Had The Same Idea — And Failed

Jim Allison spent most of his life as a fairly ordinary bench scientist and that’s all he really wanted to be. He told me once that he “just liked figuring things out” and by doing so, he gained some level of prominence in the field of immunology, making discoveries that were primarily of interest to other immunologists.

His path diverged when he began to research the ability of our immune system to fight cancer. Using a novel approach, he was able to show amazing results in mice. “The tumors just melted away,” he told me. Excited, he ran to go tell pharmaceutical companies about his idea and get them to invest in his research.

Unfortunately, they were not impressed. The problem wasn’t that they didn’t understand Jim’s idea, but that they had already invested — and squandered — billions of dollars on similar ideas. Hundreds of trials had been undertaken on immunological approaches to cancer and there hadn’t been one real success.

Nonetheless, Jim persevered and cancer immunotherapy has since emerged as a major field of its own. Today, hundreds, if not thousands, of scientists are combining their ideas with Jim’s to create amazing breakthroughs in cancer treatment, and tens of thousands of people are alive because of it.

3. You Can’t Make An Idea Work By Yourself

One of the most famous stories about innovation is that of Alexander Fleming. Returning to his lab after a summer vacation, he found that a mysterious mold had contaminated his petri dishes and was eradicating the bacteria colonies he was working to grow. He decided to study the mold and discovered penicillin.

It’s one of those stories that’s told and retold because it encapsulates so much of what we love about innovation — the power of a single “Eureka! moment” to change the world. The problem is that innovation never really happens that way, not generally and certainly not in the case of penicillin.

The real story is decidedly different. When Alexander Fleming published his findings, no one really noticed because it had little, if any, medical value. It was just a secretion from a mold that could kill bacteria in a petri dish. The compound was unstable and you couldn’t store it. It couldn’t be injected or ingested. You also couldn’t make enough of it to cure anyone.

Ten years later, a completely different team of scientists led by Howard Florey and Ernst Chain rediscovered Fleming’s work and began adding their own ideas. Then they traveled to America to work with US labs and improved the process. Finally, pharmaceutical companies worked feverishly to mass produce penicillin.

So it wasn’t just a single person or a single “Eureka! moment,” but a number of different teams of people, working on different aspects of the problem and it took nearly 20 years to make penicillin the miracle cure we know today.

The Fundamental Difference Between Ideation and Creation

While most ideas lead to nothing, some create enormous value. Calculus, the theory of evolution and the telephone made our lives better no matter who came up with them first. That’s not because of the idea itself, but what was built on top of it. Ideas only create a better future when they mix with other ideas. Innovation, to a large degree, is combination.

The stories of Alexander Fleming and Jim Allison are instructive. In Fleming’s case it was scientists at another lab that picked up the initial idea and did the work to make it into a useful cure. Then they went to America to work with other labs and, eventually, pharmaceutical companies to do the work needed to go from milliliters in the lab to metric tons in the real world.

One thing that struck me in talking to Jim Allison was how he described having the idea for cancer immunotherapy. He didn’t talk about a flash of brilliance, but said he slowly began to piece things together, combining the work of others with what he saw in his own lab. His breakthrough discovery was the culmination of a life’s work.

That was in 1995. It then took him three more years to find the small biotech company to back his idea. Clinical trials didn’t begin until 2004. FDA approval came through in 2011. Today, 20 years after the initial idea, he still goes to the lab every day, to combine his ideas with others and enhance the initial concept.

Kevin Ashton, who helped pioneer RFID technology, wrote in his book, How to Fly a Horse, “Creation is a long journey, where most turns are wrong and most ends are dead. The most important thing creators do is work. The most important thing they don’t do is quit.”

A good idea is not a mere moment of epiphany, but a call to action. It proves its value not by its elegance or through the brilliance of its conception, but in its ability to solve problems in the real world. So if you want people to start listening to your ideas, focus less on the fact that you have them and more on what value they can deliver to others.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pexels


You Are Probably Not Prepared to Innovate


GUEST POST from Greg Satell

Becoming a successful executive is a fairly linear path. You start at the bottom and learn to solve basic problems in your field or industry. As you gain experience and improve your skills you are given more responsibility, begin to manage teams and work diligently to set up the practices and processes to help your team succeed.

The best executives make those around them better by fostering a positive work environment, minimizing drama and providing the strategy and direction that will enable the team to meet its objectives. That’s how you deliver consistent results and continue to rise through the ranks to the top of your profession.

At some point, however, you need to do more than just plan and execute strategy; you have to innovate. Every business model is disrupted eventually. Changes in technology, the competitive landscape and customer needs make that inevitable and, unfortunately, executive experience doesn’t equip you for it. Here are four things that will help you make the shift from operations to innovation.

1. Learn How To Be The Dumbest Guy In The Room

Good executives are often the smartest guys in the room. Through years of experience solving tough problems, they learn to be masters of their craft and are able to mentor those around them. A great operational manager is a great coach, guiding others around them to achieve more than they thought they could.

Unfortunately, innovation isn’t about what you know, but what you don’t. It requires you to explore, push boundaries and venture into uncharted areas in which there often are no true experts. You’re basically flying blind, which can be incredibly uncomfortable, especially to those who have had a strong track record of success in a structured environment.

That’s why the first step to making the shift from operations to innovation is to learn how to become the dumbest guy in the room instead of the smartest. Admit to yourself that you don’t know what you need to succeed and begin to explore. Actively seek out those who know and understand things that you don’t.

Being the smartest guy in the room helps you operate smoothly, but being the dumbest guy in the room helps you learn. The best way to start is by seeking out new rooms to spend time in.

2. Create A Bias For Action

Operations thrive on predictability. People need to know what to expect and what’s expected of them so that things can run smoothly. Every great operation needs to coordinate activities between a diverse set of stakeholders, including team members, partners and customers. That level of interoperability doesn’t just happen by itself.

Over the years, a variety of methods, such as Total Quality Management (TQM) and Six Sigma, have arisen that use rigorous statistical methods to optimize for established metrics. The idea is to hone processes continuously in order to elevate them to paragons of efficiency.

When you seek to innovate, however, established metrics are often of little use, because you are trying to do something new and change the basis of competition. Again, you are venturing into the unknown, doing things you and your organization have not developed the knowledge and skills to do well. Instead of seeking excellence, you need to dare to be crap.

The key to making this work is not to abandon all sense of restraint and accountability, but to manage risk by reducing scale. In an operational setting you always want to look for the largest addressable market you can find, but when you are trying to do something truly new, you need to find a hair-on-fire use case — a customer who needs a problem solved so badly that they are willing to work through the inevitable glitches and snafus with you.

3. Solve The Monkey First

Every good operational project has a roadmap, whether that is an ordinary budget, a project plan or a defined strategy. The early stages of a plan are usually the easiest. You want to get everybody on board, build momentum and then begin to tackle tougher problems. When you are trying to do something new and different, however, you often want to do exactly the opposite.

Every significant innovation involves something that’s never been done before, so you can’t be sure how long it will take or even if the core objectives can be achieved at all. So it’s best to get started working on the toughest problems early, because until you resolve those unknowns, the whole project is unworkable.

At Google’s X division, the company’s “moonshot factory,” the mantra is #MonkeyFirst. The idea is that if you want to get a monkey to recite Shakespeare on a pedestal, you’d better start by training the monkey, not building the pedestal, because training the monkey is the hard part. Anyone can build a pedestal.

Operational executives like to build pedestals so that they can show early progress against a timeline. Unfortunately, when you are striking out into the unknown, building a pedestal gets you nowhere. Unless you can actually train the monkey, working on the pedestal is wasted effort. You have to learn how to train monkeys.

4. Move from Metrics To Mission

Good operational executives sweat the numbers. They work within existing frameworks and hone operations to improve performance against established metrics. Yet when you are trying to do something truly new, established metrics often tell you little. The goal isn’t to play the game better, but to change it entirely.

In fact, established businesses often get disrupted precisely because they are focusing on outdated metrics. For example, when digital cameras first came out, they performed poorly by traditional standards of quality. They did, however, perform much better in terms of convenience and, as the quality of the pictures improved, replaced the earlier technology.

In a similar vein, while traditional brokerages focused on service, Charles Schwab offered minimal service at a far lower price. At first, it didn’t seem like a threat to incumbents, but as technology improved, it was able to improve service and keep the low flat fees. The model ended up transforming the industry.

So it’s important not to get blinded by metrics, but to focus on your mission. True innovation never happens in a straight line or proceeds at a measured pace. That’s why there is a basic tradeoff between innovation and optimization, and very few people can do both. The best executives, however, learn how to bridge that gap.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pexels


Five Things Most Managers Don’t Know About Innovation


GUEST POST from Greg Satell

Every business knows it needs to innovate. What isn’t so clear is how to go about it. There is no shortage of pundits, blogs and conferences that preach the gospel of agility, disruptive innovation, open innovation, lean startups or whatever else is currently in vogue. It can all be overwhelming.

The reality is that there is no one ‘true’ path to innovation. In researching my book, Mapping Innovation, I found that organizations of all shapes and sizes can be great innovators. Some are lean and nimble, while others are large and bureaucratic. Some have visionary leaders, others don’t. No one model prevails.

However, there are common principles that we can apply. While there is no “right way” to innovate, there are plenty of wrong ways. So perhaps the best way forward is to avoid the pitfalls that can undermine innovative efforts in your organization and kill promising new solutions. Here are five things every business should know about innovation.

1. Every Square-Peg Business Eventually Meets Its Round-Hole World

IBM is many people’s definition of a dinosaur. Not too long ago, it announced its 22nd consecutive quarter of declining revenues. Nevertheless, it seems to be turning a corner. What’s going on? How can a century-old technology company survive the onslaught of 21st-century phenoms like Google, Amazon, Apple and Facebook?

The truth is that this is nothing new for IBM. Today, its business of providing installed solutions for large enterprises is collapsing due to the rise of the cloud. In the 90s, it was near bankruptcy. In the 50s, its tabulating machine business was surpassed by digital technology. Yet each time eulogies are paraded around for Big Blue, it seems to come back even stronger.

What IBM seems to understand better than just about anybody else is that every square-peg business eventually meets its round-hole world. Changes in technology, customer preferences and competitive environment eventually render every business model irrelevant. That’s just reality and there really is no changing it.

IBM’s secret weapon is its research division, which explores pathbreaking technologies long before they have a clear path to profitability. So when one business dies, it has something to replace it with. Despite those 22 quarters of declining revenues, it has a bright future with things like Watson, quantum computing and neuromorphic chips.

It’s better to prepare than adapt.

2. Innovation Isn’t About Ideas, It’s About Solving Problems

Probably the biggest misconception about innovation is that it’s about ideas. So there is tons of useless advice about brainstorming methods, standing meetings and word games, such as replacing “can’t” with “can if.” If these things help you work more productively, great, but they will not make you an innovator.

In my work, I speak to top executives, amazingly successful entrepreneurs and world class scientists. Some of these have discovered or created things that truly changed the world. Yet not once did anyone tell me that a brainstorming session or “productivity hack” set them on the road to success. They were simply trying to solve a problem that was meaningful to them.

What I do hear a lot from mid-level and junior executives is that they are not given “permission” to innovate and that nobody wants to hear about their ideas. That’s right. Nobody wants to hear about your ideas. People are busy with their own ideas.

So stop trying to come up with some earth shattering idea. Go out and find a good problem and start figuring out how to solve it. Nobody needs an idea, but everybody has a problem they need solved.

3. You Don’t Hire Or Buy Innovation, You Empower It

One of the questions I always get asked when I advise organizations is how to recruit and retain more innovative people. I know the type they have in mind: someone fashionably dressed, probably with some tasteful piercings and some well-placed ink, who spouts off a never-ending stream of ideas.

Yet that's exactly what you don't want. That's exactly the type of unproductive hotshot who can stop innovation in its tracks. They talk over other people, which discourages new ideas from being voiced, and their constant interruptions kill collaboration.

The way you create innovation is by empowering an innovative culture. That means creating a safe space for ideas, fostering networks inside and outside the organization, promoting collaboration and instilling a passion for solving problems. That’s how you promote creativity.

So if you feel that your people are not innovating, ask yourself what you’re doing to get in their way.

4. If Something Is Truly New And Different, You Need a “Hair On Fire” Use Case

As a general operational rule, you should seek out the largest addressable market you can find. Larger markets not only have more money, they are more stable and usually more diverse. Identifying even a small niche in a big market can make for a very profitable business.

Unfortunately, what thrives in operations can often fail in innovation. When you have an idea that's truly new and different, you don't want to start with a large addressable market. You want to find a hair-on-fire use case: somebody who needs a problem solved so badly that they either already have a budget for it or have scotch-taped together some half solution.

The reason you want to find a hair-on-fire use case is that when something is truly new and different, it is untested and poorly understood. But someone who needs a problem solved really badly will be willing to work with you to find flaws, fix them and improve your offer. From there you can begin to scale up and hunt larger game.

5. You Need To Seek Out A Grand Challenge

Most of the problems we deal with are relatively small. We cater to changing customer tastes, respond to competitive threats and fix things that are broken. Sometimes we go a bit further afield and enter a new market or develop a new capability. These are the bread and butter of a good business. That’s how you win in the marketplace.

Yet every business is ultimately disrupted. When that happens, normal operating practice will only make you better and better at things people care less and less about. You can’t build the future by looking to the past. You build the future by creating something that’s new and important, that solves problems that are currently unsolvable.

That's why every organization needs to seek out grand challenges. These are long, sustained efforts to solve a fundamental problem in your industry or field, efforts that change the realm of what's considered possible. They are not "bet the company" initiatives and shouldn't present a material risk to the business if they fail, but they should have a transformational impact if they succeed.

As I noted above, there is no one "true" path to innovation. Everybody needs to find their own way. Still, there are common principles, and by applying them every business can up its innovation game.

— Article courtesy of the Digital Tonto blog and previously appeared on Harvard Business Review
— Image credits: Pexels

How To Avoid AI Project Failures

How To Avoid AI Project Failures

GUEST POST from Greg Satell

A Deloitte survey of "aggressive adopters" of cognitive technologies a few years ago found that 76% believe these technologies will "substantially transform" their companies within the next three years. There probably hasn't been this much excitement about a new technology since the dotcom boom of the late 1990s.

The possibilities would seem to justify the hype. AI isn't just one technology, but a wide array of tools, including a number of different algorithmic approaches, an abundance of new data sources and advances in hardware. In the future, we will see new computing architectures, like quantum computing and neuromorphic chips, propel capabilities even further.

Still, there remains a large gap between aspiration and reality. Gartner estimated that 85% of big data projects fail. There have also been embarrassing snafus, such as when Dow Jones reported that Google was buying Apple for $9 billion and the bots fell for it, or when Microsoft's Tay chatbot went berserk on Twitter. Here's how to transform the potential of AI into real results.

Make Your Purpose Clear

AI does not exist in a vacuum, but in the context of your business model, processes and culture. Just as you wouldn’t hire a human employee without an understanding of how he or she would fit into your organization, you need to think clearly about how an artificial intelligence application will drive actual business results.

“The first question you have to ask is what business outcome you are trying to drive,” Roman Stanek, CEO at GoodData, told me. “All too often, projects start by trying to implement a particular technical approach and not surprisingly, front-line managers and employees don’t find it useful. There’s no real adoption and no ROI.”

While change always has to be driven from the top, implementation is always driven lower down. So it’s important to communicate a sense of purpose clearly. If front-line managers and employees believe that artificial intelligence will help them do their jobs better, they will be much more enthusiastic and effective in making the project successful.

“Those who are able to focus on business outcomes are finding that AI is driving bottom-line results at a rate few had anticipated,” Josh Sutton, CEO of Agorai.ai, told me. He pointed to a McKinsey study from a few years ago that pegs the potential economic value of cognitive tools at between $3.5 trillion and $5.8 trillion as just one indication of the possible impact.

Choose The Tasks You Automate Wisely

While many worry that cognitive technologies will take human jobs, David Autor, an economist at MIT, sees the primary shift as one between routine and nonroutine work. In other words, artificial intelligence is quickly automating routine cognitive processes, much as industrial-era machines automated physical labor.

To understand how this can work, just go to an Apple store. Apple is a company that clearly understands how to automate processes, yet the first thing you see when you walk in is a number of employees waiting to help you. That's because it has chosen to automate background tasks, not customer interactions.

However, AI can greatly expand the effectiveness of human employees. For example, one study cited by a White House report during the Obama Administration found that while machines had a 7.5% error rate in reading radiology images and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.
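To see why the combination can beat either alone, it helps to run a back-of-the-envelope calculation. The sketch below is purely illustrative and assumes, hypothetically, that human and machine errors are independent and that a case only slips through when both err; the study's actual protocol was surely more involved.

```python
# Back-of-the-envelope illustration, NOT the study's methodology:
# if human and machine errors were independent, a case would only be
# missed when both err, so the combined rate is the product of the two.
machine_error = 0.075  # 7.5% machine error rate, per the cited study
human_error = 0.035    # 3.5% human error rate, per the cited study

combined_error = machine_error * human_error  # independence is an assumption
print(f"Naive combined error rate: {combined_error:.2%}")  # ~0.26%
```

The naive figure of roughly 0.26% lands in the same ballpark as the 0.5% the study reported, which suggests the human-machine pairing captured much, though not all, of that theoretical complementarity.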

Perhaps most importantly, this approach can actually improve morale. Factory workers actively collaborate with robots they program themselves to do low-level tasks. In some cases, soldiers build such strong ties with robots that do dangerous jobs that they hold funerals for them when they “die.”

Data Is Not Just An Asset, It Can Also Be A Liability

For a long time, more data was considered better. Firms would scoop up as much of it as they could and then feed it into sophisticated algorithms to create predictive models with a high degree of accuracy. Yet it has become clear that's not a great approach.

As Cathy O'Neil explains in Weapons of Math Destruction, we often don't understand the data we feed into our systems, and data bias is becoming a massive problem. A related problem is over-fitting. It may sound impressive to have a model that is 99% accurate, but if it is not robust to changing conditions, you might be better off with one that is simpler and 70% accurate.
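To make the over-fitting trade-off concrete, here is a minimal sketch in Python using scikit-learn. Everything in it, including the data, the degree-15 model and the "changed conditions", is an illustrative assumption, not anything drawn from O'Neil's book:

```python
# A minimal over-fitting sketch: a needlessly complex model can look
# near-perfect on its training data yet collapse when conditions shift.
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)

# Training data: a simple linear trend plus noise, inputs in [0, 1].
X_train = rng.uniform(0, 1, size=(100, 1))
y_train = 2 * X_train.ravel() + rng.normal(0, 0.1, size=100)

# "Changed conditions": the same trend, but inputs now range over [0, 2].
X_shift = rng.uniform(0, 2, size=(100, 1))
y_shift = 2 * X_shift.ravel() + rng.normal(0, 0.1, size=100)

models = {
    "simple (linear)": LinearRegression(),
    "complex (degree-15)": make_pipeline(PolynomialFeatures(15),
                                         LinearRegression()),
}

for name, model in models.items():
    model.fit(X_train, y_train)
    print(f"{name}: train R^2 = {model.score(X_train, y_train):.3f}, "
          f"shifted R^2 = {model.score(X_shift, y_shift):.3f}")
```

Run as written, the complex model typically posts a near-perfect training score but a sharply negative score on the shifted data, while the simple model holds up on both. That, in miniature, is the case for preferring a robust 70%-accurate model over a brittle 99%-accurate one.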

Finally, with the implementation of GDPR in Europe and the likelihood that similar legislation will be adopted elsewhere, data is becoming a liability as well as an asset. So you should think through which data sources you are using and create models that humans can understand and verify. “Black boxes” serve no one.

Shift Humans To Higher Value Tasks

One often overlooked fact about automation is that once you automate a task, it becomes largely commoditized and value shifts somewhere else. So if you are merely looking to use cognitive technologies to replace human labor and cut costs, you are most probably on the wrong track.

One surprising example of this principle comes from the highly technical field of materials science. A year ago, I spoke to Jim Warren of the Materials Genome Initiative about the exciting possibility of applying machine learning algorithms to materials research. More recently, he told me that this approach has increasingly become a central focus of the field.

That's an extraordinary shift in one year. So should we expect to see a lot of materials scientists at the unemployment office? Hardly. In fact, because much of the grunt work of research is being outsourced to algorithms, the scientists themselves are able to collaborate more effectively. As George Crabtree, Director of the Joint Center for Energy Storage Research, which has been a pioneer in automating materials research, put it to me: "We used to advance at the speed of publication. Now we advance at the speed of the next coffee break."

And that is the key to understanding how to implement cognitive technologies effectively. Robots are not taking our jobs, but rather taking over tasks. That means that we will increasingly see a shift in value from cognitive skills to social skills. The future of artificial intelligence, it seems, is all too human.

— Article courtesy of the Digital Tonto blog and previously appeared on Harvard Business Review
— Image credits: Pexels

Only One Type of Innovation Will Win the Future

Only One Type of Innovation Will Win the Future

GUEST POST from Greg Satell

Very few businesses last. While we like to think we live in a particularly disruptive era, this has always been true. Entrepreneurs start businesses because they see opportunity and build skills, practices and processes to leverage it. Yet as the world changes, these strengths often become vulnerabilities.

The problem is that the past is not always a good guide to the future. Business models, even successful ones, are designed for inertia. They are great for leveraging past insights, but are often resistant to change. Success does not, in fact, always breed more success; sometimes it breeds failure.

That's why every business needs to innovate. Yet innovation is not, as some would have us believe, just about moving fast and breaking things. It's about solving the problems you need to solve to create a better future. What most fail to grasp is that a key factor of success is how you source problems, build a pipeline and, ultimately, choose which ones you will work on.

1. Getting Better At What You Already Do

Every year, Apple comes up with a new iPhone. That’s not as exciting as it used to be, but it’s still key to the company maintaining its competitive edge. Every model is a bit faster, more secure and has new features that make it more capable. It’s still an iPhone, but better.

Some self-appointed "innovation gurus" often scoff at this type of innovation as "incremental" and favor new technologies that are more "radical" or "disruptive," but the truth is that this is where you derive the most value from innovation: getting better at what you already do and selling to the customers you already know.

So the first line of defense against irrelevance is to identify ways to improve performance in current practices and processes. The challenge with this type of innovation, of course, is that your competitors will be working on the same problems you are, and it takes no small amount of agility and iteration to stay ahead. Even then, any victory is short-lived.

Still, most technologies can be improved for a long time. Moore's Law, for example, has held for more than 50 years and is only now coming to an end.
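As a quick illustration of the compounding behind that longevity, here is a minimal sketch of the doubling arithmetic Moore's Law describes, assuming the commonly cited two-year doubling period (the starting figure is the Intel 4004's transistor count; the projection itself is illustrative):

```python
# Moore's Law as commonly stated: transistor counts double roughly
# every two years, i.e. count(t) = count(0) * 2 ** (years / 2).
def project_transistors(start_count: float, years: float,
                        doubling_period: float = 2.0) -> float:
    """Project a transistor count forward under exponential doubling."""
    return start_count * 2 ** (years / doubling_period)

# The Intel 4004 (1971) had about 2,300 transistors; project 50 years out.
print(f"{project_transistors(2_300, 50):,.0f}")  # ~77 billion
```

Fifty years of doubling turns a few thousand transistors into tens of billions, roughly the scale of today's largest chips, which is why even "incremental" improvement compounds into enormous value.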

2. Applying What You’re Already Good At To A Different Context

Amazon started out selling books online. It then applied its approach to other categories, such as electronics and toys. That took enormous investments in technology, which it then used to create new businesses, such as Amazon Web Services (AWS), Kindle tablets and its Echo line of smart speakers.

In each case, the company took what it already did well and expanded to an adjacent set of markets or capabilities, often with great success. The Kindle helped the company dominate e-books and strengthened its core business. AWS is far more profitable than online retail and accounts for virtually all of Amazon’s operating income.

Still, adjacent opportunities can be risky. Amazon, despite its huge successes, has had its share of flops too. Whenever you go into a new business you are, to a greater or lesser extent, charting a course into the unknown. So you need to proceed with some caution. When you launch a new business into an adjacency, you are essentially launching a startup, and most startups fail.

3. Finding A Completely New Problem To Solve

Besides getting better at what you already do and applying things you already know to a different market or capability, you can also look for a completely new problem to solve. This is clearly the most uncertain type of opportunity, because no one knows what a good solution will look like.

To return to the Moore's Law example, everybody knows what a 20% performance improvement in computer chips looks like. Metrics for speed and power consumption have long been established, so there is little ambiguity around what would constitute success. Customers will instantly recognize the improvement as having a specific market value.

On the other hand, no one knows what the value of a quantum computer will be. It’s a fundamentally new kind of technology that will solve new types of problems. So customers will have to explore the technology and figure out how to use it to create better products and services.

Despite the uncertainty, though, I found in the research that led to my book, Mapping Innovation, that this type of exploration is probably the closest thing to a sure bet that you're going to find. Every single organization I studied that invested in exploration found that it paid off big, with extremely high returns even after accounting for the inevitable wrong turns and blind alleys.

The 70-20-10 Rule

Go to any innovation conference and you will find no shortage of debates about what type of approach creates the most value, usually ending with no satisfying conclusion. The truth is that every organization needs to improve what they already do, search for opportunities in adjacencies and explore new problems. The key is how you manage resources.

One popular approach is the 70-20-10 rule, which prescribes investing 70% of your innovation resources in improving existing technologies, 20% in adjacent markets and capabilities and 10% in markets and capabilities that don’t exist yet. That’s more of a rule of thumb than a physical law and should be taken with a grain of salt, but it’s a good guide.
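As a rough illustration of how the rule of thumb translates into an actual budget, here is a minimal sketch (the total figure and category labels are hypothetical):

```python
# A minimal sketch of the 70-20-10 rule applied to an innovation budget.
def split_innovation_budget(total: float) -> dict[str, float]:
    """Split a budget per the 70-20-10 rule of thumb."""
    weights = {
        "improving existing technologies": 0.70,
        "adjacent markets and capabilities": 0.20,
        "exploring what doesn't exist yet": 0.10,
    }
    return {area: total * share for area, share in weights.items()}

for area, amount in split_innovation_budget(1_000_000).items():
    print(f"{area}: ${amount:,.0f}")
```

The exact split matters less than the discipline of protecting that last line item, which, as noted below, is the piece most organizations neglect.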

Practically speaking, however, I have found that the exploration piece is the most neglected. All too often, in our over-optimized business environment, any business opportunity that can't be immediately quantified is considered a non-starter. So we don't begin to explore new problems until their market value has been unlocked by someone else. By that point, we are already behind the curve.

Make no mistake. The next big thing always starts out looking like nothing at all. Things that change the world always arrive out of context for the simple reason that the world hasn’t changed yet. But if you do not explore, you will not discover. If you do not discover, you will not invent. And if you do not invent, you will be disrupted. It’s just a matter of time.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay
