Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @DigitalTonto.

A Shared Language for Radical Change

GUEST POST from Greg Satell

One of the toughest things about change is simply to have your idea understood. The status quo always has inertia on its side and never yields its power gracefully. People need a reason to believe in change, but they never need much convincing to allow things to go along as they always have. Inaction is the easiest thing in the world.

This can be incredibly frustrating. It doesn’t matter if you’re a political revolutionary, a social visionary or an entrepreneur, if you have an idea you think can impact the world, you want people to be as excited about it as you are. So you try to describe it in vivid language that highlights how wonderfully different it really is.

The pitfall that many would-be revolutionaries fall into is that they fail to communicate in terms that others are able to accept and internalize. Make no mistake. Nobody needs to understand your idea. If you think your idea is important and want it to spread, then you need to meet people where they are, not where you’d like them to be. That’s how you make change real.

The Importance Of Finding Your Tribe

There’s no question that Pixar is one of the most successful creative enterprises ever. Yet in his memoir, Creativity, Inc., Pixar founder Ed Catmull wrote that “early on, all of our movies suck.” Catmull calls initial ideas “ugly babies,” because they start out, “awkward and unformed, vulnerable and incomplete.” Few can see what those ugly babies can grow into.

That’s why it’s important to start with a majority. You can always expand a majority out, but once you are in the minority you will either immediately feel pushback or, even worse, you will simply be ignored. If you can find a tribe of people who are as passionate about your idea as you are, you can empower them to succeed and bring in others to join you as well.

There is, however, a danger to this approach. Consider a study that examined networks of the cast and crew of Broadway plays. The researchers found that if no one had ever worked together before, results tended to be poor. However, if the networks among the cast and crew became too dense—becoming a close-knit tribe—performance also suffered.

The problem is that tribes tend to be echo chambers that filter outside voices. Consensus becomes doctrine and, eventually, gospel. Dissension is not only discouraged, but often punished. Eventually, a private language emerges that encodes the gospel into linguistic convention and customs. The outside world loses internal tribal relevance.

The Pitfalls Of A Private Language

Every field of endeavor must navigate the two competing needs: specialization and relevance. For example, a doctor treating a complex disease must master the private, technical language of her field to confer with colleagues, but must also translate those same concepts to a public, common language to communicate with patients in ways they can understand.

Yet as the philosopher Ludwig Wittgenstein explained, these types of private languages can be problematic. He made the analogy of a beetle in a box. If everybody had something in a box that they called a beetle, but no one could examine each other’s box, there would be no way of knowing whether everybody was actually talking about the same thing or not.

What Wittgenstein pointed out was that in this situation, the term “beetle” would lose relevance and meaning. It would simply refer to something that everybody had in their box, whatever that was. Everybody could just nod their heads not knowing whether they were talking about an insect, a German automobile or a British rock band. The same also happens with professional jargon and lingo.

I see this problem all the time in my work helping organizations to bring change about. People leading, say, a digital transformation are, not surprisingly, enthusiastic about digital technology and speak to other enthusiasts in the private, technical language native to their tribe. Unfortunately, to everyone else, this language holds little meaning or relevance. For all practical purposes, it might as well be a “beetle in a box.”

Creating A Shared Identity Through Shared Values And Shared Purpose

The easiest way to attack change is to position it as fundamentally at odds with the prevailing culture. In an organizational environment, those who oppose change often speak of undermining business models or corporate “DNA.” In much the same way, social and political movements are often portrayed as “foreign” or “radical.”

That’s why successful change efforts create shared identity through shared values and shared purpose. In the struggle for women’s voting rights in America, groups of Silent Sentinels would picket the White House with slogans taken from President Woodrow Wilson’s own books. To win over nationalistic populations in rural areas, the Serbian revolutionary movement Otpor made the patriotic plea, “Resistance, Because I Love Serbia.”

We find the same strategy effective in our work with organizational transformations. Not everybody loves technology, for example, but everybody can see the value of serving customers better, operating more efficiently and creating a better workplace. If you can communicate the need for change in terms of shared values and purpose, it’ll be easier for others to accept.

Even more importantly, people need to see that change can work. That’s why we always recommend starting with a keystone change, which represents a clear and tangible objective, involves multiple stakeholders and paves the way for future change. For example, with digital transformations, we advise our clients to automate the most mundane tasks first, even if those aren’t necessarily the highest priority tasks for the project.

Would You Rather Make A Point Or Make A Difference?

One of the most difficult things about leading change is that you need to let people embrace it for their own reasons, which might not necessarily be your own. When you’re passionate about an idea, you want others to see it the same way you do, with all its beautiful complexity and nuance. You want people to share your devotion and fervor.

Many change efforts end up sabotaging themselves for exactly this reason. People who love technology want others to love it too. Those who feel strongly about racial and gender-based diversity want everyone to see injustice and inequality just as they do. Innovators in any area can often be single-minded in their pursuit of change.

The truth is that we all have a need to be recognized and when others don’t share a view that we feel strongly about, it offends our sense of dignity. The danger, of course, is that in our rapture we descend into solipsism and fail to recognize the dignity of others. We proudly speak in a private language amongst our tribe and expect others to try and find a way in.

Yet the world simply doesn’t work that way. If you care about change, you need to hold yourself accountable to be an effective messenger. You have to make the effort to express yourself in terms that your targets of influence are willing to accept. That doesn’t in any way mean you have to compromise. It simply means that you need to advocate effectively.

In the final analysis, you need to decide whether you’d rather make a point, or make a difference.

— Article courtesy of the Digital Tonto blog
— Image credits: Pixabay


We Are Starving Our Innovation Economy

GUEST POST from Greg Satell

The Cold War was fundamentally different from any conflict in history. It was, to be sure, less about land, blood and treasure than about ideas. Communist countries believed that their ideology would prevail. They were wrong. The Berlin Wall fell and capitalism, it seemed, was triumphant.

Today, however, capitalism is in real trouble. Besides the threat of a rising China, the system seems to be crumbling from within. Income inequality in developed countries is at 50-year highs. In the US, the bastion of capitalism, markets have weakened by almost every imaginable metric. This wasn’t what we imagined winning would look like.

Yet we can’t blame capitalism. The truth is that its earliest thinkers warned about the potential for excesses that lead to market failure. The fact is that we did this to ourselves. We believed that we could blindly leave our fates to market and technological forces. We were wrong. Prosperity doesn’t happen by itself. We need to invest in an innovation economy.

Capitalism’s (Seemingly) Fatal Contradiction

Anyone who’s taken an “Economics 101” course knows about Adam Smith and his invisible hand. Essentially, the forces of self-interest, by their very nature, work to identify the optimal price that attracts just enough supply of a particular good or service to satisfy demand. This magical equilibrium point creates prosperity through an optimal use of resources.

However, some argued that the story wasn’t necessarily a happy one. After all, equilibrium implies a lack of economic profit and certainly businesses would want to do better than that. They would seek to gain a competitive advantage and, in doing so, create surplus value, which would then be appropriated to accumulate power to rig the system further in their favor.

Indeed, Adam Smith himself was aware of this danger. “People of the same trade seldom meet together, even for merriment and diversion, but the conversation ends in a conspiracy against the public, or in some contrivance to raise prices,” he wrote. In fact, the preservation of free markets was a major concern that ran throughout his work.

Yet as the economist Joseph Schumpeter pointed out, with innovation the contradiction dissipates. As long as we have creative destruction, market equilibriums are constantly shifting and don’t require capitalists to employ extractive, anti-competitive practices in order to earn excellent profits.

Two Paths To Profit

Anyone who manages a business must pursue at least one of two paths to profit. The first is to innovate. By identifying and solving problems in a competitive marketplace, firms can find new ways to create, deliver and capture value. Everybody wins.

Google’s search engine improved our lives in countless ways. Amazon and Walmart have dramatically improved distribution of goods throughout the economy, making it possible for us to pay less and get more. Pfizer and Moderna invested in an unproven technology that uses mRNA to deliver life-saving molecules and saved us from a deadly pandemic.

Still, the truth is that the business reality is not, “innovate or die,” but rather “innovate or find ways to reduce competition.” There are some positive ways to tilt the playing field, such as building a strong brand or specializing in some niche market. However, other strategies are not so innocent. They seek to profit by imposing costs on the rest of us.

The first, called rent seeking, involves businesses increasing profits by getting legislation passed in their favor, as when car dealerships in New Jersey fought to block Tesla’s direct sales model. The second, regulatory capture, seeks to co-opt agencies that are supposed to govern industry, resulting in favorable implementation and enforcement of the legal code.

Why “Pro-Business” Often Means Anti-Market

Corporations lobby federal, state and local governments to advance their interests and there’s nothing wrong with that. Elected officials should be responsive to their constituents’ concerns. That is, after all, how democracy is supposed to work. However, very often business interests try to maintain that they are arguing for the public good rather than their own.

Consider the issue of a minimum wage. Businesses argue that government regulation of wages is an imposition on the free market and that, given the magical forces of the invisible hand, letting the market set the price for wages would produce optimal outcomes. Artificially increasing wages, on the other hand, would unduly raise prices on the public and reduce profits needed to invest in competitiveness.

This line of argument is nothing new, of course. In fact, Adam Smith addressed it in The Wealth of Nations nearly 250 years ago:

Our merchants and master-manufacturers complain much of the bad effects of high wages in raising the price, and thereby lessening the sale of their goods both at home and abroad. They say nothing concerning the bad effects of high profits. They are silent with regard to the pernicious effects of their own gains. They complain only of those of other people.

At the same time corporations have themselves been undermining the free market for wages through the abuse of non-compete agreements. Incredibly, 38% of American workers have signed some form of non-compete agreement. Of course, most of these are illegal and wouldn’t hold up in court, but serve to intimidate employees, especially low-wage workers.

That’s just for starters. Everywhere you look, free markets are under attack. Occupational licensing, often the result of lobbying by trade associations, has increased five-fold since the 1950s. Antitrust regulation has become virtually nonexistent, while competition has been reduced in the vast majority of American industries.

Perhaps not surprisingly, while all this lobbying has been going on, recent decades have seen business investment and innovation decline, and productivity growth falter while new business formation has fallen by 50%. Corporate profits, on the other hand, are at record highs.

Getting Back On Track

At the end of World War II, America made important investments to create the world’s greatest innovation economy. The GI Bill made what is perhaps the biggest investment ever in human capital, sending millions to college and creating a new middle class. Investments in institutions such as the National Science Foundation (NSF) and the National Institutes of Health (NIH) would create scientific capital that would fuel US industry.

Unfortunately, we abandoned that very successful playbook. Over the past 20 years, college tuition in the US has roughly doubled. Perhaps not surprisingly, we’ve fallen to ninth among OECD countries for post-secondary education. The ones who do graduate are often forced into essentially decades of indentured servitude in the form of student loans.

At the same time, government investment in research as a percentage of GDP has been declining for decades, limiting our ability to produce the kinds of breakthrough discoveries that lead to exciting new industries. What passes for innovation these days displaces workers, but does not lead to significant productivity gains. Legislation designed to rectify the situation and increase our competitiveness stalled in the Senate.

So after 250 years, capitalism remains pretty much as Adam Smith first conceived it, powerful yet fragile, always at risk of being undermined and corrupted by the same basic animal spirits that it depends on to set prices efficiently. He never wrote, nor is there any indication he ever intended, that markets should be left to their own devices. In fact, he and others warned us that markets need to be actively promoted and protected.

We are free to choose. We need to choose more wisely.

— Article courtesy of the Digital Tonto blog
— Image credits: Microsoft CoPilot


We Must Unlearn These Three Management Myths

GUEST POST from Greg Satell

Mark Twain is reported to have said, “It’s not what you don’t know that kills you, it’s what you know for sure that ain’t true.” Ignorance of facts is easily remedied. We can read books, watch documentaries or simply do a quick Google search. Yet our misapprehensions and biases endure, even in the face of contradicting facts.

The truth is that much of what we believe has less to do with how we weigh evidence than how we see ourselves. In fact, fMRI studies have shown that evidence which contradicts our firmly held beliefs violates our sense of identity. Instead of adapting our views, we double down and lash out at those who criticize them.

This can be problematic in our personal lives, but in business it can be fatal. There is a reason that even prominent CEOs can pursue failed strategies and sophisticated investors will back hucksters to the hilt. Yet as Adam Grant points out in Think Again, we can make the effort to reexamine and alter our beliefs. Here are three myths that we need to watch out for.

Myth #1: The “Global Village” Will Be A Nice Place

Marshall McLuhan, in Understanding Media, one of the most influential books of the 20th century, described media as “extensions of man” and predicted that electronic media would eventually lead to a global village. Communities would no longer be tied to a single, isolated physical space but connect and interact with others on a world stage.

To many, the rise of the Internet confirmed McLuhan’s prophecy and, with the fall of the Berlin Wall, digital entrepreneurs saw their work elevated to a sacred mission. In Facebook’s IPO filing, Mark Zuckerberg wrote, “Facebook was not originally created to be a company. It was built to accomplish a social mission — to make the world more open and connected.”

Yet, importantly, McLuhan did not see the global village as a peaceful place. In fact, he predicted it would lead to a new form of tribalism and result in a “release of human power and aggressive violence” greater than ever in human history, as long separated—and emotionally charged—cultural norms would now constantly intermingle, clash and explode.

For many, if not most, people on earth, the world is often a dark and dangerous place. When your world is not secure, “open” is less of an opportunity to connect than it is a vulnerability to exploit. Things can look fundamentally different from the vantage point of, say, a tech company in Menlo Park, California than they do from a dacha outside Moscow.

Context matters. Our most lethal failures are less often those of planning, logic or execution than those of imagination. Chances are, most of the world does not see things the way we do. We need to avoid strategic solipsism and constantly question our own assumptions.

Myth #2: Winning The “War For Talent” Will Make You More Competitive

In 1997, three McKinsey consultants launched an influential study titled The War for Talent, which argued that due to demographic shifts, recruiting the “best and the brightest” was even more important than “capital, strategy, or R&D.” The idea made a lot of sense. What could be more important for a company than its people?

Yet as Malcolm Gladwell explained in an article about Enron, strict adherence to the talent rule contributed to the firm’s downfall. Executives that were perceived to be talented moved up fast. So fast, in fact, that it became impossible to evaluate their performance. People began to worry more about impressing their boss and appearing to be clever than doing their jobs.

The culture became increasingly toxic and management continued to bet on the same failed platitude until the only way to move up in the organization was to undermine others. As we now know, it didn’t end well. Enron went bankrupt in 2001, just four years after The War for Talent highlighted it as a model for others to follow.

The simple truth is that talent isn’t what you win in a battle. It’s what you build by actualizing the potential of those in your organization and throughout your ecosystem, including partners, customers and the communities in which you operate. In the final analysis, Enron didn’t fail because it lost the war for talent, it failed because it was at war with itself.

Myth #3: We Can “Engineer” Management

In 1911, Frederick Winslow Taylor published The Principles of Scientific Management, based on his experience as a manager in a steel factory. It took aim at traditional management methods and suggested a more disciplined approach. Rather than have workers pursue tasks in their own manner, he sought to find “the one best way” and train accordingly.

Before long, Taylor’s ideas became gospel, spawning offshoots such as scientific marketing, financial engineering and the Six Sigma movement. It was no longer enough to simply work hard; you had to measure, analyze and optimize everything. Over the years these ideas became so central to business thinking that they were rarely questioned.

Yet they should have been. The truth is that this engineering mindset is a zombie idea, a remnant of the logical positivism that was discredited way back in the 1930s and more recent versions haven’t fared any better. To take just one example, a study found that of 58 large companies that announced Six Sigma programs, 91 percent trailed the S&P 500 in stock performance. Yet that didn’t stop the endless parade of false promises.

At the root of the problem is a simple fact: We don’t manage machines, we manage ecosystems and we need to think more about networks and less about nodes. Our success or failure depends less on individual entities than on the connections between them. We need to think less like engineers and more like gardeners.

Don’t Believe Everything You Think

At any given time, there are any number of clever people saying clever things. When you invoke a legendary icon like Marshall McLuhan and say “Global Village,” the concept acquires the glow of some historical, unalterable destiny. But that’s an illusion, just like the “War for Talent” and the idea of “engineering” your way out of managing a business and making wise choices.

Yet notice the trap. None of these things were put forward as mere opinions or perspectives. The McKinsey consultants who declared the “War for Talent” weren’t just expressing an opinion, but revealing the results of a “yearlong study…involving 77 companies and almost 6,000 managers and executives.” (And presumably, they sold the study right back to every one of those 77 companies).

The truth is that an idea can never be validated backward, only forward. No amount of analysis can shape reality. We need to continually test our ideas, reconsider them and adapt them to ever-changing conditions. The problem with concepts like Six Sigma isn’t necessarily in their design, but that they become elevated to something approaching the sublime.

That’s why we shouldn’t believe everything we think. There are simply too many ways to get things wrong, while getting them right is always a relatively narrow path. Or, as Richard Feynman put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool.”

— Article courtesy of the Digital Tonto blog
— Image credits: Pexels


Change Leaders Must Anticipate and Overcome Resistance

GUEST POST from Greg Satell

When Barry Libenson arrived at Experian as Global CIO in 2015, he devoted his first few months to speaking with customers. Everywhere he went he heard the same thing: they wanted access to real-time data. On the surface, it was a straightforward business transformation, but Libenson knew that it was far more complicated than that.

To switch from batch-processed credit reports to real-time access would require a technology transformation—from an on-premises to a cloud architecture—and in order to develop cloud applications effectively, he would have to initiate a skills-based transformation—from waterfall to agile development.

So what at first appeared to be a straightforward initiative was actually three separate transformations stacked on top of one another. To make things even more difficult, people had good reason to be hostile to each aspect. Still, by being strategic about overcoming resistance from the start, he achieved a full transformation in less than three years.

Understanding Cognitive Biases

One of the key concerns about Libenson’s program at Experian was that the company would lose control over its business model. The firm had prospered selling processed credit reports. Giving customers real-time access to data seemed to undercut a value proposition that had proven itself over decades, almost as if McDonald’s decided to stop selling hamburgers.

These were not casual criticisms. In fact, they reflected instinctual cognitive biases that are deeply rooted in our consciousness. The first, loss aversion, reflects our tendency to avoid losses rather than seek out new gains. The second, called the availability heuristic, reflects our preference for information that is easy to access and internalize, such as the decades of profits generated by credit reports rather than the vague promise of a new cloud-driven business model.

A similar dynamic plays out between the Black Lives Matter movement and police unions. One could argue, with significant evidence, that the smart play for police unions would be to come to some accommodation with protesters’ concerns to avoid more draconian policies later on. Yet after meticulously building their power base for decades, they have shown little willingness to make concessions.

Libenson and his team were able to navigate these challenges with two key strategies. First, he started with internal APIs, rather than fully open applications, as a keystone change. That helped bridge the gap between the initial and desired future state. Second, the program was opt-in at first. Those program managers who were excited about creating cloud-based products got significant support. Those who weren’t were left alone.

Navigating Asymmetrical Impacts

Another obstacle to overcome was the fact that some people were more affected than others. In the case of Experian’s skills-based transformation from waterfall to agile development, which was essential to making the business and technology transformations possible, the change hit more senior personnel harder than junior ones.

Many of the project managers at the company had been doing their jobs for years—even decades—and took great pride in their work. Now they were being told they needed to do their jobs very differently. For a junior employee with limited experience, that can be exciting. For those more invested in traditional methods, the transition can be more difficult.

Here again, the opt-in strategy helped navigate some thorny issues. Because no one was being forced to switch to agile development, it was hard for anyone to muster much resistance. At the same time, Libenson established an “API Center of Excellence” to empower those who were enthusiastic about creating cloud-based products.

As the movement to the cloud gained steam and began to generate real business results, the ability to build cloud-based projects became a performance issue. Managers that lagged began to feel subtle pressure to get with the program and to achieve what their colleagues had been able to deliver.

Overcoming Switching Costs

Experian facilitates billions of transactions a month. At that scale, you can’t just turn the ship on a dime. Another factor that increased the risk was the very nature of the credit business itself, which makes cybersecurity a major concern. In fact, one of Experian’s direct competitors, Equifax, had one of the biggest data breaches of the decade.

Every change encounters switching costs and that can slow the pace of change. In one particularly glaring example, the main library at Princeton University took 120 years to switch to the Library of Congress classification system because of the time and expense involved. Clearly, that’s an extreme case, but every change effort needs to take inevitable frictions into account.

That’s why Libenson didn’t push for speed initially, but started small, allowing the cloud strategy to slowly prove itself over time. As win piled upon win, the process accelerated and the transformation became more ingrained in the organization. Within just a few years, those who opposed the move to the cloud were in the distinct minority.

As General Stanley McChrystal explained in Team of Teams, he experienced a similar dynamic revamping Special Operations in Iraq. By shifting his force’s focus from individual team goals to effective collaboration between teams, he may have slowed down individual units. However, as a collective, his forces increased their efficiency by a factor of seventeen, measured by the number of raids they were able to execute.

In every transformation, there is an inherent efficiency paradox. In order to produce change for the long-term, you almost always lose a little bit of efficiency in the short-term. That’s why it’s important to start small and build momentum as you go.

Leveraging Resistance To Forge A New Vision

Any change, if it is important and potentially impactful, is going to encounter fierce resistance. As Saul Alinsky noted, every revolution inspires its own counter-revolution. That’s why three quarters of organizational transformations fail: managers too often see them as a communication exercise, rather than a strategic effort to empower those who are already enthusiastic about change to influence everyone else.

In the case of Experian’s move to the cloud, the objections were not unfounded. Offering customers real-time access to data did have the potential to upend the traditional credit report business model. Switching to a new technology architecture does raise cybersecurity concerns. Many senior project managers really had served the company well for decades with traditional development methods.

As Global CIO, Libenson could have ignored these concerns. He could have held a “townhall” and launched a major communication effort to convince the skeptics. Yet he did neither of these things. Instead, he treated the resistance not as an obstacle, but as a design constraint. He identified people who were already enthusiastic about the shift and empowered them to make it work. Their success built momentum and paved the way for what became a major transformation.

In fact, Experian’s cloud architecture unlocked enormous value for the firm and its customers. The company’s API hub made good on Libenson’s initial promise of supporting real-time access to data and today processes over 100 million transactions a month. It has also enabled a completely new business, called Ascend, now one of the company’s most successful products.

The truth is that bringing about fundamental, transformational change takes more than clever slogans and happy talk. The status quo always has inertia on its side and never yields its power gracefully. You need to be clear-eyed and hard-nosed. You need to understand that for every significant change, there will be some who seek to undermine it in ways that are dishonest, underhanded and deceptive.

The difference between successful revolutionaries and mere dreamers is that those who succeed anticipate resistance and build a plan to overcome it.

— Article courtesy of the Digital Tonto blog
— Image credits: Pexels


Innovation is Combination

Silicon Valley’s Innovator’s Dilemma – The Atom, the Bit and the Gene

GUEST POST from Greg Satell

Over the past several decades, innovation has become largely synonymous with digital technology. When the topic of innovation comes up, somebody points to a company like Apple, Google or Meta rather than, say, a car company, a hotel or a restaurant. Management gurus wax poetic about the “Silicon Valley way.”

Of course, that doesn’t mean that other industries haven’t been innovative. In fact, there is no shortage of excellent examples of innovation in cars, hotels, restaurants and many other things. Still, the fact remains that for most of recent memory digital technology has moved further and faster than anything else.

This has been largely due to Moore’s Law, our ability to consistently double the number of transistors we’re able to cram onto a silicon wafer. Now, however, Moore’s Law is ending and we’re entering a new era of innovation. Our future will not be written in ones and zeros, but will be determined by our ability to use information to shape the physical world.

The Atom

The concept of the atom has been around at least since the time of the ancient Greek philosopher Democritus. Yet it didn’t take on any real significance until the early 20th century. In fact, the paper Albert Einstein used for his dissertation helped to establish the existence of atoms through a statistical analysis of Brownian motion.

Yet it was the other papers from Einstein’s miracle year of 1905 that transformed the atom from an abstract concept to a transformative force, maybe even the most transformative force in the 20th century. His theory of mass-energy equivalence would usher in the atomic age, while his work on black-body radiation would give rise to quantum mechanics and ideas so radical that even he would refuse to accept them.

Ironically, despite Einstein’s reluctance, quantum theory would lead to the development of the transistor and the rise of computers. These, in turn, would usher in the digital economy, which provided an alternative to the physical economy of goods and services based on things made from atoms and molecules.

Still, the vast majority of what we buy is made up of what we live in, ride in, eat and wear. In fact, information and communication technologies only make up about 6% of GDP in advanced countries, which is what makes the recent revolution in materials science so exciting. We’re beginning to exponentially improve the efficiency of how we design the materials that make up everything from solar panels to building materials.

The Bit

While the concept of the atom evolved slowly over millennia, the bit is one of the rare instances in which an idea seems to have arisen in the mind of a single person with little or no real precursor. Introduced by Claude Shannon in a paper in 1948—incidentally, the same year the transistor was invented—the bit has shaped how we see and interact with the world ever since.

The basic idea was that information isn’t a function of content, but the absence of ambiguity, which can be broken down to a single unit – a choice between two alternatives. Much like a coin toss, which lacks information while in the air but takes on certainty when it lands, information arises when ambiguity disappears.

He called this unit a “binary digit,” or “bit,” and much like the pound, quart, meter or liter, it has become such a basic unit of measurement that it’s hard to imagine our modern world without it. Shannon’s work would soon combine with Alan Turing’s concept of a universal computer to create the digital computer.
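To make the coin-toss analogy concrete, here is a minimal worked example using Shannon’s entropy formula, assuming a fair coin with two equally likely outcomes (the formula is Shannon’s; the worked numbers are added here purely for illustration):

```latex
% Shannon entropy: the information carried by a choice among alternatives,
% measured in bits. For a fair coin, each outcome has probability 1/2:
H = -\sum_{i} p_i \log_2 p_i
  = -\left( \tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2} \right)
  = 1 \text{ bit}
```

A loaded coin that comes up heads 99% of the time carries far less information (about 0.08 bits), which captures Shannon’s point: information is the resolution of ambiguity, not the content itself.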

Now the digital revolution is ending and we will soon be entering a heterogeneous computing environment that will include things like quantum, neuromorphic and biological computing. Still, Claude Shannon’s simple idea will remain central to how we understand how information interacts with the world it describes.

The Gene

The concept of the gene originated with an obscure Austrian monk named Gregor Mendel, but in one of those strange peculiarities of history, his work went almost totally unnoticed until the turn of the century. Even then, no one really knew what genes were or how they functioned. The term was, for the most part, just an abstract concept.

That changed abruptly when James Watson and Francis Crick published their article in the scientific journal Nature. In a single stroke, the pair were able to show that genes were, in fact, made up of a molecule called DNA and that they operated through a surprisingly simple code made up of A, T, C and G.

Things really began to kick into high gear when the Human Genome Project was completed in 2003. Since then the cost to sequence a genome has been falling faster than the rate of Moore’s Law, which has unleashed a flurry of innovation. Jennifer Doudna’s discovery of CRISPR in 2012 revolutionized our ability to edit genes. More recently, mRNA technology has helped develop COVID-19 vaccines in record time.

Today, we have entered a new era of synthetic biology in which we can manipulate the genetic code of A, T, C and G almost as easily as we can the bits in the machines that Turing imagined all those years ago. Researchers are also exploring how we can use genes to create advanced materials and maybe even better computers.

Innovation Is Combination

The similarity of the atom, the bit and the gene as elemental concepts is hard to miss and they’ve allowed us to understand our universe in a visceral, substantial way. Still, they arose in vastly different domains and have been largely applied to separate and distinct fields. In the future, however, we can expect vastly greater convergence between the three.

We’ve already seen glimpses of this. For example, as a graduate student Charlie Bennett was a teaching assistant for James Watson. Yet in between his sessions instructing undergraduates in Watson’s work on genes, he took an elective course on the theory of computing in which he learned about the work of Shannon and Turing. That led him to go work for IBM and become a pioneer in quantum computing.

In much the same way, scientists are applying powerful computers to develop new materials and design genetic sequences. Some of these new materials will be used to create more powerful computers. In the future, we can expect the concepts of the atom, the bit and the gene to combine and recombine in exciting ways that we can only begin to imagine today.

The truth is that innovation is combination and always has been. The past few decades, in which one technology so thoroughly dominated that it was able to function largely in isolation from other fields, were an anomaly. What we are beginning to see now is, in large part, a reversion to the mean, where the most exciting work will be interdisciplinary.

This is Silicon Valley’s innovator’s dilemma. Nerdy young geeks will no longer be able to prosper coding blithely away in blissful isolation. It is no longer sufficient to work in bits alone. Increasingly we need to combine those bits with atoms and genes to create significant value. If you want to get a glimpse of the future, that’s where to look.

— Article courtesy of the Digital Tonto blog
— Image credits: Pixabay


Triggering Radical Transformational Change

GUEST POST from Greg Satell

There’s an old adage that says we should never let a crisis go to waste. The point is that during a crisis there is a visceral sense of urgency and resistance often falls by the wayside. We certainly saw that during the COVID-19 pandemic. Digital technologies such as video conferencing, online grocery and tele-health have gone from fringe to mainstream in record time.

Seasoned leaders learn how to make good use of a crisis. Consider Bill Gates and his ‘Internet Tidal Wave‘ memo, which leveraged what could have been a mortal threat to Microsoft into a springboard to even greater dominance. Or how Steve Jobs used Apple’s near-death experience to reshape the ailing company into a powerhouse.

But what if we could prepare for a trigger before it happens? The truth is that indications of trouble are often clear long before the crisis arrives. Clearly, there were a number of warning signs that a pandemic was possible, if not likely. As every good leader knows, there’s never a shortage of looming threats. If we learn to plan ahead, we can make a crisis work for us.

The Plan Hatched in a Belgrade Cafe

In the fall of 1998, five young activists met in a coffee shop in Belgrade, Serbia. Although still in their twenties, they were already grizzled veterans. In 1992, they took part in student protests against the war in Bosnia. In 1996, they helped organize a series of rallies in response to Slobodan Milošević’s attempt to steal local elections.

To date, their results were decidedly mixed. The student protests were fun, but when the semester ended, everyone went home for the summer and that was the end of that. The 1996 protests were more successful, overturning the fraudulent results, but the opposition coalition, called “Zajedno,” soon devolved into infighting.

So they met in the coffee shop to discuss their options for the upcoming presidential election to be held in 2000. They knew from experience that they could organize rallies effectively and get people to the polls. They also knew that when they got people to the polls and won, Milošević would use his power and position to steal the election.

That would be their trigger.

The next day, six friends joined them and they called their new organization Otpor. Things began slowly, with mostly street theatre and pranks, but within 2 years their ranks had swelled to more than 70,000. When Milošević tried to steal the election they were ready and what is now known as the Bulldozer Revolution erupted.

The Serbian strongman was forced to concede. The next year, Milošević would be arrested and sent to The Hague for his crimes against humanity. He would die in his prison cell in 2006, awaiting trial.

Opportunity From the Ashes

In 2014, in the wake of the Euromaidan protests that swept the thoroughly corrupt autocrat Viktor Yanukovych from power, Ukraine was in shambles. Having been looted of roughly $100 billion (roughly the amount of the country’s entire GDP) and invaded by Russia, things looked bleak. Without western aid, the proud nation’s very survival was in doubt.

Yet for Vitaliy Shabunin and the Anti-Corruption Action Center, it was a moment he had been waiting for. He had established the organization with his friend Dasha Kaleniuk a few years earlier. Since then they, along with a small staff, had been working with international NGOs to document corruption and develop effective legislation to fight it.

With Ukraine’s history of endemic graft, which had greatly worsened under Yanukovych, progress had been negligible. Yet now, with the IMF and other international institutions demanding reform, Shabunin and Kaleniuk were instantly in demand to advise the government on instituting a comprehensive anti-corruption program, which passed in record time.

Yet they didn’t stop there either. “Our long-term strategy is to create a situation in which it will be impossible not to do anti-corruption reforms,” Shabunin would later tell me. “We are working to ensure that these reforms will be done, either by these politicians or by another, because they will lose their office if they don’t do these reforms.”

Vitaliy, Dasha and the Anti-Corruption Action Center continue to prepare for future triggers.

The Genius of Xerox PARC

One story that Silicon Valley folks love to tell involves Steve Jobs and Xerox. After the copier giant made an investment in Apple, which was then a fledgling company, it gave Jobs access to its Palo Alto Research Center (PARC). He then used the technology he saw there to create the Macintosh. Jobs built an empire based on Xerox’s oversight.

Yet the story misses the point. By the late 60s, Xerox CEO Peter McColough knew that the copier business, while still incredibly profitable, was bound to be disrupted eventually. At the same time it was becoming clear that computer technology was advancing quickly and, someday, would revolutionize how we worked. PARC was created to prepare for that trigger.

The number of groundbreaking technologies created at PARC is astounding. The graphical user interface, networked computing, object-oriented programming, the list goes on. Virtually everything that we came to know as “personal computing” had its roots in the work done at PARC in the 1970s.

Most of all, PARC saved Xerox. The laser printer invented there would bring in billions and, eventually, largely replace the copier business. Some technologies were spun off into new companies, such as Adobe and 3Com, with an equity stake going to Xerox. And, of course, the company even made a tidy profit off the Macintosh, because of the equity stake that gave Jobs access to the technology in the first place.

Transforming an Obstacle Into a Design Constraint

The hardest thing about change is that, typically, most people don’t want it. If they did, it would already have been accepted as the normal state of affairs. That can make transformation a lonely business. The status quo has inertia on its side and never yields its power gracefully. The path for an aspiring changemaker can be heartbreaking and soul crushing.

Many would see the near-certainty that Milosevic would try to steal the election as an excuse to do nothing. Most people would look at the almost impossibly corrupt Yanukovych regime and see the idea of devoting your life to anti-corruption reforms as quixotic folly. It is extremely rare for a CEO whose firm dominates an industry to ask, “What comes after?”

Yet anything can happen and often does. Circumstances conspire. Events converge. Round-hole businesses meet their square-peg world. We can’t predict exactly when or where or how or what will happen, but we know that everybody and everything gets disrupted eventually. It’s all just a matter of time.

When that happens resistance to change temporarily abates. So there’s lots to do and no time to waste. We need to empower our allies, as well as listen to our adversaries. We need to build out a network to connect to others who are sympathetic to our cause. Transformational change is always driven by small groups, loosely connected, but united by a common purpose.

Most of all, we need to prepare. A trigger always comes and, when it does, it brings great opportunity with it.

— Article courtesy of the Digital Tonto blog
— Image credits: Unsplash


We Need to Solve the Productivity Crisis

GUEST POST from Greg Satell

When politicians and pundits talk about the economy, they usually do so in terms of numbers. Unemployment is too high or GDP is too low. Inflation should be at this level or at that. You get the feeling that somebody somewhere is turning knobs and flicking levers in order to get the machine humming at just the right speed.

Yet the economy is really about our well being. It is, at its core, our capacity to produce goods and services that we want and need, such as the food that sustains us, the homes that shelter us and the medicines that cure us, not to mention all of the little niceties and guilty pleasures that we love to enjoy.

Our capacity to generate these things is determined by our productive capacity. Despite all the hype about digital technology creating a “new economy,” productivity growth for the past 50 years has been persistently sluggish. If we are going to revive it and improve our lives we need to renew our commitment to scientific capital, human capital and free markets.

Restoring Scientific Capital

In 1945, Vannevar Bush delivered a report, Science, the Endless Frontier, which argued that the US government needed to invest in “scientific capital” through basic research and scientific education. It would set in motion a number of programs that would set the stage for America’s technological dominance during the second half of the century.

Bush’s report led to the development of America’s scientific infrastructure, including agencies such as the National Science Foundation (NSF), National Institutes of Health (NIH) and DARPA. Others, such as the National Labs and science programs at the Department of Agriculture, also contribute significantly to our scientific capital.

The results speak for themselves and returns on public research investment have been shown to surpass those in private industry. To take just one example, it has been estimated that the $3.8 billion invested in the Human Genome Project resulted in nearly $800 billion in economic impact and created over 300,000 jobs in just the first decade.

Unfortunately, we forgot those lessons. Government investment in research as a percentage of GDP has been declining for decades, limiting our ability to produce the kinds of breakthrough discoveries that lead to exciting new industries. What passes for innovation these days displaces workers, but does not lead to significant productivity gains.

So the first step to solving the productivity puzzle would be to renew our commitment to investing in the type of scientific knowledge that, as Bush put it, can “turn the wheels of private and public enterprise.” There was a bill before Congress to do exactly that, but unfortunately it got bogged down in the Senate due to infighting.

Investing In Human Capital

Innovation, at its core, is something that people do, which is why education was every bit as important to Bush’s vision as investment was. “If ability, and not the circumstance of family fortune, is made to determine who shall receive higher education in science, then we shall be assured of constantly improving quality at every level of scientific activity,” he wrote.

Programs like the GI Bill delivered on that promise. We made what is perhaps the biggest investment ever in human capital, sending millions to college and creating a new middle class. American universities, considered far behind their European counterparts earlier in the century, especially in the sciences, came to be seen as the best in the world by far.

Today, however, things have gone horribly wrong. A recent study found that about half of all college students struggle with food insecurity, which is probably why only 60% of students at 4-year institutions, and even fewer at community colleges, ever earn a degree. The ones that do graduate are saddled with decades of debt.

So the bright young people we don’t starve, we condemn to decades of what is essentially indentured servitude. That’s no way to run an entrepreneurial economy. In fact, a study done by the Federal Reserve Bank of Philadelphia found that student debt has a measurable negative impact on new business creation.

Recommitting Ourselves To Free and Competitive Markets

There is no principle more basic to capitalism than that of free markets, which provide the “invisible hand” to efficiently allocate resources. When market signals get corrupted, we get less of what we need and more of what we don’t. Without vigorous competition, firms feel less of a need to invest and innovate, and become less productive.

There is abundant evidence that is exactly what has happened. Since the late 1970s antitrust enforcement has become lax, ushering in a new gilded age. While digital technology was hyped as a democratizing force, over 75% of industries have seen a rise in concentration levels since the late 1990s, which has led to a decline in business dynamism.

The problem isn’t just monopoly power dominating consumers, either, but also monopsony, the domination of suppliers by buyers, especially in labor markets. There is increasing evidence of collusion among employers designed to keep wages low, along with an astonishing abuse of non-compete agreements, which have affected more than a third of the workforce.

In a sense, this is nothing new. Adam Smith himself observed in The Wealth of Nations that “Our merchants and master-manufacturers complain much of the bad effects of high wages in raising the price, and thereby lessening the sale of their goods both at home and abroad. They say nothing concerning the bad effects of high profits. They are silent with regard to the pernicious effects of their own gains. They complain only of those of other people.”

Getting Back On Track

In the final analysis, solving the productivity puzzle shouldn’t be that complicated. It seems that everything we need to do we’ve done before. We built a scientific architecture that remains unparalleled even today. We led the world in educating our people. American markets were the most competitive on the planet.

Yet somewhere we lost our way. Beginning in the early 1970s, we started reducing our investment in scientific research and public education. In the early 1980s, the Chicago school of competition law started to gain traction and antitrust enforcement began to wane. Since 2000, competitive markets in the United States have been in serious decline.

None of this was inevitable. We made choices and those choices had consequences. We can make other ones. We can choose to invest in discovering new knowledge, to educate our children without impoverishing them, to demand that our industries compete and to hold our institutions to account. We’ve done these things before and can do so again.

All that’s left is the will and the understanding that the economy doesn’t exist in the financial press, on the floor of the stock markets or in the boardrooms of large corporations, but in our own welfare as well as in our ability to actualize our potential and realize our dreams. Our economy should be there to serve our needs, not the other way around.

— Article courtesy of the Digital Tonto blog
— Image credits: Unsplash


Disinformation Economics

GUEST POST from Greg Satell

Marshall McLuhan, one of the most influential thinkers of the 20th century, described media as “extensions of man” and predicted that electronic media would eventually lead to a global village. Communities, he believed, would no longer be tied to a single, isolated physical space but connect and interact with others on a world stage.

What often goes untold is that McLuhan did not see the global village as a peaceful place. In fact, he predicted it would lead to a new form of tribalism and result in a “release of human power and aggressive violence” greater than ever in human history, as long separated—and emotionally charged—cultural norms would now constantly intermingle, clash and explode.

Today, the world looks a whole lot like the dystopia McLuhan described. Fringe groups, nation states and profit-seeking corporations have essentially weaponized information and we are all caught in the crossfire. While the situation is increasingly dire it is by no means hopeless. What we need isn’t more fact checking, but to renew institutions and rebuild trust.

How Tribes Emerge

We tend to think of the world we live in as the result of some grand scheme. In the Middle Ages, the cosmological argument posited the existence of an “unmoved mover” that set events in motion. James Bond movies always feature an evil genius. No conspiracy theory would be complete without an international cabal pulling the strings.

Yet small decisions, spread out over enough people, can create the illusion of a deliberate order. In his classic Micromotives and Macrobehavior, economist Thomas Schelling showed how even small and seemingly innocuous choices, when combined with those of others, can lead to outcomes no one intended or preferred.

Consider the decision to live in a particular neighborhood. Imagine a young couple who prefers to live in a mixed-race neighborhood but doesn’t want to be outnumbered. Schelling showed, mathematically, how if everybody shares those same inclinations, the scenario results in extreme segregation, even though that is exactly the opposite of what anyone intended.

This segregation model is an example of a Nash equilibrium, in which individual decisions eventually settle into a stable group dynamic. No one in the system has an incentive to change his or her decision. Yet just because an equilibrium is stable doesn’t mean it’s optimal or even preferable. In fact, some Nash equilibria, such as the famous prisoner’s dilemma and the tragedy of the commons, make everyone worse off.
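Schelling’s result is easy to reproduce. Below is a minimal sketch of the dynamic on a one-dimensional ring of cells, written for illustration rather than taken from Schelling’s original formulation: agents of two types relocate to a random empty cell whenever fewer than half of their nearby neighbors share their type, and even that mild preference typically produces sharply segregated runs.

```python
import random

# Illustrative sketch of a Schelling-style segregation dynamic (assumptions:
# two agent types "A"/"B" on a 1-D ring with some empty cells, a tolerance
# threshold of 0.5, and a neighborhood radius of 2 cells on each side).

def same_type_share(cells, i, radius=2):
    """Fraction of occupied neighbors within `radius` that match cell i."""
    n = len(cells)
    nearby = [cells[(i + d) % n] for d in range(-radius, radius + 1) if d != 0]
    occupied = [c for c in nearby if c is not None]
    if not occupied:
        return 1.0  # no neighbors at all, so nothing to be unhappy about
    return sum(c == cells[i] for c in occupied) / len(occupied)

def step(cells, threshold=0.5):
    """Move one randomly chosen unhappy agent to a random empty cell."""
    unhappy = [i for i, c in enumerate(cells)
               if c is not None and same_type_share(cells, i) < threshold]
    empty = [i for i, c in enumerate(cells) if c is None]
    if not unhappy or not empty:
        return False  # everyone is content, or there is nowhere left to move
    src, dst = random.choice(unhappy), random.choice(empty)
    cells[dst], cells[src] = cells[src], None
    return True

def run(n=60, fill=0.9, threshold=0.5, max_steps=5000, seed=42):
    random.seed(seed)
    cells = [random.choice("AB") if random.random() < fill else None
             for _ in range(n)]
    print("before:", "".join(c or "." for c in cells))
    for _ in range(max_steps):
        if not step(cells, threshold):
            break
    print("after: ", "".join(c or "." for c in cells))

if __name__ == "__main__":
    run()
```

Running it with the defaults prints the line of agents before and after the moves settle down; the threshold and radius are assumptions you can vary to see how quickly segregation emerges even from a mild preference not to be outnumbered.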

That, in essence, is what appears to have happened in today’s media environment with respect to disinformation.

The Power Of Local Majorities

A big part of our everyday experience is seen through the prism of the people who surround us. Our social circles have a major influence on what we perceive and how we think. In fact, a series of famous experiments done at Swarthmore College in the 1950s showed that we will conform to the opinions of those around us even when they are obviously wrong.

It isn’t particularly surprising that those closest to us influence our thinking, but more recent research has found that the effect extends to three degrees of social distance. So it is not only those we know well, but even the friends of our friends’ friends, who have a deep and pervasive effect on how we think and behave.

This effect is then multiplied by our tendency to be tribal, even when the source of division is arbitrary. For example, in a study where young children were randomly assigned to a red or a blue group, they preferred pictures of other kids wearing t-shirts that matched their own group’s color. In another study, adults randomly assigned to be “leopards” or “tigers” showed hostility toward out-group members in fMRI scans, regardless of their race.

The simple truth is that majorities don’t just rule, they also influence, especially local majorities. Combine that with the mathematical and psychological forces that lead us to separate ourselves from each other and we end up living in a series of social islands rather than the large, integrated society we often like to imagine.

Filter Bubbles And Echo Chambers

Clearly, the way we tend to self-sort ourselves into homophilic, homogeneous groups will shape how we perceive what we see and hear, but it will also affect how we access information. Recently, a team of researchers at MIT looked into how we share information—and misinformation—with those around us. What they found was troubling.

When we’re surrounded by people who think like us, we share information more freely because we don’t expect to be rebuked. We’re also less likely to check our facts, because we know that those we are sharing the item with will be less likely to inspect it themselves. So when we’re in a filter bubble, we not only share more, we’re also more likely to share things that are not true. Greater polarization leads to greater misinformation.

Let’s combine this insight with the profit incentives of social media companies. Obviously, they want their platforms to be more engaging than their competition. So naturally, they want people to share as much as possible and the best way to do that is to separate people into groups that think alike, which will increase the amount of disinformation produced.
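A toy calculation can make that logic explicit. The probabilities below are illustrative assumptions, not figures from the MIT research: as a person’s circle grows more like-minded, the chance of sharing goes up and the chance of fact-checking goes down, so the fraction of false items passed along rises with homophily.

```python
import random

# Toy model of sharing in a filter bubble (the probabilities are
# illustrative assumptions, not empirical estimates): more homophily
# means sharing more and fact-checking less, so more false items spread.

def false_share_rate(homophily, items=10_000, false_rate=0.3, seed=0):
    rng = random.Random(seed)
    false_shared = 0
    for _ in range(items):
        is_false = rng.random() < false_rate
        p_share = 0.2 + 0.6 * homophily       # share more freely in a bubble
        p_check = 0.5 * (1 - homophily)       # fact-check less in a bubble
        if rng.random() < p_share and is_false and rng.random() > p_check:
            false_shared += 1
    return false_shared / items

for h in (0.1, 0.5, 0.9):
    print(f"homophily={h:.1f} -> false items shared per item seen: {false_share_rate(h):.3f}")
```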

Notice that none of this requires any malicious intent. The people in Schelling’s segregation model actually wanted to live in an integrated neighborhood. In much the same way, the subjects in the fMRI studies showed hostility to members of other groups regardless of race. Social media companies don’t necessarily want to promote untruths; they merely need to tune their algorithms for maximum engagement and the same effect is produced.

Nevertheless, we have blundered into a situation in which we increasingly see—and believe—things that aren’t true. We have created a global village at war with itself.

Rebuilding Trust

At its core, the solution to the problem of disinformation has less to do with information than it does with trust. Living in a connected world demands that we transcend our own context and invite in the perspectives and experiences of others. That is what McLuhan meant when he argued that electronic media would create a global village.

Inevitably, we don’t like much of what we see. When we are confronted with the strange and unusual we must decide whether to assimilate and adopt the views of others, or to assert the primacy of our own. The desire for recognition can result in clashes and confrontation, which lead us to seek out those who look, think and act in ways that reinforce our sense of self. We build echo chambers that deny external reality to satisfy these tribal instincts.

Yet as Francis Fukuyama pointed out in Identity, there is another option. We can seek to create a larger sense of self by building communities rooted in shared values. When viewed through the prism of a common undertaking rather than a tribe, diverse perspectives can be integrated and contribute to a common cause.

What’s missing in our public discourse today isn’t more or better information. We already have far more access to knowledge than at any time in human history. What we lack is a shared sense of mission and purpose. We need a shared endeavor to which we can contribute the best of our energies and for which we can welcome the contributions of others.

Without shared purpose, we are left only with identity, solipsism and the myth-making we require to make ourselves feel worthwhile.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Unsplash


Building Competence Often More Important Than a Vision

Building Competence Often More Important Than a Vision

GUEST POST from Greg Satell

In 1993, when asked about his vision for the failing company he was chosen to lead, Lou Gerstner famously said, “The last thing IBM needs right now is a vision.” What he meant was that if IBM couldn’t figure out how to improve operations to the point where it could start making money again, no vision would matter.

Plenty of people have visions. Elizabeth Holmes had one for Theranos, but its product was a fraud and the company failed. Many still believe in Uber’s vision of “gig economy” taxis, but even after more than 10 years and $25 billion invested, it still loses billions. WeWork’s proven business model became a failure when warped by a vision.

The truth is that anyone can have a vision. Look at any successful organization, distill its approach down to a vision statement and you will easily be able to find an equal or greater success that does things very differently. There is no silver bullet. Successful leaders are not the ones with the most compelling vision, but those who build the skills to make it a reality.

Gandhi’s “Himalayan Miscalculation”

When Mahatma Gandhi returned to India in 1915, after more than two decades spent fighting for Indian rights in South Africa, he had a vision for the future of his country. His view, which he laid out in his book Hind Swaraj, was that the British were only able to rule because of Indian cooperation. If that cooperation were withheld, the British Raj would fall.

In 1919, when the British passed the repressive Rowlatt Acts, which gave the police the power to arrest anyone for any reason whatsoever, he saw an opportunity to make his vision a reality. He called for a nationwide campaign of civil disobedience, called a hartal, in which Indians would refuse to work or do business.

At first, it was a huge success and the country came to a standstill. But soon things spun wildly out of control and eventually led to the massacre at Amritsar, in which British soldiers left hundreds dead and more than a thousand wounded. He would later call the series of events his Himalayan Miscalculation and vowed never to repeat his mistake.

What Gandhi realized was that his vision was worthless without people trained in his Satyagraha philosophy and capable of implementing his methods. He began focusing his efforts on indoctrinating his followers and, a decade later, set out on the Salt March with only about 70 of his most disciplined disciples.

This time, he triumphed in what is remembered as his greatest victory. In the end, it wasn’t Gandhi’s vision, but what he learned along the way that made him a historic icon.

The Real Magic Behind Amazon’s 6-Page Memo

We tend to fetishize the habits of successful people. We probe for anomalies and, when we find something out of the ordinary, we not only praise it for its originality but consider it to be the source of success. There is no better example of this delusion than Jeff Bezos’s insistence on using six-page memos rather than PowerPoint in meetings at Amazon.

There are two parts to this myth. First is the aversion to PowerPoint, which most corporate professionals use, but few use well. Second is the novelty of a memo, written in a particular format, serving as the basis for a meeting. Put them together and you have a unique ritual which, given Amazon’s incredible success, has taken on legendary status.

But delve a little deeper and you find it’s not the memos themselves, but Amazon’s writing culture that makes the difference. When you look at the company, which thrives in such a variety of industries, there are a dizzying array of skills that need to be integrated to make it work smoothly. That doesn’t just happen by itself.

What Jeff Bezos has done is put an emphasis on communication skills in general, and writing in particular. Amazon executives learn from the time they are hired that the best way to get ahead in the company is to write with clarity and power. They hone that skill over the course of their careers and, if they are to succeed, must excel at it.

Anyone can ban PowerPoint and mandate memos. Building top-notch communication skills across a massive enterprise, on the other hand, is not so easy.

The Real Genius Of Elon Musk

In 2007, an ambitious entrepreneur launched a new company with a compelling vision. Determined to drive the shift from fossil fuels to renewables, he would create an enterprise to bring electric cars to the masses. A master salesman, he was able to raise hundreds of millions of dollars and win the endorsement of celebrities and famous politicians.

Yet the entrepreneur wasn’t Elon Musk and the company wasn’t Tesla. The young man’s name was Shai Agassi and his company, Better Place, failed miserably within a few years. Despite all of the glitz and glamour he was able to generate, the basic fact was that Agassi knew nothing about building cars or the economics of lithium-ion batteries.

Musk, on the other hand, did the opposite. He did not attempt to build a car for the masses, but rather for Silicon Valley millionaires who wouldn’t need to rely on a Tesla to bring the kids to soccer practice, but could use it to zoom around and show off to their friends. That gave Musk the opportunity to learn how to manufacture cars efficiently and effectively. In other words, to build competency.

When we have a big vision, we tend to want to search out the largest addressable market. Unfortunately, that is where you’ll find stiff competition and customers who are already fairly well-served. That’s why it’s almost always better to identify a hair-on-fire use case—something that a small subset of customers want or need so badly they almost literally have their hair on fire—and scale up from there.

As Steve Blank likes to put it, “no business plan survives first contact with a customer.” Every vision is wrong. Some are off by a little and some are off by a lot. But they’re all wrong in some way. The key to executing on a vision is by identifying vulnerabilities early on and then building the competencies to overcome them.

Why So Many Visions Become Delusions

When you look at the truly colossal business failures of the last 20 years, from Enron and LTCM at the beginning of the century to the “unicorns” of today, a common theme is the inability to make basic distinctions between visions and delusions. Delusions, like myths, always contain some kernel of truth, but dissipate when confronted with real-world problems.

Also underlying these delusions is a mistrust of experts and the establishment. After all, if a fledgling venture has the right idea then, almost by definition, the establishment must have the wrong idea. As Sam Arbesman pointed out in The Half Life of Facts, what we know to be true changes all the time.

Yet that’s why we need experts. Not to give us answers, but to help us ask better questions. That’s how we can find flaws in our ideas and learn to ask better questions ourselves. Unfortunately, recent evidence suggests that “founder culture” in Silicon Valley has gotten so out of hand that investors no longer ask hard questions for fear of getting cut out of deals.

The time has come for us to retrench, much like Gerstner did a generation ago, and recommit ourselves to competence. Of course, every enterprise needs a vision, but a vision is meaningless without the ability to achieve it. That takes more than a lot of fancy talk; it requires the guts to see the world as it really is and still have the courage to try to change it.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pexels


AI Requires Conversational Intelligence

AI Requires Conversational Intelligence

GUEST POST from Greg Satell

Historically, building technology was about capabilities and features. Engineers and product designers would come up with new things that they thought people wanted, figure out how to make them work and ship “new and improved” products. The result was often things that were maddeningly difficult to use.

That began to change when Don Norman published his classic, The Design of Everyday Things and introduced concepts like dominant design, affordances and natural mapping into industrial design. The book is largely seen as pioneering the user-centered design movement. Today, UX has become a thriving field.

Yet artificial intelligence poses new challenges. We speak or type into an interface and expect machines to respond appropriately. Often they do not. With the popularity of smart speakers like Amazon Alexa and Google Home, we have a dire need for clear principles for human-AI interaction. A few years ago, two researchers at IBM set out to establish them.

The Science Of Conversations

Bob Moore first came across conversation analysis as an undergraduate in the late 1980s, became intensely interested and later earned a PhD based on his work in the field. The central problems are well known to anybody who has ever watched Seinfeld or Curb Your Enthusiasm: our conversations are riddled with complex, unwritten rules that aren’t always obvious.

For example, every conversation has an unstated goal, whether it is just to pass the time, to exchange information or to inspire an emotion. Yet our conversations are also shaped by context: the unwritten rules differ for a conversation between friends, between a boss and a subordinate, in a courtroom or in a doctor’s office.

“What conversation analysis basically tries to reveal are the unwritten rules people follow, bend and break when engaging in conversations,” Moore told me. He soon found that the tech industry was beginning to ask similar questions, so he took a position at Xerox PARC and then Yahoo! before landing at IBM in 2012.

As the company was working to integrate its Watson system with applications from other industries, he began to work with Raphael Arar, an award-winning visual designer and user experience expert. The two began to see that their interests were strangely intertwined and formed a partnership to design better conversations for machines.

Establishing The Rules Of Engagement

Typically, we use natural language interfaces, both voice and text, like a search box. We announce our intention to seek information by saying “Hey Siri” or “Hey Alexa,” followed by a simple query, like “where is the nearest Starbucks?” This can be useful, especially when driving or walking down the street, but it is also fairly limited, especially for more complex tasks.

What’s far more interesting — and potentially far more useful — is being able to use natural language interfaces in conjunction with other interfaces, like a screen. That’s where the marriage of conversation analysis and user experience becomes important, because it will help us build conventions for more complex human-computer interactions.

“We wanted to come up with a clear set of principles for how the various aspects of the interface would relate to each other,” Arar told me. “What happens in the conversation when someone clicks on a button to initiate an action?” What makes this so complex is that different conversations will necessarily have different contexts.

For example, when we search for a restaurant on our phone, should the screen bring up a map, information about pricing, pictures of food, user ratings or some combination? How should the rules change when we are looking for a doctor, a plumber or a travel destination?
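One way to picture such a convention is as an explicit mapping from the conversation’s domain to the components rendered on screen. The sketch below is hypothetical (the domains and component names are placeholders, not IBM’s actual design rules), but it shows how the same spoken query could drive different layouts in different contexts.

```python
# Hypothetical mapping of conversation domain to on-screen components;
# the domains and component names are placeholders for illustration.
DISPLAY_RULES = {
    "restaurant": ["map", "price_range", "photos", "ratings"],
    "doctor":     ["specialty", "accepted_insurance", "ratings", "map"],
    "plumber":    ["availability", "ratings", "price_range"],
    "travel":     ["photos", "map", "seasonal_info", "price_range"],
}

def components_for(domain):
    """Return the screen components to show alongside the spoken reply."""
    return DISPLAY_RULES.get(domain, ["ratings", "map"])

print(components_for("restaurant"))  # ['map', 'price_range', 'photos', 'ratings']
print(components_for("doctor"))      # ['specialty', 'accepted_insurance', 'ratings', 'map']
```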

Deriving Meaning Through Preserving Context

Another aspect of conversations is that they are highly dependent on context, which can shift and evolve over time. For example, if we ask someone for a restaurant nearby, it would be natural for them to ask a question to narrow down the options, such as “what kind of food are you looking for?” If we answer, “Mexican,” we would expect that person to know we are still interested in restaurants, not, say, the Mexican economy or culture.

Another issue is that when we follow a particular logical chain, we often find some disqualifying factor. For instance, a doctor might be looking for a clinical trial for her patient, find one that looks promising but then see that that particular study is closed. Typically, she would have to retrace her steps to go back to find other options.

“A true conversational interface allows us to preserve context across the multiple turns in the interaction,” Moore says. “If we’re successful, the machine will be able to adapt to the user’s level of competence, serving the expert efficiently but also walking the novice through the system, explaining itself as needed.”
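Here is a minimal sketch of that idea, assuming a simple slot-based dialogue state rather than anything the Watson team actually built: the context keeps a running list of constraints, so a one-word follow-up refines the current search, and a dead end can be dropped without starting over.

```python
from dataclasses import dataclass, field

# Minimal sketch of context preservation across turns (an illustration of
# the idea, not IBM's implementation): constraints accumulate as the user
# refines a search and can be popped when a path turns out to be a dead end.

@dataclass
class DialogueContext:
    domain: str
    constraints: list = field(default_factory=list)

    def refine(self, slot, value):
        """A follow-up answer narrows the current search instead of starting a new one."""
        self.constraints.append((slot, value))

    def backtrack(self):
        """Drop the most recent constraint when it leads to a dead end."""
        if self.constraints:
            self.constraints.pop()

    def query(self):
        filters = ", ".join(f"{k}={v}" for k, v in self.constraints)
        return f"search {self.domain}" + (f" [{filters}]" if filters else "")

ctx = DialogueContext(domain="restaurants")
ctx.refine("cuisine", "Mexican")          # "Mexican" still means Mexican restaurants
print(ctx.query())                        # search restaurants [cuisine=Mexican]

ctx = DialogueContext(domain="clinical trials")
ctx.refine("condition", "melanoma")
ctx.refine("trial", "example-trial-id")   # hypothetical identifier
ctx.backtrack()                           # that trial is closed; keep the rest of the context
print(ctx.query())                        # search clinical trials [condition=melanoma]
```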

And that’s the true potential of the ability to initiate more natural conversations with computers. Much like working with humans, the better we are able to communicate, the more value we can get out of our relationships.

Making The Interface Disappear

In the early days of web usability, there was a constant tension between user experience and design. Media designers were striving to be original. User experience engineers, on the other hand, were trying to build conventions. Putting a search box in the upper right hand corner of a web page might not be creative, but that’s where users look to find it.

Yet eventually a productive partnership formed and today most websites seem fairly intuitive. We mostly know where things are supposed to be and can navigate things easily. The challenge now is to build that same type of experience for artificial intelligence, so that our relationships with the technology become more natural and more useful.

“Much like we started to do with user experience for conventional websites two decades ago, we want the user interface to disappear,” Arar says. When we aren’t wrestling with the interface, constantly repeating ourselves or figuring out how to rephrase our questions, we can make our interactions much more efficient and productive.

As Moore put it to me, “Much of the value of systems today is locked in the data and, as we add exabytes to that every year, the potential is truly enormous. However, our ability to derive value from that data is limited by the effectiveness of the user interface. The more we can make the interface become intelligent and largely disappear, the more value we will be able to unlock.”

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay
