Author Archives: Greg Satell

About Greg Satell

Greg Satell is a popular speaker and consultant. His latest book, Cascades: How to Create a Movement That Drives Transformational Change, is available now. Follow his blog at Digital Tonto or on Twitter @Digital Tonto.

Preparing the Next Generation for a Post-Digital Age

GUEST POST from Greg Satell

An education is supposed to prepare you for the future. Traditionally, that meant learning certain facts and skills, like when Columbus discovered America or how to do long division. Today, curricula have shifted to reflect a more global and digital world, covering subjects like cultural history, basic computer skills and writing code.

Yet the challenges our kids will face will be much different from those we faced growing up, and many of the things a typical student learns in school today will no longer be relevant by the time he or she graduates college. In fact, a study at the University of Oxford found that 47% of today’s jobs are at risk of being eliminated over the next 20 years.

In 10 or 20 years, much of what we “know” about the world will no longer be true. The computers of the future will not be digital. Software code itself is disappearing, or at least becoming far less relevant. Many of what are considered good jobs today will be either automated or devalued. We need to rethink how we prepare our kids for the world to come.

Understanding Systems

The subjects we learned in school were mostly static. 2+2 always equaled 4 and Columbus always discovered America in 1492. Interpretations may have differed from place to place and evolved over time, but we were taught that the world was based on certain facts and we were evaluated on the basis of knowing them.

Yet as the complexity theorist Sam Arbesman has pointed out, facts have a half-life and, as the accumulation of knowledge accelerates, those half-lives are shrinking. For example, when we learned computer programming in school, it was usually in BASIC, a now mostly defunct language. Today, Python is the most popular language, but it may well not be a decade from now.
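
Arbesman’s metaphor can be made concrete with a little arithmetic. A minimal sketch, with the caveat that the 45-year half-life used below is an illustrative assumption, not a figure from this article: if a field’s knowledge decays with a fixed half-life, the share of today’s facts still considered true after a given number of years follows a simple exponential.

```python
def fraction_still_valid(years: float, half_life: float) -> float:
    """Share of today's facts still considered true after `years`,
    assuming knowledge decays exponentially with the given half-life."""
    return 0.5 ** (years / half_life)

# With an assumed 45-year half-life, roughly a quarter of what a student
# learns would be obsolete 20 years after graduation:
print(round(fraction_still_valid(20, 45), 2))  # -> 0.73
```

The exact numbers matter less than the shape of the curve: when half-lives shrink, the fraction of schooling that survives to mid-career drops quickly.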

Computers themselves will be very different as well, based less on the digital code of ones and zeros and more on quantum laws and the human brain. We will likely store less information on silicon and more in DNA. There’s no way to teach kids how these things will work because nobody, not even experts, is quite sure yet.

So kids need to learn less about how things work today and more about the systems future technologies will be based on, such as quantum mechanics, genetics and the logic of code. One thing economists have consistently found is that it is routine jobs that are most likely to be automated. The best way to prepare for the future is to develop the ability to learn and adapt.

Applying Empathy And Design Skills

While machines are taking over many high level tasks, such as medical analysis and legal research, there are some things they will never do. For example, a computer will never strike out in a Little League game, have its heart broken or see its child born. So it is very unlikely, if not impossible, that a machine will be able to relate to a human like other humans can.

That absence of empathy makes it hard for machines to design products and processes that will maximize enjoyment and utility for humans. So design skills are likely to be in high demand for decades to come as basic production and analytical processes are increasingly automated.

We’ve already seen this process take place with regard to the Internet. In the early days, it was a very technical field. You had to be a highly skilled engineer to make a website work. Today, however, building a website is something any fairly intelligent high school student can do and much of the value has shifted to front-end tasks, like designing the user experience.

With the rise of artificial intelligence and virtual reality, our experiences with technology will become far more immersive, and that will increase the need for good design. For example, conversational analysts (yes, that’s a real job) are working with designers to create conversational intelligence for voice interfaces and, clearly, virtual reality will be much more design intensive than video ever was.

The Ability To Communicate Complex Ideas

Much of the recent emphasis in education has been around STEM subjects (science, technology, engineering and math) and proficiency in those areas is certainly important for today’s students to understand the world around them. However, many STEM graduates are finding it difficult to find good jobs.

On the other hand, the ability to communicate ideas effectively is becoming a highly prized skill. Consider Amazon, one of the most innovative and technically proficient organizations on the planet. A key factor in its success is its writing culture. The company is so fanatical about the ability to communicate that developing good writing skills is essential to building a successful career there.

Think about Amazon’s business and it becomes clear why. Sure, it employs highly adept engineers, but to create a truly superior product those people need to collaborate closely with designers, marketers, business development executives and others. To coordinate all that activity and keep everybody focused on delivering a specific experience to the customer, communication needs to be clear and coherent.

So while learning technical subjects like math and science is always a good idea, studying things like literature, history and philosophy is just as important.

Collaborating And Working In Teams

Traditionally, school work has been based on individual accomplishment. You were supposed to study at home, come in prepared and take your test without help. If you looked at your friend’s paper, it was called cheating and you got in a lot of trouble for it. We were taught to be accountable for achievements on our own merits.

Yet consider how the nature of work has changed, even in highly technical fields. In 1920, most scientific papers were written by sole authors, but by 1950 that had changed and co-authorship became the norm. Today, the average paper has four times as many authors as it did then and the work being done is far more interdisciplinary and done at greater distances than in the past.

Make no mistake. The high value work today is being done in teams and that will only increase as more jobs become automated. The jobs of the future will not depend as much on knowing facts or crunching numbers, but will involve humans collaborating with other humans to design work for machines. Collaboration will increasingly be a competitive advantage.

That’s why we need to pay attention not just to how our kids work and achieve academically, but how they play, resolve conflicts and make others feel supported and empowered. The truth is that value has shifted from cognitive skills to social skills. As kids will increasingly be able to learn complex subjects through technology, the most important class may well be recess.

Perhaps most of all, we need to be honest with ourselves and make peace with the fact that our kids’ educational experience will not — and should not — mirror our own. The world they will face will be far more complex and more difficult to navigate than anything we could imagine back in the days when Fast Times at Ridgemont High was still popular.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Change the World With a Keystone Change

GUEST POST from Greg Satell

On December 31st, 1929, the Indian National Congress, the foremost nationalist group on the subcontinent, issued a Declaration of Purna Swaraj, or complete independence from British rule. It also announced a campaign of civil disobedience, but no one had any idea what form it should take. That task fell to Mohandas Gandhi.

The Mahatma returned to his ashram to contemplate next steps. After his efforts to organize against the Rowlatt Act a decade earlier ended in disaster, he struggled to find a way forward. As he told a friend at the time, “I am furiously thinking day and night and I do not see a way out of the darkness.”

Finally, he decided he would march for salt, which impressed almost no one. It seemed to be an incredibly inconsequential issue, especially considering what was at stake. Yet what few realized at the time was that he had identified a keystone change that would break the logjam and the British hold on power. Today the Salt March is known as Gandhi’s greatest triumph.

A Tangible And Achievable Goal

One of Gandhi’s biggest challenges was to connect the lofty goals and high-minded rhetoric of the elites who led the Indian National Congress with the concerns of everyday Indians. These destitute masses didn’t much care whether they were ruled by British elites or Indian elites and, to them, abstract concepts like “freedom” and “independence” meant little.

Salt, on the other hand, was something that was tangible for everyone, but especially for the poorest Indians and the British salt laws provided a clear and actionable target. All you had to do to defy them was to boil seawater to produce salt. What at first seemed trivial became a powerful call for mass action.

In my book, Cascades, I found that every successful movement for change, whether it was a corporate turnaround, a social initiative or a political uprising, began with a keystone change like Gandhi’s salt protests. To achieve a grand vision, you always have to start somewhere and the best place to begin is with a clear and achievable goal.

In some cases, as with voting rights in the women’s movement in the 19th century and, more recently, marriage equality for the LGBT movement, identifying a keystone change took decades. In other cases, such as improving worker safety in Paul O’Neill’s turnaround of Alcoa or a campaign to save 100,000 lives in Don Berwick’s quest to improve quality in medical care, the keystone change was part of the initial plan.

Involving Multiple Stakeholders

The concept of Indian independence raised a number of thorny issues, many of which have not been resolved to this day. Tensions between majority Hindus and minority Muslims created suspicions about how power would be structured after British rule. Similarly, coordinating action between caste Hindus and “untouchables” was riddled with difficulty. Christians and Sikhs had their own concerns.

Yet anger about the Salt Laws helped bring all of these disparate groups together. It was clear from the outset that everyone would benefit from a repeal. Also, because participating was easy—again, it was as simple as boiling sea water—little coordination was needed. Most of all, being involved in a collective effort helped to ease tensions somewhat.

Wyeth Pharmaceuticals took a similar approach to its quest to reduce costs by 25% through implementing lean manufacturing methods at its factories. Much like Gandhi, the executives understood that transforming the behaviors of 20,000 employees across 16 large facilities, most of whom were skeptical of the change, was no simple task.

So they started with one process — factory changeovers — and cut the time it took to switch from producing one product to another in half. “That changed assumptions of what was possible,” an advisor who worked on the project told me. “It allowed us to implement metrics, improve collaboration and trained the supervisor to reimagine her perceived role from being a taskmaster that pushed people to work harder to a coach that enables improved performance.”

Breaking Through Higher Thresholds Of Resistance

By now most people are familiar with the diffusion of innovations theory developed by Everett Rogers. A new idea first gains traction among a small group of innovators and early adopters, then later spreads to the mainstream. Some have suggested that early adopters act as “influentials” or “opinion leaders” that spur an idea forward, but that is largely a myth.

What is much closer to the truth is that we all have different thresholds of resistance to a new idea and these thresholds are highly contextual. For example, as a Philadelphia native, I will enthusiastically try out a new cheesesteak place, but have kept the same hairstyle for 30 years. My wife, on the other hand, is much more adventurous with hairstyles than she is with cheesesteaks.

Yet we are all influenced by those around us. So if our friends and neighbors start raving about a cheesesteak, she might give it a try and may even tell people about it. Or, as network theory pioneer Duncan Watts explained to me, an idea propagates through “easily influenced people influencing other easily influenced people.”

That’s how transformative ideas gain momentum and it’s easy to see how a keystone change can help move the process along. By starting out with a tangible goal, such as protesting the salt tax or reducing changeover time at a single factory, you can focus your efforts on people who have lower thresholds of resistance and they, in turn, can help the idea spread to others who are more reticent.
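
The dynamic Watts describes can be sketched with Mark Granovetter’s classic threshold model. This is an illustrative toy, not something from the article: each person joins once the number of people already on board meets their personal resistance threshold, so a single low-threshold early adopter can tip an entire crowd, or the cascade can stall immediately if one link in the chain is missing.

```python
def cascade_size(thresholds):
    """Granovetter threshold model: a person joins once the number of
    people already on board meets their personal resistance threshold.
    Returns how many people end up adopting."""
    remaining = sorted(thresholds)
    adopters = 0
    changed = True
    while changed:
        changed = False
        still_out = []
        for t in remaining:
            if t <= adopters:   # enough people have joined to win this person over
                adopters += 1
                changed = True
            else:
                still_out.append(t)
        remaining = still_out
    return adopters

# Thresholds 0, 1, 2, ..., 99: each adoption tips the next person, so everyone joins.
print(cascade_size(list(range(100))))                 # -> 100
# Bump the single threshold-1 person to 2 and the chain breaks after one adopter.
print(cascade_size([0, 2] + list(range(2, 100))))     # -> 1
```

The striking part is that the two crowds are nearly identical person by person; what differs is whether the low-threshold people are positioned to keep the chain unbroken, which is why starting with the least resistant audience matters so much.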

Paving The Way For Future Change

Perhaps most importantly, a keystone change paves the way for larger changes later on. Gandhi’s Salt March showed that the British Raj could be defied. Voting rights for women and, later, blacks, allowed them to leverage their newfound power at the polls. Reducing changeover time showed how similar results could be achieved in other facets of manufacturing. The 100,000 lives campaign helped spur a quality movement in healthcare.

None of these things happened all at once, but achieving a keystone change showed what was possible, attracted early adopters to the cause and helped give them a basis for convincing others that even more could be achieved. As one of Gandhi’s followers remarked, before the Salt March, the British “were all sahibs and we were obeying. No more after that.”

Another benefit of a keystone change is that it is much less likely to provoke a backlash than a wider, sweeping vision. One of the reasons the Salt March was successful is that the British didn’t actually gain that much revenue from the tax on salt, so they were slow to react to it. The 100,000 lives campaign involved only six relatively easy-to-implement procedures, rather than pushing hospitals to pursue wholesale change all at once.

So while it’s important to dream big and have lofty goals, the first step is always a keystone change. That’s how you first build a sense of shared purpose and provide a platform from which a movement for change can spread. Before the Salt March, Gandhi was considered by many to be a Hindu nationalist. It was only afterward that he truly became an inspiration to all Indian people and many others around the world.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pexels


Why Quiet Geniuses Excel at Breakthroughs

GUEST POST from Greg Satell

When you think of breakthrough innovation, someone like Steve Jobs, Jeff Bezos or Elon Musk often comes to mind. Charismatic and often temperamental, people like these seem to have a knack for creating the next big thing and building great businesses on top of it. They change the world in ways that few can.

Yet what often goes unnoticed is that great entrepreneurs build their empires on the discoveries of others. Steve Jobs didn’t invent the computer or the mobile phone any more than Jeff Bezos discovered e-commerce or Elon Musk dreamed up electric cars. Those things were created by scientists and engineers who came long before.

In researching my book, Mapping Innovation, I got to know many who truly helped create the future, and I found them to be different from most people, but not in a way that you’d expect. While all were smart and hardworking, the most common trait among them was their quiet generosity, and that can teach us a lot about how innovation really works.

How Jim Allison Figured it All Out

At least in appearance, Jim Allison is a far cry from how you would normally picture a genius. Often disheveled, with a scruffy beard, he kind of mumbles out a slow Texas drawl that belies his amazingly quick mind. Unassuming almost to a fault, when I asked him about his accomplishments he just said, “well, I always did like figuring things out.”

When Jim was finishing up graduate school, scientists had just discovered T-cells and he told me that he was fascinated by how these things could zip around your body and kill things for you, but not actually hurt you. The thing was, nobody had the faintest idea how it all worked. So Jim decided to become an immunologist and devote his life to figuring it all out.

Over the next few decades, he and his colleagues at other labs did indeed do much to figure it out. They found one receptor, called B-7, which acts like an ignition switch that initiates the immune response, another, CD-28, that acts like a gas pedal and revs things up into high gear and a third, called CTLA-4, that puts on the brakes so things don’t spin out of control.

Jim played a part in all of this, but his big breakthrough came from the work of another scientist in his lab, which made him suspect that the problem with cancer wasn’t that our immune system can’t fight it, but that it puts the brakes on too soon. He thought that if he could devise a way to pull those brakes off, we could cure cancer in a new and different way.

As it turned out, Jim was right. Today, cancer immunotherapy has become a major field unto itself and, in October 2018, he won the Nobel Prize for his discovery of it. Yet the truth is that it wasn’t one major breakthrough, but a decades-long process of slowly putting the pieces together that made it all possible.

How Gary Starkweather Went From Blowup To Breakthrough

Gary Starkweather is every bit as quiet and unassuming as Jim Allison. Yet when I talked to him a few years ago, I could still hear the anger in his voice as he told me about an incident that happened almost 50 years before. In the late 60s, Gary had an idea to invent a new kind of printer, but his boss at Xerox was thwarting his efforts.

At the time, Gary was one of the few experts in the emerging field of laser optics, so there weren’t many others who could understand his work, much less how it could be applied to the still obscure field of computers. His boss was, in fact, so hostile to Gary’s project that he threatened to fire anyone who worked with him on it.

Furious, the normally mild mannered Gary went over his boss’s head. He walked into the Senior Vice President’s office and threatened, “Do you want me to do this for you or for someone else?” For the stuffy, hierarchical culture of Xerox, it was outrageous behavior, but as luck would have it, the stunt paid off. News of Gary’s work made it across the country to the fledgling computer lab that Xerox had recently established in California, the Palo Alto Research Center (PARC).

Gary thrived in the freewheeling, collaborative culture at PARC. The researchers there had developed a graphical technology called bitmapping, but had no way to print the images out until he showed up. His development of the laser printer was not only a breakthrough in its own right, but with the decline of Xerox’s copier business, it actually saved the company.

The Wild Ideas Of Charlie Bennett

Charlie Bennett is one of those unusual minds that amazes everyone he meets. He told me that when he was growing up in the quiet Westchester village of Croton-on-Hudson he was a “geek before geeks were cool.” While the other kids were playing sports and trading baseball cards, what really inspired Charlie was Watson and Crick’s discovery of the structure of DNA.

So he went to college and majored in biochemistry and then went on to Harvard to do his graduate work, where he served as James Watson’s teaching assistant. Yet it was an elective course he took on the theory of computation that would change his fate. That’s where he first encountered the concept of a Turing Machine and he was amazed how similar it was to DNA.

So Charlie never became a geneticist, but went to work for IBM as a research scientist. It proved to be just the kind of place where a mind like his could run free, discussing wild ideas like quantum cryptography with colleagues around the globe. It was one of those discussions, with Gilles Brassard, that led to his major breakthrough.

What the two discussed was the wildest idea yet. They proposed to transfer information using entangled photons, exploiting what Einstein had derisively called “spooky action at a distance” and was adamant couldn’t happen. Yet the two put a team together and, in 1993, worked out the quantum teleportation protocol.

That, in turn, led Charlie just a few months later to write down his four laws of quantum information, which formed the basis for IBM’s quantum computing program. Today, in his eighties, Charlie is semi-retired, but still goes into the labs at IBM research to quietly discuss wild ideas with the younger scientists, such as the quantum internet that’s continuing to emerge now.

For Innovation, Generosity Is A Competitive Advantage

My conversations with Jim, Gary, Charlie and many others made an impression on me. They were all giants in their fields (although Jim hadn’t won his Nobel yet) and I was a bit intimidated talking to them. Yet I found them to be some of the kindest, most generous people I ever met. Often, they seemed as interested in me as I was in them.

In fact, the behavior was so consistent that I figured it couldn’t be an accident. So I researched the matter further and found a number of studies that helped explain it. One, at Bell Labs, found that star engineers had a knack for “knowing who knows.” Another, at the design firm IDEO, found that great innovators essentially act as “knowledge brokers.”

A third study helps explain why knowledge brokering is so important. Analyzing 17.9 million papers, the researchers found that the most highly cited work tended to be mostly rooted within a traditional field, with just a smidgen of insight taken from some unconventional place. Breakthrough creativity occurs at the nexus of conventionality and novelty.

So as it turns out, generosity is often a competitive advantage for innovators. By actively sharing their ideas, they build up larger networks of people willing to share with them. That makes it that much more likely that they will come across that random piece of information and insight that will help them crack a really tough problem.

So if you want to find a truly great innovator, don’t look for the ones that make the biggest headlines or that are most inspiring on stage. Look for those who spend their time a bit off to the side, sharing ideas, supporting others and quietly pursuing a path that few others are even aware of.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Value Doesn’t Disappear

It Shifts From One Place to Another

GUEST POST from Greg Satell

A few years ago, I published an article about no-code software platforms, which was very well received. Before long, however, I began to get angry — and sometimes downright nasty — comments from software engineers who were horrified by the notion that you can produce software without actually understanding the code behind it.

Of course, no-code platforms don’t obviate the need for software engineers, but rather automate basic tasks so that amateurs can design applications by themselves. These platforms are, necessarily, limited but can increase productivity dramatically and help line managers customize technology to fit the task at hand.

Similarly, when FORTRAN, the first real computer language, was invented, many who wrote machine code objected, much like the software engineers did to my article. Yet FORTRAN didn’t destroy computer programming; it democratized and expanded it. The truth is that value never disappears. It just shifts to another place, and that’s what we need to learn to focus on.

Why Robots Aren’t Taking Our Jobs

Ever since the financial crisis we’ve been hearing about robots taking our jobs. Yet just the opposite seems to be happening. In fact, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive. So what’s going on?

The fact is that automation doesn’t actually replace jobs, it replaces tasks. To understand how this works, think about the last time you walked into a highly automated Apple store, which actually employs more people than a typical retail location of the same size. They aren’t there to ring up your purchase any faster, but to do all the things that a machine can’t do, like answer your questions and solve your problems.

A few years ago I came across an even starker example when I asked Vijay Mehta, Chief Innovation Officer for Consumer Information Services at Experian, about the effect that shifting to the cloud had on his firm’s business. The first-order effect was simple: the company needed far fewer technicians to manage its infrastructure, and those people could easily have been laid off.

Yet they weren’t. Instead, Experian shifted much of that talent and expertise to creating new services for its customers. One of these, a cloud-enabled “data on demand” platform called Ascend, has since become one of the $4 billion company’s most profitable products.

Now think of what would have happened if Experian had merely seen cloud technology as an opportunity to cut costs. Sure, it would have fattened its profit margins temporarily, but as its competitors moved to the cloud that advantage would soon have eroded and, without new products, its business would have declined.

The Outsourcing Dilemma

Another source of disruption in the job market has been outsourcing. While no one seemed to notice when large multinational corporations were outsourcing blue-collar jobs to low cost countries, now so-called “gig economy” sites like Upwork and Fiverr are doing the same thing for white collar professionals like graphic designers and web developers.

So you would expect to see a high degree of unemployment for those job categories, right? Actually, no. The Bureau of Labor Statistics expects demand for graphic designers to increase 4% by 2026 and for web developers to increase 15%. The site Mashable recently named web development as one of 8 skills you need to get hired in today’s economy.

It’s not hard to see why. While it is true that a skilled professional in a low-cost country can do small projects of the same caliber as those in high cost countries, those tasks do not constitute a whole job. For large, important projects, professionals must collaborate closely to solve complex problems. It’s hard to do that through text messages on a website.

So while it’s true that many tasks are being outsourced, the number of jobs has actually increased. Just like with automation, outsourcing doesn’t make value disappear, but shifts it somewhere else.

The Social Impact

None of this is to say that the effects of technology and globalization haven’t been real. While it’s fine to speak analytically about value shifting here and there, if a task you spent years learning to do well becomes devalued, you take it hard. Economists have also found evidence that disruptions in the job market have contributed to political polarization.

The most obvious thing to do is retrain workers who have been displaced, but it turns out that’s not so simple. In Janesville, a book which chronicles a small town’s struggle to recover from the closing of a GM plant, author Amy Goldstein found that the workers who sought retraining actually did worse than those who didn’t.

When someone loses their job, they don’t need training. They need another job, and removing themselves from the job market to take training courses can have serious costs. Work relationships begin to decay and there is no guarantee that the new skills they learn will be in any more demand than the old ones they already had.

In fact, Peter Cappelli at the Wharton School argues that the entire notion of a skills gap in America is largely a myth. One reason there is such a mismatch between the rhetoric about skills and the data is that the most effective training often comes on the job from an employer. It is augmenting skills, not replacing them, that creates value.

At the same time, increased complexity in the economy is making collaboration more important, so often the most important skills workers need to learn are soft skills, like writing, listening and being a better team player.

You Can’t Compete With A Robot By Acting Like One

The future is always hard to predict. While it was easy to see that Amazon posed a real problem for large chain bookstores like Barnes & Noble and Borders, it was much less obvious that small independent bookstores would thrive. In much the same way, few saw that, ten years after the launch of the Kindle, paper books would surge amid a decline in e-books.

The one overriding trend over the past 50 years or so is that the future is always more human. In Dan Schawbel’s recent book, Back to Human, the author finds that the antidote for our overly automated age is deeper personal relationships. Things like trust, empathy and caring can’t be automated or outsourced.

There are some things a machine will never do. It will never strike out in a Little League game, have its heart broken or see its child born. That makes it hard — impossible, really — for a machine ever to work effectively with humans as a real person would. The work of humans is increasingly to work with other humans to design work for machines.

That’s why perhaps the biggest shift in value is from cognitive to social skills. The high-paying jobs today have less to do with the ability to retain facts or manipulate numbers (we now use computers for those things) and more to do with deep collaboration, teamwork and emotional intelligence.

So while even the most technically inept line manager can now easily produce an application that would once have required a highly skilled software engineer, to design the next generation of technology we need engineers and line managers to work more closely together.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Trust as a Competitive Advantage

GUEST POST from Greg Satell

One of the most rewarding things about writing my book Mapping Innovation was talking to the innovators themselves. All of them were prominent (one recently won the Nobel Prize), but I found them to be among the kindest and most generous people you can imagine, nothing like the difficult and mercurial stereotype.

At first, this may seem counterintuitive, because any significant innovation takes ambition, drive and persistence. Yet a study at the design firm IDEO sheds some light. It found that great innovators are essentially knowledge brokers who place themselves at the center of information networks. To do that, you need to build trust.

A report from Accenture Strategy analyzing over 7,000 firms found this effect to be even more widespread than I had thought. When evaluating competitive agility, it found trust “disproportionately impacts revenue and EBITDA.” The truth is that to compete effectively you need to build deep bonds of trust throughout a complex ecosystem of stakeholders.

From Value Chain To Value Ecosystem

In Michael Porter’s landmark book, Competitive Advantage, the Harvard professor argued that the key to long-term success was to dominate the value chain by maximizing bargaining power among suppliers, customers, new market entrants and substitute goods. The goal was to create a sustainable competitive advantage your rivals couldn’t hope to match.

Many of the great enterprises of the 20th century were built along those lines. Firms like General Motors under Alfred Sloan and IBM under Thomas J. Watson (and later, his son Thomas Watson Jr.) so thoroughly dominated the value chains in their respective industries that they were able to maintain leading positions for decades.

Clearly, much has changed since Porter wrote his book nearly 40 years ago. Today, we live in a networked world and competitive advantage is no longer the sum of all efficiencies, but the sum of all connections. Strategy, therefore, must be focused on widening and deepening links to resources outside the firm.

So you can see why trust has taken on greater importance. Today, firms like General Motors and IBM need to manage a complex ecosystem of partners, suppliers, investors and customer relationships and these depend on trust. If one link is broken anywhere in the ecosystem, the others will weaken too and business will suffer.

The Cost Of A Trust Event

The study was not originally designed to measure the effect of trust specifically, but overall competitive agility. It looked at revenue growth and profitability over time and then incorporated metrics measuring Sustainability and Trust to get a larger picture of a firm’s ability to compete.

The Accenture Strategy analysis is wide ranging, incorporating over 4 million data points. It also included Arabesque’s S-Ray data from over 50,000 sources to come up with a quantitative score and rate companies on their sustainability practices, as well as a proprietary measurement of trust across customers, employees, investors, suppliers, analysts, and the media.

Yet when the analysts began to examine the data, they found that the trust metrics disproportionately affected the overall score. For example, a consumer focused company that had a sustainability-oriented publicity event backfire lost an estimated $400 million in future revenues. Another company that was named in a money laundering scandal lost $1 billion.

All too often, acting expediently is seen as being pragmatic, because cutting corners can save you money up front. Yet what the report makes clear is that companies today need to start taking trust more seriously. In today’s voraciously competitive environment, taking a major hit of any kind can hamstring operations for years and sometimes permanently.

Where Trust Hits The Hardest

When the issues of trust come up, we immediately think about consumers. With social media increasing the velocity of information, even a seemingly minor incident can go viral, causing widespread outrage. That kind of thing can send customers flocking to competitors.

Yet as I dug into the report’s data more deeply, I found that the effect varied widely by industry. For example, in manufacturing, media and insurance, the cost of a trust incident was fairly low, but in industries such as banking, retail and industrial services, the impact could be five to ten times higher.

What seems to make the difference is that industries that are most sensitive to a trust event have more complex ecosystems. For example, a retail operation needs to maintain strong relationships with hundreds and sometimes thousands of suppliers. Banking, on the other hand, is highly sensitive to the cost of capital. A drop in trust can send costs surging.

Further, in industries like high tech and industrial services, companies need to stay on the cutting edge to compete. That requires highly collaborative partnerships with other companies to share knowledge and expertise. Once trust is lost, it’s devilishly hard to earn back and competitors gain an edge.

Building Resiliency

The trust problem is amazingly widespread. Accenture found that 54% of firms in the study experienced some kind of trust event and these can come from anywhere: a careless employee, a data breach, a defective product, etc. Yet Jessica Long, one of the Accenture Strategy Managing Directors who led the study, told me that a company can improve its resiliency significantly.

“It’s not so much a matter of preventing a trust event,” she says. “The world is a messy place and things happen. The real difference is how you respond and the resiliency you’ve built up through forging strong foundations in the crucial components of competitive agility: growth, profitability, sustainability and trust.”

Think about Steve Jobs and Apple, which encountered a number of trust events during his tenure. However, because he so clearly demonstrated his commitment to “insanely great” products, customers, employees and partners were more forgiving than they would be with another company. Or, more recently, the scandal when two men were arrested at a Starbucks store. Because Howard Schultz has built a reputation for fairness and because he acted decisively, the impact was far less than it could have been.

Perhaps most crucial is to build a culture of empathy. One of the things that most surprised me about the innovators I researched for my book is that many seemed almost as interested in me and my project as I was in them. I could see how others would want to work with them and share information and insights. It was that kind of access that led them to solve problems no one else could.

What the Accenture report shows is that the same thing is true for profit seeking companies. The best strategy to build trust is to actually be trustworthy. Think about how your actions affect customers, employees, partners and other stakeholders and treat their success as you would your own.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Don’t Blame Technology When Innovation Goes Wrong

GUEST POST from Greg Satell

When I speak at conferences, I’ve noticed that people are increasingly asking me about the unintended consequences of technological advance. As our technology becomes almost unimaginably powerful, there is growing apprehension and fear that we will be unable to control what we create.

This, of course, isn’t anything new. When trains first appeared, many worried that human bodies would melt at the high speeds. In ancient Greece, Plato argued that the invention of writing would destroy conversation. None of these things ever came to pass, of course, but clearly technology has changed the world for good and bad.

The truth is that we can't fully control technology any more than we can fully control nature or each other. The emergence of significant new technologies unleashes forces we can't hope to understand at the outset and struggle to deal with long after. Yet the most significant issues are most likely to be social in nature and those are the ones we desperately need to focus on.

The Frankenstein Archetype

It’s no accident that Mary Shelley’s novel Frankenstein was published at roughly the same time as the Luddite movement was in full swing. As cottage industries were replaced by smoke belching factories, the sense that man’s creations could turn against him was palpable and the gruesome tale, considered by many to be the first true work of science fiction, touched a nerve.

In many ways, trepidation about technology can be healthy. Concern about industrialization led to social policies that helped mitigate its worst effects. In much the same way, scientists concerned about the threat of nuclear Armageddon did much to help establish policies that would prevent it.

Yet the initial fears almost always prove to be unfounded. While the Luddites burned mills and smashed machines to prevent their economic disenfranchisement, the industrial age led to a rise in the living standards of working people. In a similar vein, the development of more advanced weapons has coincided with a decline in violent deaths throughout history.

On the other hand, the most challenging aspects of technological advance are often things that we do not expect. While industrialization led to rising incomes, it also led to climate change, something neither the fears of the Luddites nor the creative brilliance of Shelley could have ever conceived of.

The New Frankensteins

Today, the technologies we create will shape the world as never before. Artificially intelligent systems are automating not only physical, but cognitive labor. Gene editing techniques, such as CRISPR, are enabling us to re-engineer life itself. Digital and social media have reshaped human discourse.

So it’s not surprising that there are newfound fears about where it’s all going. A study at Oxford found that 47% of US jobs are at risk of being automated over the next 20 years. The speed and ease of gene editing raises the possibility of biohackers wreaking havoc and the rise of social media has coincided with a disturbing rise of authoritarianism around the globe.

Yet I suspect these fears are mostly misplaced. Instead of massive unemployment, we find ourselves in a labor shortage. While it is true that biohacking is a real possibility, our increased ability to cure disease will most probably greatly exceed the threat. The increased velocity of information also allows good ideas to travel faster and farther.

On the other hand, these technologies will undoubtedly unleash new challenges that we are only beginning to understand. Artificial intelligence raises disturbing questions about what it means to be human, just as the power of genomics will force us to grapple with questions about the nature of the individual and social media forces us to define the meaning of truth.

Revealing And Building

Clearly, Shelley and the Luddites were very different. While Shelley was an aristocratic intellectual, the Luddites were working class weavers. Yet both saw the rise of technology as the end to a way of life and, in that way, both were right. Technology, if nothing else, forces us to adapt, often in ways we don't expect.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil is also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

Yet in another essay, Building Dwelling Thinking, he explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. The revealing power of technology forces us to rethink old truths and re-imagine new societal norms. That, more than anything else, is where the challenges lie.

Learning To Ask The Hard Questions

We are now nearing the end of the digital age and entering a new era of innovation which will likely be more impactful than anything we’ve seen since the rise of electricity and internal combustion a century ago. This, in turn, will initiate a new cycle of revealing and building that will be as challenging as anything humanity has ever faced.

So while it is unlikely that we will ever face a robot uprising, artificial intelligence does pose a number of troubling questions. Should safety systems in a car prioritize the life of a passenger or a pedestrian? Who is accountable for the decisions an automated system makes? We worry about who is teaching our children, but scarcely stop to think about who is training our algorithms.

These are all questions that need answers within the next decade. Beyond that, we will have further quandaries to unravel, such as what is the nature of work and how do we value it? How should we deal with the rising inequality that automation creates? Who should benefit from technological breakthroughs?

The unintended consequences of technology have less to do with the relationship between us and our inventions than with the relationship between us and each other. Every technological shift brings about a societal shift that reshapes values and norms. Clearly, we are not helpless, but we are responsible. These are very difficult questions and we need to start asking them. Only then can we begin the cycle of revealing truths and building a better future.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Transformation is Human Not Digital

GUEST POST from Greg Satell

A decade ago, many still questioned the relevance of digital technology. While Internet penetration was already significant, e-commerce made up less than 6% of retail sales. Mobile and cloud computing were just getting started and artificial intelligence was still more science fiction than reality.

Yet today, all of those things are not only viable technologies, but increasingly key to effectively competing in the marketplace. Unfortunately, implementing these new technologies can be a thorny process. In fact, research by McKinsey found that fewer than one third of digital transformation efforts succeed.

For the most part, these failures have less to do with technology and more to do with managing the cultural and organizational challenges that a technological shift creates. It’s relatively easy to find a vendor that can implement a system for you, but much harder to prepare your organization to adapt to new technology. Here’s what you need to keep in mind:

Start With Business Objectives

Probably the most common trap that organizations fall into is focusing on technology rather than on specific business objectives. All too often, firms seek to “move to the cloud” or “develop AI capabilities.” That’s a sure sign you’re headed down the wrong path.

“The first question you have to ask is what business outcome you are trying to drive,” Roman Stanek, CEO at GoodData, told me. “Projects start by trying to implement a particular technical approach and not surprisingly, front-line managers and employees don’t find it useful. There’s no real adoption and no ROI.”

So start by asking yourself business related questions, such as “How could we better serve our customers through faster, more flexible technology?” or “How could artificial intelligence transform our business?” Once you understand your business goals, you can work your way back to the technology decisions.

Automate The Most Tedious Tasks First

Technological change often inspires fear. One of the most basic mistakes many firms make is to use new technology to replace humans and save costs rather than to augment and empower them to improve performance and deliver added value. This not only kills employee morale and slows adoption, it usually delivers worse results.

A much better approach is to use technology to improve the effectiveness of human employees. For example, one study cited by a White House report during the Obama Administration found that while machines had a 7.5% error rate in reading radiology images and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.

The best way to do this is to start with the most boring and tedious tasks first. Those are what humans are worst at. Machines don’t get bored or tired. Humans, on the other hand, thrive on interaction and like to solve problems. So instead of looking to replace workers, look instead to make them more productive.

Perhaps most importantly, this approach can actually improve morale. Factory workers actively collaborate with robots they program themselves to do low-level tasks. In some cases, soldiers build such strong ties with robots that do dangerous jobs that they hold funerals for them when they “die.”

Shift Your Organization And Your Business Model

Another common mistake is to think that you can make a major technological shift and keep the rest of your business intact. For example, shifting to the cloud can save on infrastructure costs, but the benefits won’t last long if you don’t figure out how to redeploy those resources in some productive way.

For example, when I talked to Barry Libenson, Global CIO of the data giant Experian, about his company's shift to the cloud, he told me that "The organizational changes were pretty enormous. We had to physically reconfigure how people were organized. We also needed different skill sets in different places so that required more changes and so on."

The shift to the cloud made Experian more agile, but more importantly it opened up new business opportunities. It allowed the company to create Ascend, a "data on demand" platform that lets its customers make credit decisions based on near real-time data, which is now its fastest growing business.

“All of the shifts we made were focused on opening up new markets and serving our customers better,” Libenson says, and that’s what helped make the technological shift so successful. Because it was focused on business results, it was that much easier to get everybody behind it, gain momentum and create a true transformation.

Humans Collaborating With Machines

Consider how different work was 20 years ago, when Windows 95 was still relatively new and only a minority of executives regularly used programs like Word, Excel and PowerPoint. We largely communicated by phone and memos typed up by secretaries. Data analysis was something you did with a pencil, paper and a desk calculator.

Clearly, the nature of work has changed. We spend far less time quietly working away at our desks and far more interacting with others. Much of the value has shifted from cognitive skills to social skills as collaboration increasingly becomes a competitive advantage. In the future, we can only expect these trends to strengthen and accelerate.

To understand what we can expect, look at what's happened in the banking industry. When automatic teller machines first appeared in the early 1970s, most people thought they would lead to fewer branches and tellers, but actually just the opposite happened. Today, more than twice as many bank tellers are employed as in the 1970s, because they do things that machines can't do, like solve unusual problems, show empathy and up-sell.

That’s why we need to treat any technological transformation as a human transformation. The high value work of the future will involve humans collaborating with other humans to design work for machines. Get the human part right and the technology will take care of itself.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Dall-E via Bing


Why More Women Are Needed in Innovation

GUEST POST from Greg Satell

Every once in a while I get a comment from an audience member after a keynote speech or from someone who read my book, Mapping Innovation, about why so few women are included. Embarrassed, I try to explain that, as in many male dominated fields, women are woefully underrepresented in science and technology.

This has nothing to do with innate ability. In fact, you don’t have to look far to find women at the very apex of innovation, such as Jennifer Doudna, who pioneered CRISPR or Jocelyn Bell Burnell, who received the Breakthrough Prize for her discovery of pulsars a few years ago. In earlier days, women like Grace Hopper and Marie Curie made outsized impacts.

The preponderance of evidence shows that women can vastly improve innovation efforts, but are often shunted aside. In fact, throughout history, men have taken credit for discoveries that were actually achieved by women. So, while giving women a larger role in innovation would be just and fair, even more importantly it would improve performance.

The Power Of Diversity

Over the past few decades there have been many efforts to increase diversity in organizations. Unfortunately, all too often these are seen more as a matter of political correctness than serious management initiatives. After all, so the thinking goes, why not just pick the best man for the job?

The truth is that there is abundant scientific evidence that diversity improves performance. For example, researchers at the University of Michigan found that diverse groups can solve problems better than a more homogenous team of greater objective ability. Another study that simulated markets showed that ethnic diversity deflated asset bubbles.

While the studies noted above merely simulate diversity in a controlled setting there is also evidence from the real world that diversity produces better outcomes. A McKinsey report that covered 366 public companies in a variety of countries and industries found that those which were more ethnically and gender diverse performed significantly better than others.

The problem is that when you narrow the backgrounds, experiences and outlooks of the people on your team, you are limiting the number of solution spaces that can be explored. At best, you will come up with fewer ideas and at worst, you run the risk of creating an echo chamber where inherent biases are normalized and groupthink sets in.

How Women Improve Performance

While increasing diversity in general increases performance, there is also evidence that women specifically have a major impact. In fact, in one wide ranging study, in which researchers at MIT and Carnegie Mellon sought to identify a general intelligence score for teams, they not only found that teams that included women got better results, but that the higher the proportion of women was, the better the teams did.

At first, the finding seems peculiar, but when you dig deeper it begins to make more sense. The study also found that in the high performing teams members rated well on a test of social sensitivity and took turns when speaking. Perhaps not surprisingly, women do better on these parameters than men do.

Social sensitivity tests ask respondents to infer someone's emotional state by looking at a picture, and women tend to score higher than men. As for taking turns in conversation, there's a reason why we call it "mansplaining" and not "womansplaining." Women usually are better listeners.

The findings of the study are consistent with something I’ve noticed in my innovation research. The best innovators are nothing like the mercurial, aggressive stereotype, but tend to be quiet geniuses. Often they aren’t the kinds of people that are immediately impressive, but those who listen to others and generously share insights.

Changing The Social Dynamic

One of the reasons that women often get overlooked, besides good old-fashioned sexism, is that there are vast misconceptions about what makes someone a good innovator. All too often, we imagine the best innovators to be like Steve Jobs: brash, aggressive and domineering, when actually just the opposite is true.

Make no mistake, great innovators are great collaborators. That's why the research finds that successful teams score high in social sensitivity and take turns talking and listening to each other rather than competing to dominate the conversation. It is never any one idea that solves a difficult problem, but how ideas are combined to arrive at an optimal solution.

So while it is true that these skills are more common in women, men have the capacity to develop them as well. In fact, probably the best way for men to learn them is to have more exposure to women in the workplace. Being exposed to a more collaborative working style can only help.

So besides the moral and just aspects of getting more women into innovation related fields and giving them better access to good, high paying jobs, there is also a practical element as well. Women make teams more productive.

Building The Next Generation

Social researchers have found evidence that the main reason women are less likely to go into STEM fields has more to do with cultural biases than with any innate ability. For example, boys are more encouraged to play with building toys during childhood and develop spatial skills early on, while girls can build the same skills with the same training.

Cultural bias also plays a role in the amount of encouragement young students get. STEM subjects can be challenging, and studies have found that boys often receive more support than girls because of educators’ belief in their innate talent. That’s probably why even girls who have high aptitude for math and science are less likely to choose a STEM major than boys of even lesser ability.

Yet cultural biases can evolve over time and there are a number of programs designed to change attitudes about women and innovation. For example, Girls Who Code provides training and encouragement for young women and UNESCO's TeachHer initiative is designed to provide better educational opportunities.

Perhaps most of all, initiatives like these can create role models and peer support. When young women see people like Jennifer Doudna, Jocelyn Bell Burnell and the star physicist Lisa Randall achieve great things in STEM fields, they'll be more likely to choose a similar path. With more women innovating, we'll all be better off.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Dall-E via Bing


4 Ways to Create Something Truly Original

GUEST POST from Greg Satell

I study innovators for a living. Every year, I interview dozens of men and women who’ve achieved remarkable things. For my own part, I publish about a hundred articles a year and my second book, Cascades, has sold well since coming out five years ago. While my achievements pale in comparison to many of those I interview, many believe my work to be original.

The most destructive myth about creativity is that there are innate traits that allow some people to be creative, while others, who lack these, cannot. The truth is that in decades of research on creativity, nobody has been able to identify any such traits. In my experience, great innovators come in all shapes and sizes.

Still, despite the diversity of original innovators themselves, there are some common principles in how they approach their work and these are things that anyone can apply. That doesn’t mean everyone can be world famous, but the evidence clearly shows that anyone can be creative and, even if it’s not a major breakthrough, make some contribution to the world.

1. Explore

In 2006, Jennifer Doudna got a call from a colleague at the University of California at Berkeley, Jillian Banfield, who she knew only by reputation. Banfield’s area of research interest, obscure bacteria living in extreme conditions, was only tangentially related to Doudna’s work, studying the biochemistry of RNA and other cell structures.

The purpose of the call was to interest Doudna in studying an emerging phenomenon that had recently been discovered in microbiology, a strange sequence of DNA found in bacteria. The function of the sequences was not yet clear, but some early evidence suggested that they might be involved in some kind of immune function, helping bacteria to defend themselves against viruses.

Intrigued, Doudna began to research the sequences, called CRISPR, in her own lab and, in 2012, discovered that they could be used as a powerful new tool for editing genes. Today, CRISPR is creating a revolution in genomics, completely redefining what was considered to be possible in just a few short years.

Many have observed the role of serendipity in innovation, such as in Alexander Fleming’s chance discovery of penicillin. Yet in every case, once you look a little deeper, you find that even the most unexpected discoveries were the product of intense exploration. Like Fleming and penicillin, Doudna wasn’t looking for a gene editing technology, but she was investigating a wide number of phenomena that were previously unexplained.

The first step for innovation is exploration. Not all who wander are lost.

2. Combine

I’m a relentless fact checker. Over the years, I’ve found that even if you’ve done significant research, reading papers and interviewing experts, it’s amazingly easy to get things wildly wrong. I’ve also found that fact checking can lead you to new information you didn’t know existed. So before I publish anything of significance, I always make sure to reach out to someone who can correct my foolishness before it becomes public.

That’s why when I was finishing up Cascades, I reached out to Duncan Watts to look over two chapters on the science of networks, a field which he helped pioneer. As usual, Duncan was gracious and helpful, and pointed me towards a paper of his that I might want to include. He did so somewhat apologetically, not wanting to push his work on me, but observed that since I had largely based both chapters on his work already, it was probably okay.

This was entirely true. Much of the first half of my book is based on Duncan's ideas. What's more, much of the second half of the book is based on insights from my friend Srdja Popović, who trains activists around the world to create revolutionary movements. There are a number of others as well, all of whom shared their wisdom with me.

None of this, of course, was at all original, but the combination is. In fact, the key insight of the book is that Duncan’s mathematical models and the on-the-ground tactics of Srdja and others are intensely related. They can inform each other in ways that both men, who are mostly unfamiliar with each other’s work, had not addressed and, I believe, are important.

3. Refine

I first got interested in Duncan’s work in 2006. I was running a large digital business at the time and, with social networks becoming a powerful force online, I thought that learning some basic concepts of network science would be useful. Much to my surprise, I found that the ideas had a powerful resonance in an unexpected area.

Two years earlier, I had found myself in the middle of the Orange Revolution in Ukraine. What struck me at the time was how nobody seemed to have the first idea what was happening or why — not the journalists I worked with every day, nor the political and business leaders I would meet with regularly. Nobody.

So I was excited to find, in Duncan’s work, a mathematical explanation for many of the seemingly inexplicable things that I had seen and experienced first-hand. Yet still, I had only a faint sense of what I was on to. Sure, there were obvious connections and possibilities, but I had no real framework to make the insights actionable.
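The mathematical models in question are the kind of threshold cascade models Watts is known for, in which a person adopts an idea once enough of their neighbors have. As a purely illustrative sketch (the function name and the tiny example network are my own, not taken from the book or from Watts's papers), the basic mechanism can be simulated in a few lines:

```python
def watts_cascade(neighbors, thresholds, seeds, max_rounds=100):
    """Simulate a simple threshold cascade on a network.

    A node adopts once the fraction of its neighbors who have
    already adopted meets or exceeds its personal threshold.
    """
    active = set(seeds)
    for _ in range(max_rounds):
        newly_active = set()
        for node, nbrs in neighbors.items():
            if node in active or not nbrs:
                continue
            adopting_fraction = sum(n in active for n in nbrs) / len(nbrs)
            if adopting_fraction >= thresholds[node]:
                newly_active.add(node)
        if not newly_active:
            break  # the cascade has stopped spreading
        active |= newly_active
    return active

# Tiny example: five people in a line, each adopting once half
# of their neighbors have. A single seed cascades end to end.
neighbors = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
thresholds = {n: 0.5 for n in neighbors}
print(sorted(watts_cascade(neighbors, thresholds, seeds={0})))  # → [0, 1, 2, 3, 4]
```

The interesting behavior, and what made the models resonate with events like the Orange Revolution, is how sensitive the outcome is: raise the thresholds slightly, or move the seed, and the same network produces no cascade at all.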

That was 12 years ago (and 15 since the Orange Revolution began) and I’ve been working to refine those initial ideas ever since. Over that period, there has been no shortage of blind alleys and wrong turns. Nevertheless, I kept at it and continued to learn. It took over a decade before I was able to pull everything together into something worth publishing.

4. Validate

The connection between Duncan and Srdja’s work wasn’t completely out of the blue. In fact, Duncan had made a short reference to Otpor, the movement which Srdja had helped lead, and its overthrow of Serbian dictator Slobodan Milošević in his book, Six Degrees. Yet there was no guarantee that the significance went any further than that.

So I began to widen my search. I looked at social movements throughout history to see if similar patterns held or whether the Orange Revolution in Ukraine and similar events in Serbia were anomalies. I struck up a working friendship with Srdja, read his book, Blueprint for Revolution, and pored over the training materials on his organization’s website.

Yet to be truly useful, I needed to see if the same concepts could be applied more broadly. So I also researched and spoke to leaders in other fields, such as corporate executives and people who led movements to transform healthcare, education and other institutions. Wherever I could find someone who had created transformational change, I sought them out to learn how they were able to succeed where so many others failed.

What I found was that while there were vast differences among changemakers, they had all eventually arrived at similar principles that made them successful, which I could validate. It took me nearly 15 years, but the journey that began with that initial connection between two vastly different sets of ideas eventually became something I could consider coherent and useful.

In that way, my experience reflects that of many innovators of vastly greater accomplishment whom I research and study. Truly original work doesn’t emerge fully formed from a brainstorm or sudden epiphany. It’s the long years that follow, of combining, refining and validating, that make the difference between an errant idea and something useful.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Why Materials Science is the Most Important Technology of This Decade

GUEST POST from Greg Satell

Think of just about any major challenge we will face over the next decade and materials are at the center of it. To build a new clean energy future, we need more efficient solar panels, wind turbines and batteries. Manufacturers need new materials to create more advanced products. We also need to replace materials subject to supply disruptions, like rare earth elements.

Traditionally, developing new materials has been a slow, painstaking process. To find the properties they’re looking for, researchers would often have to test hundreds — or even thousands — of materials one by one. That made materials research prohibitively expensive for most industries.

Yet today, we’re in the midst of a materials revolution. Scientists are using powerful simulation techniques, as well as machine learning algorithms, to propel innovation forward at blazing speed and even point them toward possibilities they had never considered. Over the next decade, the rapid advancement in materials science will have a massive impact.

The Seeds Of The Materials Revolution

In 2005, Gerd Ceder was a Professor of Materials Science at MIT working on computational methods to predict new materials. Traditionally, materials scientists worked mostly through trial and error, working to identify materials that had properties which would be commercially valuable. Gerd was working to automate that process using sophisticated computer models that simulate the physics of materials.

Things took a turn when an executive at Duracell, then a division of Procter & Gamble, asked if Ceder could use the methods he was developing to explore possibilities on a large scale to discover and design new materials for alkaline batteries. So he put together a team of a half dozen “young guns” and formed a company to execute the vision.

The first project went well and the team was able to patent a number of new materials that hadn’t existed before. Then another company came calling, which led to another project and more after that. Yet despite the initial success, Ceder began to realize that there was a problem. Although the team’s projects were successful, the overall impact was limited.

“We began to realize we’re generating all this valuable data and it’s being locked away in corporate vaults. We wanted to do something in a more public way,” Ceder told me. As luck would have it, it was just then that one of the team members was leaving MIT for family reasons and that chance event would propel the project to new heights.

The Birth Of The Materials Project

In 2008, Kristin Persson’s husband took a job in California, so she left Ceder’s group at MIT and joined Lawrence Berkeley National Laboratory (LBL) as a research scientist. Yet rather than mourn the loss of a key colleague, the team saw the move as an opportunity to shift their work into high gear.

“At MIT, we pretty much hacked everything together,” Ceder explains. “It all worked, but it was a bit buggy and would have never scaled beyond our small team. At a National Lab, however, they had the resources to build it out properly and create a platform that could really drive things forward.” So Persson hit the ground running, got a small grant and stitched together a team to combine the materials work with the high performance supercomputing done at the lab.

“At LBL there were world class computing people,” Persson told me. “So we began an active collaboration with people that were on the cutting edge of computer science, but didn’t know anything about materials and our little band of ‘materials hackers’. It was that interdisciplinary collaboration that was really the secret sauce and helped us gain ground quickly.”

Traditionally, materials science could take a class of alloys for use in, say, the auto industry and calculate properties like weight vs. tensile strength. There might be a few hundred of those materials in the literature. But with the system they built at LBL, they could calculate thousands. That meant engineers could identify candidate materials exponentially faster, test them in the real world and create better products.

Yet again, they felt that the impact of their work was limited. After all, not many engineers from private industry spend time at National Laboratories. “Our earlier work convinced us that we were on the cusp of something much bigger,” Persson remembers. That’s what led them to create The Materials Project, a massive online database that anyone in the world can access.

A Massive Materials Initiative

The Materials Project went online early in 2011 and drew a few thousand people. From there it grew like a virus and today has more than 50,000 users, a number that grows by about 50-100 per day. Yet its impact has become even greater than that. The success of the project caught the attention of Tom Kalil, then Deputy Director at the White House Office of Science and Technology Policy, who saw the potential to create a much wider initiative.

In the summer of 2011, the Obama administration announced the Materials Genome Initiative (MGI) to coordinate work across agencies such as the Department of Energy, NASA and others to expand and complement the work being done at LBL. These efforts, taken together, are creating a revolution in materials science and the impacts are just beginning to be felt by private industry.

The MGI is based on three basic pillars. The first is computational approaches that can accurately predict materials properties, like the ones Gerd Ceder’s team pioneered. The second is high-throughput experimentation to expand materials libraries, and the third is programs that mine existing materials data from the scientific literature and promote the sharing of that data.

For example, one project applied machine learning algorithms to experimental materials data to identify new forms of a super strong alloy called metallic glass. While scientists have long recognized its value as an alternative to steel and as a protective coating, it is so rare that relatively few forms of it were known. Using the new methods, however, researchers were able to perform the work 200 times faster and identify 20,000 candidates in a single year!
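The basic idea behind that kind of project — learn from alloys that have already been tested, then predict which untested compositions are worth a look — can be sketched with a toy nearest-neighbor model. The features, values and labels below are invented for illustration; real metallic-glass models use far richer descriptors and far more data.

```python
import math

# Toy nearest-neighbor screen: each known alloy is described by two
# invented features (think atomic size mismatch and mixing enthalpy)
# and a label saying whether it formed a metallic glass in past tests.
# All numbers here are made up for illustration.
known = [
    ((0.12, -8.0), True),   # formed a metallic glass
    ((0.10, -7.5), True),
    ((0.02, -1.0), False),  # crystallized instead
    ((0.03, -2.0), False),
]

def predict_glass(features):
    """Label a new composition like its closest known neighbor."""
    nearest = min(known, key=lambda kv: math.dist(features, kv[0]))
    return nearest[1]

# Rank untested candidates so lab time goes to likely glass formers.
untested = {"X1": (0.11, -7.8), "X2": (0.025, -1.4)}
promising = [name for name, f in untested.items() if predict_glass(f)]
print(promising)  # X1 sits near the known glass formers
```

The payoff is the ordering, not any single prediction: even a crude model that ranks candidates lets experimentalists test the most promising compositions first instead of working through thousands one by one.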

Creating A True Materials Revolution

Thomas Edison famously remarked that if he tried 10,000 experiments that failed, he didn’t actually consider it a failure, but found 10,000 things that didn’t work. That’s true, but it’s also incredibly tedious, time consuming and expensive. The new methods, however, have the potential to automate those 10,000 failures, which is creating a revolution in materials science.

For example, at the Joint Center for Energy Storage Research (JCESR), a US government initiative to create the next generation of advanced batteries, the major challenge now is not so much to identify potential battery chemistries, but that the materials to make those chemistries work don’t exist yet. Historically, that would have been an insurmountable problem, but not anymore.

“Using high performance computing simulations, materials genomes and other techniques that have been developed over the last decade or so, we can often eliminate as much as 99% of the possibilities that won’t work,” George Crabtree, Director at JCESR told me. “That means we can focus our efforts on the remaining 1% that may have serious potential, and we can advance much farther, much faster for far less money.”
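The screening Crabtree describes can be illustrated with a toy filter: given predicted properties for a pool of candidate materials, discard everything that fails basic thresholds before any lab work begins. The property names and cutoffs below are illustrative assumptions, not JCESR’s actual criteria.

```python
# Toy computational pre-screen: drop candidate materials whose
# *predicted* properties miss minimum requirements, so experiments
# focus only on the survivors. Names and thresholds are invented.
candidates = [
    {"name": "A", "voltage": 3.9, "stability": 0.95, "cost": 12},
    {"name": "B", "voltage": 2.1, "stability": 0.99, "cost": 5},
    {"name": "C", "voltage": 4.2, "stability": 0.40, "cost": 8},
    {"name": "D", "voltage": 3.7, "stability": 0.90, "cost": 60},
]

def passes_screen(m, min_voltage=3.5, min_stability=0.8, max_cost=50):
    """Keep only materials predicted to clear every threshold."""
    return (m["voltage"] >= min_voltage
            and m["stability"] >= min_stability
            and m["cost"] <= max_cost)

survivors = [m["name"] for m in candidates if passes_screen(m)]
print(survivors)  # only "A" clears every bar
```

In this miniature version, three of four candidates are eliminated before anyone touches a beaker — the same logic, scaled to millions of simulated compositions, is what lets researchers rule out the 99% that won’t work.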

The work is also quickly making an impact on industry. Greg Mulholland, President of Citrine Informatics, a firm that applies machine learning to materials development, told me, “We’ve seen a huge broadening of companies and industries that are contacting us and a new sense of urgency. For companies that historically invested in materials research, they want everything yesterday. For others that haven’t, they are racing to get up to speed.”

Jim Warren, a Director at the Materials Genome Initiative, thinks that is just the start. “When you can discover new materials for hundreds of thousands or millions of dollars rather than tens or hundreds of millions, you are going to see a vast expansion of use cases and industries that benefit,” he told me.

As we have learned from the digital revolution, any time you get a 10x improvement in efficiency, you end up with a transformative commercial impact. Just about everybody I’ve talked to working in materials thinks that pace of advancement is easily achievable over the next decade. Welcome to the materials revolution.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Dall-E on Bing
