Agility, the ability to think fast and move quickly, is an imperative for every team this year.
That’s because there’s never been more uncertainty – around technology, the economy, geopolitical turmoil, and just about everything else.
I’ve led teams in both big companies and startups. I’ve built new teams. And I’ve helped fix dysfunctional ones.
There’s no shortage of research on the success factors for creating high-performing teams. In a landmark study, for example, Google identified “psychological safety” as the top characteristic of its most successful teams. Teams with people who feel safe to take risks and be vulnerable with one another showed better results.
Psychological safety is indeed important. Yet we can’t lose sight of another critical success factor for navigating today’s highly uncertain world, especially in 2023: agility.
Three Steps to Strategic Agility
The concept of “agility” in business originated from the field of agile software development. Agile software development involves “sprints” in which teams define short-term goals (typically two weeks), work diligently to achieve them, and then apply what they learned from the sprint to their next sprint’s goals.
Any team can apply the principles of agile software development to create greater overall agility. Whatever your team’s cadence of work, consider using the following approach to structure your work:
Define short-term goals: What do you need to accomplish by the end of your sprint?
Do the work: What work must be done and how will you do it?
Evaluate progress: Based on what you achieved, what did you learn, and what’s the next logical set of short-term goals?
Approach these steps as a repetitive cycle. For example, you might have a project you expect to take three months to complete. Most traditional teams might go through a single cycle — they define their end goal, create a three-month plan, do their work, and then after the three months are up, they reflect on their progress.
If you were to work in two-week sprints during the three-month project, however, you would have roughly six cycles of defining goals, achieving them, and then applying what you learned to make your project even more effective along the way. The agile approach accelerates and leverages continuous learning, which reduces the overall risk of your project.
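The sprint cycle described above can be sketched as a simple loop. This is a minimal illustration only – the `Sprint` class, `run_project` function, and the 12-week duration are my assumptions for the sketch, not anything prescribed by agile methodology:

```python
from dataclasses import dataclass, field

@dataclass
class Sprint:
    goals: list
    learnings: list = field(default_factory=list)

def run_project(initial_goals, total_weeks=12, sprint_weeks=2):
    """Run a project as a series of short sprints, feeding each
    sprint's learnings into the next sprint's goals."""
    sprints = []
    goals = list(initial_goals)
    for start in range(0, total_weeks, sprint_weeks):
        sprint = Sprint(goals=goals)
        # 1. Define short-term goals: already set when the sprint was created.
        # 2. Do the work: placeholder for the team's actual execution.
        # 3. Evaluate progress: capture what was learned this sprint.
        sprint.learnings = [f"learning from weeks {start + 1}-{start + sprint_weeks}"]
        sprints.append(sprint)
        # Apply learnings to define the next sprint's goals.
        goals = [f"goal informed by '{note}'" for note in sprint.learnings]
    return sprints

sprints = run_project(["ship a first prototype"])
print(len(sprints))  # a 12-week project in 2-week sprints yields 6 cycles
```

The key structural point is in the last line of the loop: each sprint’s goals are derived from the previous sprint’s learnings, rather than fixed up front – the opposite of a single three-month plan.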
Instill Agile Mindsets, Abilities, and Know-How Into Your Team
Agility isn’t just adaptability. It’s the ability to think and understand quickly so you can move faster and more easily. It’s a mindset. That’s why it’s important to instill in your team attitudes and beliefs around the importance of flexibility, and to accept that goals and work can, and should, change on a regular basis.
In my latest book, Experiential Intelligence, I highlight the importance of understanding and developing your team’s mindsets, abilities, and know-how.
From the book Experiential Intelligence
For example, consider reinforcing the following mindsets, delivering training to build certain agile abilities, and providing certain tools to help your team apply specific skills as you implement your projects:
Mindsets (attitudes and beliefs)
– Flexibility is a key success factor
– Assumptions always exist but can be tested
– Iteration drives learning and success

Abilities (high-level competencies)
– Learning by doing

Know-how (knowledge and skills)
– Project and task prioritization
– “Five whys” analysis
Agile teams possess mindsets focused on moving quickly and modifying plans on a regular basis. It’s the exact opposite of how many big companies set annual plans and stick to them no matter what. Agile teams go from sprint to sprint, challenging their mindsets and identifying the abilities and know-how necessary to achieve the goals of the following sprint. That is, before they complete a sprint, they’ve already started planning for the next one. Agility becomes a core competency of the team, supported by know-how in agile methodologies and tools.
As we’ve seen over and over, every product, service, and business model eventually gets disrupted. Agility may ultimately be your only source of sustainable competitive advantage.
BONUS: Get a free sample chapter from my latest book Experiential Intelligence – here.
Image credit: Pixabay
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.
For the past 50 years, innovation has largely been driven by our ability to cram more transistors onto a silicon wafer. That’s what’s allowed us to double the power of our technology every two years or so and led to the continuous flow of new products and services streaming out of innovative organizations.
Perhaps not surprisingly, over the past few decades agility has become a defining competitive attribute. Because the fundamentals of digital technology have been so well understood, much of the value has shifted to applications and things like design and user experience. Yet that will change in the years ahead.
Over the next few decades we will struggle to adapt to a post-digital age and we will need to rethink old notions about agility. To win in this new era of innovation we will have to do far more than just move fast and break things. Rather, we will have to manage four profound shifts in the basis of competition that will challenge some of our most deeply held notions.
Shift 1: From Transistor-Based Computers to New Computing Architectures
In 1965, Intel’s Gordon Moore published a paper that established what came to be known as Moore’s Law: the observation that the number of transistors that fit on an integrated circuit doubles roughly every two years. With a constant stream of chips that were not only more powerful but also cheaper, successful firms could rapidly prototype and iterate to speed new applications to market.
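The compounding that Moore’s Law describes can be made concrete with a quick back-of-envelope calculation (the function name and the idealized steady-doubling model are mine, for illustration only – real chip generations were never this regular):

```python
def transistor_factor(years, doubling_period=2):
    """Multiplicative growth after `years` of steady doubling
    every `doubling_period` years (an idealized Moore's Law model)."""
    return 2 ** (years / doubling_period)

# Fifty years of doubling every two years compounds to 2**25,
# roughly a 33-million-fold increase in transistor counts.
print(round(transistor_factor(50)))  # 33554432
```

That 33-million-fold compounding is why the end of the doubling cadence is such a fundamental shift: no amount of clever application design substitutes for it.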
Yet now Moore’s Law is ending. Despite the amazing ingenuity of engineers, the simple reality is that every technology eventually hits theoretical limits. The undeniable fact is that atoms are only so small and the speed of light is only so fast and that limits what we can do with transistors. To advance further, we will simply have to find a different way to compute things.
The two most promising candidates are quantum computing and neuromorphic chips, both of which are vastly different from digital computing: they use different logic and require different programming languages and algorithmic approaches than classical computers. The transition to these architectures won’t be seamless.
We will also use these architectures in much different ways. Quantum computers will be able to handle almost incomprehensible complexity, generating computing spaces larger than the number of atoms in the known universe. Neuromorphic chips are potentially millions of times more efficient than conventional chips and are much more effective with continuous streams of data, so may be well suited for edge computing and tasks like machine vision.
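The “larger than the number of atoms in the known universe” claim checks out with simple arithmetic. The sketch below assumes the common physics estimate of roughly 10^80 atoms in the observable universe, which is not a figure from the article:

```python
import math

# n qubits span a state space of 2**n basis states, so we ask:
# how many qubits index more states than there are atoms (~10**80)?
atoms_in_universe = 10 ** 80
qubits_needed = math.ceil(math.log2(atoms_in_universe))
print(qubits_needed)  # 266
```

In other words, a machine with only a few hundred high-quality qubits already works in a computing space no classical enumeration could ever cover.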
Shift 2: From Bits to Atoms
The 20th century saw two major waves of innovation. The first, dominated by electricity and internal combustion, revolutionized how we could manipulate the physical world. The second, driven by quantum physics, microbial science and computing, transformed how we could work with the microscopic and the virtual.
The past few decades have been dominated by the digital revolution, and it seems like things have been moving very fast, but looks can be deceiving. If you walked into an average 1950s-era household, you would see much that you would recognize, including home appliances, a TV and an automobile. If you had to live in a 1900s-era home, however, with no running water or electricity, you would struggle to survive.
The next era will combine aspects of both waves, essentially using bits to drive atoms. We’re building vast databases of genes and materials, cataloging highly specific aspects of the physical world. We are also using powerful machine learning algorithms to analyze these vast troves of data and derive insights. The revolution underway is so profound that it’s reshaping the scientific method.
In the years to come, new computing architectures are likely to accelerate this process. Simulating chemistry is one of the first applications being explored for quantum computers, which will help us build larger and more detailed databases. Neuromorphic technology will allow us to shift from the cloud to the edge, enabling factories to get much smarter.
The way we interface with the physical world is changing as well. New techniques such as CRISPR help us edit genes at will. There is also an emerging revolution in materials science that will transform areas like energy and manufacturing. These trends are still somewhat nascent, but they have truly transformative potential.
Shift 3: From Rapid Iteration to Exploration
Over the past 30 years, we’ve had the luxury of working with technologies we understand extremely well. Every generation of microchips opened vast new possibilities, but worked exactly the same way as the last generation, creating minimal switching costs. The main challenge was to design applications.
So it shouldn’t be surprising that rapid iteration emerged as a key strategy. When you understand the fundamental technology that underlies a product or service, you can move quickly, trying out nearly endless permutations until you arrive at an optimized solution. That’s often far more effective than a planned, deliberate approach.
Over the next decade or two, however, the challenge will be to advance technology that we don’t understand well at all. As noted above, quantum and neuromorphic computing are still in their nascent stages. Improvements in genomics and materials science are redefining the boundaries of those fields. There are also ethical issues involved with artificial intelligence and genomics that will require us to tread carefully.
So in the future, we will need to put greater emphasis on exploration to understand these new technologies and how they relate to our businesses. Instead of looking to disrupt markets, we will need to pursue grand challenges to solve fundamental problems. Most of all, it’s imperative to start early. By the time many of these technologies hit their stride, it will be too late to catch up.
Shift 4: From Hyper-Competition to Mass Collaboration
The competitive environment we’ve become used to has been relatively simple. For each particular industry, there have been distinct ecosystems based on established fields of expertise. Competing firms raced to transform fairly undifferentiated inputs into highly differentiated products and services. You needed to move fast to get an edge.
This new era, on the other hand, will be one of mass collaboration in which government partners with academia and industry to explore new technologies in the pre-competitive phase. For example, the Joint Center for Energy Storage Research combines the work of five national labs, a dozen or so academic institutions and hundreds of companies to develop advanced batteries. Covid has likewise redefined how scientists collaborate across institutional barriers.
Or consider the Manufacturing Institutes set up under the Obama administration. Focusing on everything from advanced fabrics to biopharmaceuticals, these allow companies to collaborate with government labs and top academics to develop the next generation of technologies. They also operate dozens of testing facilities to help bring new products to market faster.
I’ve visited some of these facilities and have had the opportunity to talk with executives from participating companies. What struck me was the palpable excitement about the possibilities of this new era. For them, agility didn’t mean learning to run faster down a chosen course, but widening and deepening connections throughout a technological ecosystem.
Over the past few decades, we have largely been moving faster and faster down a predetermined path. Over the next few decades, however, we’ll increasingly need to explore multiple domains at once and combine them into something that produces value. We’ll need to learn how to go slower to deliver much larger impacts.
On July 16th, 1945, when the world’s first nuclear explosion shook the plains of New Mexico, the leader of the Manhattan Project, J. Robert Oppenheimer quoted from the Bhagavad Gita, “Now I am become Death, the destroyer of worlds.” Clearly, he was troubled by what he had unleashed and for good reason. The world was never truly the same after that.
Today, however, we have lost much of that reverence for the power of technology. Instead of proceeding deliberately and with caution, tech entrepreneurs have prided themselves on their willingness to “move fast and break things” and, almost reflexively, deride anyone who questions the practice as someone who “doesn’t get it.”
It’s hard to see how, by any tangible metric, any of this has made us better off. We set out to disrupt industries, but disrupted people instead. It wasn’t always like this. Throughout our history we have asked hard questions and made good choices about technological progress. As we enter a new era of innovation, we desperately need to recapture some of that wisdom.
How We Put the Nuclear Genie Back in the Bottle
The story of nuclear weapons didn’t start with Oppenheimer, not by a long shot. In fact, if we were going to attribute the Manhattan Project to a single person, it would probably be a Hungarian immigrant physicist named Leo Szilard, who was one of the first to conceive of the possibility of a nuclear chain reaction.
In 1939, upon hearing of the discovery of nuclear fission in Germany he, along with fellow Hungarian emigre Eugene Wigner, decided that the authorities needed to be warned. Szilard then composed a letter warning of the possibility of a nuclear bomb that was eventually signed by Albert Einstein and sent to President Roosevelt. That’s what led to the American development program.
Yet after the explosions at Hiroshima and Nagasaki, many of the scientists who worked to develop the bomb wanted to educate the public of its dangers. In 1955, the philosopher Bertrand Russell issued a manifesto signed by a number of scientific luminaries. Based on this, a series of conferences at Pugwash, Nova Scotia were convened to discuss different approaches to protect the world from weapons of mass destruction.
These efforts involved far more than talk: they helped shape the non-proliferation agenda and led to concrete achievements such as the Partial Test Ban Treaty. In fact, these contributions were so important that the organizers of the Pugwash conferences were awarded the Nobel Peace Prize in 1995, and the conferences continue even today.
Putting Limits On What We Do With the Code of Life
While the nuclear age started with a bang, the genetic age began with a simple article in the scientific journal Nature, written by two relatively unknown scientists named James Watson and Francis Crick, that described the structure of DNA. It was one of those few watershed moments when an entirely new branch of science arose from a single event.
The field progressed quickly and, roughly 20 years later, a brilliant researcher named Paul Berg discovered that you could merge human DNA with that from other living things, creating new genetic material that didn’t exist in nature. Much like Oppenheimer, Berg understood that, due to his work, humanity stood on a precipice and it wasn’t quite clear where the edge was.
He organized a conference at Asilomar State Beach in California to establish guidelines. Importantly, participation wasn’t limited to scientists. A wide swath of stakeholders were invited, including public officials, members of the media and ethical specialists. The result, now known as the Berg Letter, called for a moratorium on the riskiest experiments until the dangers were better understood. These norms were respected for decades.
Today, we’re undergoing another revolution in genomics and synthetic biology. New technologies, such as CRISPR and mRNA techniques, have opened up incredible possibilities, but also serious dangers. Yet here again, pioneers in the field like Jennifer Doudna are taking the lead in devising sensible guardrails and using the technology responsibly.
The New Economy Meets the New Era of Innovation
When Netscape went public in 1995, it hit like a bombshell. It was the first big Internet stock and, although originally priced at $14 per share, it opened at double that amount and quickly zoomed to $75. By the end of the day, it had settled back at $58.25. Still, a tiny enterprise with no profits was almost instantly worth $2.9 billion.
By the late 1990s, increased computing power combined with the Internet to create a new productivity boom. Many economists hailed the digital age as a “new economy” of increasing returns, in which the old rules no longer applied and a small initial advantage would lead to market dominance.
Yet today, it’s clear that the “new economy” was a mirage. Despite very real advances in processing speed, broadband penetration, artificial intelligence and other things, we seem to be in the midst of a second productivity paradox in which we see digital technology everywhere except in the economic statistics.
The digital revolution has been a real disappointment. In fact, when you look at outcomes, if anything we’re worse off. Rather than a democratized economy, market concentration has markedly increased in most industries. Income inequality in advanced economies has soared. In America wages have stagnated and social mobility has declined for decades. At the same time, social media has been destroying our mental health.
Now we’re entering a new era of innovation, in which we will unleash technologies much more powerful. New computing architectures like quantum and neuromorphic technologies will power things like synthetic biology and materials science to create things that would have seemed like science fiction a generation ago. We simply can no longer afford to be so reckless.
Shifting From Agility Toward Resilience
Moving fast and breaking things only seems like a good idea in a stable world. When you operate in a safe environment, it’s okay to take a little risk and see what happens. Clearly, we no longer live in such a world (if we ever did). Taking on more risk in financial markets led to the Great Recession. Being blasé about data security has nearly destroyed our democracy. Failure to prepare for a pandemic has nearly brought modern society to its knees.
Over the next decade, the dangers will only increase. We will undergo four major shifts in technology, resources, migration and demographics. To put that in perspective, a similar shift in demography was enough to make the 60s a tumultuous decade. We haven’t seen a confluence of so many disruptive forces since the 1920s and that didn’t end well.
Unfortunately, it’s far too easy to underinvest in mitigating a danger that may never come to fruition. Moving fast and breaking things can seem attractive because the costs are often diffuse. While it has impoverished society as a whole and made us worse off in so many ways, it has also created a small cadre of fabulously wealthy plutocrats.
Yet history is not destiny. We have the power to shape our path by making better choices. We can abandon the cult of disruption and begin to invest in resilience. In fact, we have to. By this point there should be no doubt that the dangers are real. The only question is whether we will act now or simply wait for it to happen and accept the consequences.
It is not often that the leader of a Fortune 500 company uses a letter to shareholders to give you insight into how the company achieves competitive advantage in the marketplace, rather than a page or two of flowery prose written by the Public Relations (PR) team. The former is what Jeff Bezos tends to deliver year after year, and this year’s letter is particularly interesting.
The two key insights in this year’s letter were that:
#1 – Amazon strives to view itself as a startup champion riding to the rescue of customers
#2 – Amazon chooses to be customer-obsessed, not customer-focused or customer-centric, but customer-obsessed
Both of these are crucial to sustaining innovation, and are supported by Jeff’s other main pieces of advice:
– Resisting proxies
– Embracing external trends
– Practicing high velocity decision making
But, I won’t steal Jeff’s thunder. I encourage you to read Jeff’s letter to shareholders in its entirety, check out the bonus video interview at the end, and add comments to share what you find particularly interesting in the letter.
2016 Letter to Amazon Shareholders
April 12, 2017
“Jeff, what does Day 2 look like?”
That’s a question I just got at our most recent all-hands meeting. I’ve been reminding people that it’s Day 1 for a couple of decades. I work in an Amazon building named Day 1, and when I moved buildings, I took the name with me. I spend time thinking about this topic.
“Day 2 is stasis. Followed by irrelevance. Followed by excruciating, painful decline. Followed by death. And that is why it is always Day 1.”
To be sure, this kind of decline would happen in extreme slow motion. An established company might harvest Day 2 for decades, but the final result would still come.
I’m interested in the question, how do you fend off Day 2? What are the techniques and tactics? How do you keep the vitality of Day 1, even inside a large organization?
Such a question can’t have a simple answer. There will be many elements, multiple paths, and many traps. I don’t know the whole answer, but I may know bits of it. Here’s a starter pack of essentials for Day 1 defense: customer obsession, a skeptical view of proxies, the eager adoption of external trends, and high-velocity decision making.
True Customer Obsession
There are many ways to center a business. You can be competitor focused, you can be product focused, you can be technology focused, you can be business model focused, and there are more. But in my view, obsessive customer focus is by far the most protective of Day 1 vitality.
Why? There are many advantages to a customer-centric approach, but here’s the big one: customers are always beautifully, wonderfully dissatisfied, even when they report being happy and business is great. Even when they don’t yet know it, customers want something better, and your desire to delight customers will drive you to invent on their behalf. No customer ever asked Amazon to create the Prime membership program, but it sure turns out they wanted it, and I could give you many such examples.
Staying in Day 1 requires you to experiment patiently, accept failures, plant seeds, protect saplings, and double down when you see customer delight. A customer-obsessed culture best creates the conditions where all of that can happen.
As companies get larger and more complex, there’s a tendency to manage to proxies. This comes in many shapes and sizes, and it’s dangerous, subtle, and very Day 2.
A common example is process as proxy. Good process serves you so you can serve customers. But if you’re not watchful, the process can become the thing. This can happen very easily in large organizations. The process becomes the proxy for the result you want. You stop looking at outcomes and just make sure you’re doing the process right. Gulp. It’s not that rare to hear a junior leader defend a bad outcome with something like, “Well, we followed the process.” A more experienced leader will use it as an opportunity to investigate and improve the process. The process is not the thing. It’s always worth asking, do we own the process or does the process own us? In a Day 2 company, you might find it’s the second.
Another example: market research and customer surveys can become proxies for customers – something that’s especially dangerous when you’re inventing and designing products. “Fifty-five percent of beta testers report being satisfied with this feature. That is up from 47% in the first survey.” That’s hard to interpret and could unintentionally mislead.
Good inventors and designers deeply understand their customer. They spend tremendous energy developing that intuition. They study and understand many anecdotes rather than only the averages you’ll find on surveys. They live with the design.
I’m not against beta testing or surveys. But you, the product or service owner, must understand the customer, have a vision, and love the offering. Then, beta testing and research can help you find your blind spots. A remarkable customer experience starts with heart, intuition, curiosity, play, guts, taste. You won’t find any of it in a survey.
Embrace External Trends
The outside world can push you into Day 2 if you won’t or can’t embrace powerful trends quickly. If you fight them, you’re probably fighting the future. Embrace them and you have a tailwind.
These big trends are not that hard to spot (they get talked and written about a lot), but they can be strangely hard for large organizations to embrace. We’re in the middle of an obvious one right now: machine learning and artificial intelligence.
Over the past decades computers have broadly automated tasks that programmers could describe with clear rules and algorithms. Modern machine learning techniques now allow us to do the same for tasks where describing the precise rules is much harder.
At Amazon, we’ve been engaged in the practical application of machine learning for many years now. Some of this work is highly visible: our autonomous Prime Air delivery drones; the Amazon Go convenience store that uses machine vision to eliminate checkout lines; and Alexa, our cloud-based AI assistant. (We still struggle to keep Echo in stock, despite our best efforts. A high-quality problem, but a problem. We’re working on it.)
But much of what we do with machine learning happens beneath the surface. Machine learning drives our algorithms for demand forecasting, product search ranking, product and deals recommendations, merchandising placements, fraud detection, translations, and much more. Though less visible, much of the impact of machine learning will be of this type – quietly but meaningfully improving core operations.
Inside AWS, we’re excited to lower the costs and barriers to machine learning and AI so organizations of all sizes can take advantage of these advanced techniques.
Using our pre-packaged versions of popular deep learning frameworks running on P2 compute instances (optimized for this workload), customers are already developing powerful systems ranging everywhere from early disease detection to increasing crop yields. And we’ve also made Amazon’s higher level services available in a convenient form. Amazon Lex (what’s inside Alexa), Amazon Polly, and Amazon Rekognition remove the heavy lifting from natural language understanding, speech generation, and image analysis. They can be accessed with simple API calls – no machine learning expertise required. Watch this space. Much more to come.
High-Velocity Decision Making
Day 2 companies make high-quality decisions, but they make high-quality decisions slowly. To keep the energy and dynamism of Day 1, you have to somehow make high-quality, high-velocity decisions. Easy for start-ups and very challenging for large organizations. The senior team at Amazon is determined to keep our decision-making velocity high. Speed matters in business – plus a high-velocity decision making environment is more fun too. We don’t know all the answers, but here are some thoughts.
First, never use a one-size-fits-all decision-making process. Many decisions are reversible, two-way doors. Those decisions can use a light-weight process. For those, so what if you’re wrong? I wrote about this in more detail in last year’s letter.
Second, most decisions should probably be made with somewhere around 70% of the information you wish you had. If you wait for 90%, in most cases, you’re probably being slow. Plus, either way, you need to be good at quickly recognizing and correcting bad decisions. If you’re good at course correcting, being wrong may be less costly than you think, whereas being slow is going to be expensive for sure.
Third, use the phrase “disagree and commit.” This phrase will save a lot of time. If you have conviction on a particular direction even though there’s no consensus, it’s helpful to say, “Look, I know we disagree on this but will you gamble with me on it? Disagree and commit?” By the time you’re at this point, no one can know the answer for sure, and you’ll probably get a quick yes.
This isn’t one way. If you’re the boss, you should do this too. I disagree and commit all the time. We recently greenlit a particular Amazon Studios original. I told the team my view: debatable whether it would be interesting enough, complicated to produce, the business terms aren’t that good, and we have lots of other opportunities. They had a completely different opinion and wanted to go ahead. I wrote back right away with “I disagree and commit and hope it becomes the most watched thing we’ve ever made.” Consider how much slower this decision cycle would have been if the team had actually had to convince me rather than simply get my commitment.
Note what this example is not: it’s not me thinking to myself “well, these guys are wrong and missing the point, but this isn’t worth me chasing.” It’s a genuine disagreement of opinion, a candid expression of my view, a chance for the team to weigh my view, and a quick, sincere commitment to go their way. And given that this team has already brought home 11 Emmys, 6 Golden Globes, and 3 Oscars, I’m just glad they let me in the room at all!
Fourth, recognize true misalignment issues early and escalate them immediately. Sometimes teams have different objectives and fundamentally different views. They are not aligned. No amount of discussion, no number of meetings will resolve that deep misalignment. Without escalation, the default dispute resolution mechanism for this scenario is exhaustion. Whoever has more stamina carries the decision.
I’ve seen many examples of sincere misalignment at Amazon over the years. When we decided to invite third party sellers to compete directly against us on our own product detail pages – that was a big one. Many smart, well-intentioned Amazonians were simply not at all aligned with the direction. The big decision set up hundreds of smaller decisions, many of which needed to be escalated to the senior team.
“You’ve worn me down” is an awful decision-making process. It’s slow and de-energizing. Go for quick escalation instead – it’s better.
So, have you settled only for decision quality, or are you mindful of decision velocity too? Are the world’s trends tailwinds for you? Are you falling prey to proxies, or do they serve you? And most important of all, are you delighting customers? We can have the scope and capabilities of a large company and the spirit and heart of a small one. But we have to choose it.
A huge thank you to each and every customer for allowing us to serve you, to our shareowners for your support, and to Amazonians everywhere for your hard work, your ingenuity, and your passion.
As always, I attach a copy of our original 1997 letter. It remains Day 1.
If you’d like to dive deeper into the mind of Jeff Bezos, then check out this interview with him conducted by Walt Mossberg of The Verge last year at Code Conference 2016:
And here is another fascinating peek inside the mind of Jeff Bezos from 1997: