Silicon Valley’s Innovator’s Dilemma – The Atom, the Bit and the Gene
GUEST POST from Greg Satell
Over the past several decades, innovation has become largely synonymous with digital technology. When the topic of innovation comes up, somebody points to a company like Apple, Google or Meta rather than, say, a car company, a hotel or a restaurant. Management gurus wax poetic about the “Silicon Valley way.”
Of course, that doesn’t mean that other industries haven’t been innovative. In fact, there is no shortage of excellent examples of innovation in cars, hotels, restaurants and many other things. Still, the fact remains that for most of recent memory digital technology has moved further and faster than anything else.
This has been largely due to Moore’s Law, our ability to consistently double the number of transistors we’re able to cram onto a silicon wafer. Now, however, Moore’s Law is ending and we’re entering a new era of innovation. Our future will not be written in ones and zeros, but will be determined by our ability to use information to shape the physical world.
The Atom
The concept of the atom has been around at least since the time of the ancient Greek philosopher Democritus. Yet it didn’t take on any real significance until the early 20th century. In fact, the paper Albert Einstein used for his dissertation helped to establish the existence of atoms through a statistical analysis of Brownian motion.
Yet it was the other papers from Einstein’s miracle year of 1905 that transformed the atom from an abstract concept to a transformative force, maybe even the most transformative force in the 20th century. His theory of mass-energy equivalence would usher in the atomic age, while his work on black-body radiation would give rise to quantum mechanics and ideas so radical that even he would refuse to accept them.
Ironically, despite Einstein’s reluctance, quantum theory would lead to the development of the transistor and the rise of computers. These, in turn, would usher in the digital economy, which provided an alternative to the physical economy of goods and services based on things made from atoms and molecules.
Still, the vast majority of what we buy is made up of what we live in, ride in, eat and wear. In fact, information and communication technologies only make up about 6% of GDP in advanced countries, which is what makes the recent revolution in materials science so exciting. We’re beginning to exponentially improve the efficiency of how we design the materials that make up everything from solar panels to building materials.
The Bit
While the concept of the atom evolved slowly over millennia, the bit is one of the rare instances in which an idea seems to have arisen in the mind of a single person with little or no real precursor. Introduced by Claude Shannon in a paper in 1948—incidentally, the same year the transistor was invented—the bit has shaped how we see and interact with the world ever since.
The basic idea was that information isn’t a function of content, but the absence of ambiguity, which can be broken down to a single unit – a choice between two alternatives. Much like a coin toss, which lacks information while in the air but takes on certainty when it lands, information arises when ambiguity disappears.
He called this unit a “binary digit,” or “bit,” and much like the pound, quart, meter or liter, it has become such a basic unit of measurement that it’s hard to imagine our modern world without it. Shannon’s work would soon combine with Alan Turing’s concept of a universal computer to create the digital computer.
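Shannon made that intuition precise with his entropy formula, which measures information in bits. As a small worked example from standard information theory, a fair coin toss, with two equally likely outcomes, carries exactly one bit:

```latex
% Shannon entropy, measured in bits: H = -\sum_i p_i \log_2 p_i
% For a fair coin, p(\text{heads}) = p(\text{tails}) = 1/2:
H = -\left(\tfrac{1}{2}\log_2\tfrac{1}{2} + \tfrac{1}{2}\log_2\tfrac{1}{2}\right)
  = -\left(-\tfrac{1}{2} - \tfrac{1}{2}\right) = 1\ \text{bit}
```

A loaded coin carries less than one bit, and a coin that always lands heads carries none, which is exactly the sense in which information is the removal of ambiguity.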
Now the digital revolution is ending and we will soon be entering a heterogeneous computing environment that will include things like quantum, neuromorphic and biological computing. Still, Claude Shannon’s simple idea will remain central to how we understand how information interacts with the world it describes.
The Gene
The concept of the gene was first discovered by an obscure Austrian monk named Gregor Mendel, but in one of those strange peculiarities of history, his work went almost totally unnoticed until the turn of the century. Even then, no one really knew what a gene was or how it functioned. The term was, for the most part, just an abstract concept.
That changed abruptly when James Watson and Francis Crick published their 1953 article in the scientific journal Nature. In a single stroke, the pair were able to show that genes were, in fact, made up of a molecule called DNA and that they operated through a surprisingly simple code made up of A, T, C and G.
Things really began to kick into high gear when the Human Genome Project was completed in 2003. Since then the cost to sequence a genome has been falling faster than the rate of Moore’s Law, which has unleashed a flurry of innovation. Jennifer Doudna’s 2012 work showing that CRISPR could be used to edit genes revolutionized the field. More recently, mRNA technology has helped develop COVID-19 vaccines in record time.
Today, we have entered a new era of synthetic biology in which we can manipulate the genetic code of A, T, C and G almost as easily as we can the bits in the machines that Turing imagined all those years ago. Researchers are also exploring how we can use genes to create advanced materials and maybe even better computers.
Innovation Is Combination
The similarity of the atom, the bit and the gene as elemental concepts is hard to miss and they’ve allowed us to understand our universe in a visceral, substantial way. Still, they arose in vastly different domains and have been largely applied to separate and distinct fields. In the future, however, we can expect vastly greater convergence between the three.
We’ve already seen glimpses of this. For example, as a graduate student Charlie Bennett was a teaching assistant for James Watson. Yet in between his sessions instructing undergraduates in Watson’s work on genes, he took an elective course on the theory of computing in which he learned about the work of Shannon and Turing. That led him to go work for IBM and become a pioneer in quantum computing.
In much the same way, scientists are applying powerful computers to develop new materials and design genetic sequences. Some of these new materials will be used to create more powerful computers. In the future, we can expect the concepts of the atom, the bit and the gene to combine and recombine in exciting ways that we can only begin to imagine today.
The truth is that innovation is combination, and it always has been. The past few decades, in which one technology so thoroughly dominated that it was able to function largely in isolation from other fields, were an anomaly. What we are beginning to see now is, in large part, a reversion to the mean, where the most exciting work will be interdisciplinary.
This is Silicon Valley’s innovator’s dilemma. Nerdy young geeks will no longer be able to prosper coding blithely away in blissful isolation. It is no longer sufficient to work in bits alone. Increasingly we need to combine those bits with atoms and genes to create significant value. If you want to get a glimpse of the future, that’s where to look.
In this blog, I return to and expand on a paradox that has concerned me for some time. Are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a double-edged sword. It brings huge benefits, but also commensurate risks.
Ostensibly, change is good. Because of technology, today we mostly live more comfortable lives, and enjoy superior health, greater longevity, and increased leisure and abundance compared to our ancestors.
Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore’s Law, and of course, innovation is much harder to quantify than transistors. But the general trend in innovation and change approximates exponential growth. The human stone-age lasted about 300,000 years before ending in about 3,000 BC with the advent of metalworking. The culture of the Egyptian Pharaohs lasted 30 centuries. It was certainly not without innovations, but by modern standards, things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison to the past. Literally from horse-drawn carts delivering milk when she was a child in poor SE London, to today’s world of self-driving cars and the exploration of our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum experienced.
The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change-averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than new experiences. It simply takes more mental effort to manage new stuff than familiar stuff. As a result we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.
Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently and remotely than our ancestors ever dreamed of. The internet gives us unprecedented access to both information and connectivity, but is also a source of misinformation and manipulation.
The Abundance Dichotomy: Innovation increases abundance, but it’s arguable whether that actually makes us happier. It gives us more, but paradoxically brings greater inequalities in the distribution of the ‘wealth’ it creates. Behavior science has shown us consistently that humans make far more relative than absolute judgments. Being better off than our ancestors actually doesn’t do much for us. Instead we are far more interested in being better off than our peers, neighbors or the people we compare ourselves to on Instagram. And therein lies yet another challenge. Social media means we now compare ourselves to far more people than past generations did, meaning that the standards we judge ourselves against are higher than ever before.
Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag our initial innovations. All too often, we only become aware of them when they have already become a significant problem. Climate change is of course a poster child for this, as a huge unanticipated consequence of the industrial revolution. The same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of ‘digital natives’ are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to an increased rate of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.
The More We Innovate, the Less We Understand: And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex and requires increasing specialization, which means few if any really understand the holistic picture. Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous, experiments in behavioral and social manipulation. But we are doing so with an ever less complete understanding of the direct, let alone unintended, consequences of these complex changes!
The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It’s an odd question for an innovator, but it’s likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and this growing knowledge base inevitably leads to multiple further innovations. The connectivity and information access of the internet alone are driving today’s unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone-age innovation was slow not because our ancestors lacked intelligence. To the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the network to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to ‘standing on the shoulders of giants’, we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. But the trouble with chain reactions is that they can be very hard to control.
Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous innovation. How does human culture exist for 50,000 years or more and then ‘suddenly’ two people, Darwin and Wallace, come up with the theory of evolution independently and simultaneously? The same question applies to calculus (Newton and Leibniz), or to the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction. It’s not coincidence; it simply reflects that once all of the pieces of a puzzle are in place, somebody, and more likely multiple people, will inevitably make the connections and see the next step in the innovation chain.
But as innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. Unfortunately, the associated side effects and unanticipated consequences also build up, and my concern is that they can potentially overwhelm us. This is compounded because often, as in the case of climate change, dealing with side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. As we look forward, just taking AI as an example, we can already anticipate some worrying possibilities. But what about the surprises analogous to climate change that we haven’t even thought of yet? I find it a sobering thought that we are attempting to create consciousness when, despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It’s called the ‘hard problem’ for good reason.
Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police. North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology has the potential to require far less physical and financial infrastructure. Cybercrime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to initially create, opens numerous doors for misuse that require far fewer resources.
The Atomic Weapons Conundrum: The challenge of getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Strassmann and Hahn in the late 1930s made the bomb inevitable. This set the stage for a race between the Allies and Nazi Germany to turn theory into practice. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or choose to legislate against them and, in so doing, cede that terrible power to the Nazis? Not an enviable choice.
Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing the work underground or falling behind and deferring to others. The recent open letter from Meta to the EU chastising it for the potential economic impacts of its AI regulations (https://euneedsai.com/) may have dripped with self-interest, but that didn’t make it wrong. Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations and less conservative countries will still pursue the upside, and risk the downside. The cat is very much out of the bag.
Muddling Through: The good news is that when faced with potentially perilous change in the past, we’ve muddled through. Hopefully we will do so again. We’ve avoided a nuclear holocaust, at least for now. Social media has destabilized our social order, but hasn’t destroyed it, yet. We’ve been through a pandemic, and come out of it, not unscathed, but still functioning. We are making progress in dealing with climate change, and have made enormous strides in managing pollution.
Chain Reactions: But the innovation chain reaction, and the impact of cumulative knowledge, mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will the side effects, nefarious uses, mistakes and unintended consequences that derive from it. Key factors that have helped us in the past are time and resources, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.
What can, or should, we do? I certainly don’t have simple answers. We’re all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations. But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging. No individual or group can realistically either understand or own all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We’ll never anticipate everything, and we’ll still get blindsided. And putting resources against ‘what if’ scenarios is always a hard sell. But maybe we need to go into sales mode.
Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us. AI and quantum computing will give us almost infinite capacity for computation and modeling. Could we collectively assign more of that emerging resource against predicting and managing its own risks?
With many emerging technologies, we are now where we were in the 1900s with climate change. We are implementing massive, unpredictable change, and by definition have no idea what the unanticipated consequences of that will be. I personally think we’ll deal with climate change. It’s difficult to slow a leviathan that’s been building for over a hundred years. But we’ve taken the important first steps in acknowledging the problem, and are beginning to implement corrective action.
But big issues require big solutions. Long-term, I personally believe the most important thing for humanity is to escape the gravity well. Given the scale of our ability to curate global change, interplanetary colonization is not a luxury but an essential. Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be. We will hopefully manage that, and avoid nuclear war or synthetic pandemics, for long enough to achieve it. But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.
Image credits: Microsoft Copilot
When it comes to strategy and tactics, there are a lot of definitions, a lot of disagreement, and a whole lot of confusion. When is it strategy? When is it tactics? Which is more important? How do they inform each other?
Instead of definitions and disagreement, I want to start with agreement. Everyone agrees that both strategy AND tactics are required. If you have one without the other, it’s just not the same. It’s like with shoes and socks: Without shoes, your feet get wet; without socks, you get blisters; and when you have both, things go a lot better. Strategy and tactics work best when they’re done together.
The objective of strategy and tactics is to help everyone take the right action. Done well, everyone from the board room to the trenches knows how to take action. In that way, here are some questions to ask to help decide if your strategy and tactics are actionable.
What will we do? This gets to the heart of it. You’ve got to be able to make a list of things that will get done. Real things. Real actions. Don’t be fooled by babble like “We will provide customer value” and “We will grow the company by X%.” Providing customer value may be a good idea, but it’s not actionable. And growing the company by an arbitrary percentage is aspirational, but not actionable.
Why will we do it? This one helps people know what’s powering the work and helps them judge whether their actions are in line with that forcing function. Here’s a powerful answer: Competitors now have products and services that are better than ours, and we can’t have that. This answer conveys the importance of the work and helps everyone put the right amount of energy into their actions. [Note: this question can be asked before the first one.]
Who will do it? Here’s a rule: if no one is freed up to do the new work, the new work won’t get done. Make a list of the teams that will stop their existing projects before they can take action on the new work. Make a list of the new positions that are in the budget to support the strategy and tactics. Make a list of the new companies you’ll partner with. Make a list of all the incremental funding that has been put in the budget to help all the new people complete all these new actions. If your lists are short or you can’t make any, you don’t have what it takes to get the work done. You don’t have a strategy and you don’t have tactics. You have an unfunded mandate. Run away.
When will it be done? All actions must have completion dates. The dates will be set without consideration of the work content, so they’ll be wrong. Even still, you should have them. And once you have the dates, double all the task durations and push out the dates in your mind. No need to change the schedule now (you can’t change it anyway) because it will get updated when the work doesn’t get done on time. Now, using your lists of incremental headcount and budget, assign the incremental resources to all the actions with completion dates. Look for actions without assigned resources, as those are objective evidence of the unfunded mandate character of your strategy and tactics. And for actions without completion dates, disregard them because they can never be late.
How will we know it’s done? All actions must call out a definition of success (DOS) that defines when the action has been accomplished. Without a measurable DOS, no one is sure when they’re done so they’ll keep working until you stop them. And you don’t want that. You want them to know when they’re done so they can quickly move on to the next action without oversight. If there’s no time to create a DOS, the action isn’t all that important and neither is the completion date.
When the wheels fall off, and they will, how will we update the strategy and tactics? Strategy and tactics are forward-looking and looking forward is rife with uncertainty. You’ll be wrong. What actions will you take to see if everything is going as planned? What actions will you take when progress doesn’t meet the plan? What actions will you take when you learn your tactics aren’t working and your strategy needs a band-aid?
Brian Higgins On Driving Verizon’s Customer Experience Vision
GUEST POST from Shep Hyken
If you have the best product in the world, that’s nice, but it’s not enough. You need a strong customer experience to go with it.
If you have the best service in the world, that’s nice, but it’s not enough. You need a strong product to go with it.
And one other thing. You also need customers! Without them, it doesn’t matter if you have the best product and the best service; you will eventually go out of business.
That’s why I’m excited about this week’s article. I had the opportunity to have an Amazing Business Radio interview with Brian Higgins, the chief customer experience officer at Verizon Consumer. After a career of 20-plus years working for one of the most recognized brands in the world, he has a lot to share about what it takes to get customers to say, “I’ll be back.”
Verizon is one of the most recognizable brands on the planet. A Fortune 50 company, it has more than 100,000 employees, a global presence serving more than 150 countries, more than $130 billion in annual revenue and a market cap of more than $168 billion.
Higgins made it clear that in addition to a premium network and product offerings, there needs to be a focus on customer experience with three primary objectives: addressing pain points, enhancing digital experiences and highlighting signature experiences exclusive to Verizon customers/members. They want to be easy to do business with and to use Customer Experience (CX) to capture market share and retain customers. What follows is a summary of Higgins’ most important points in our interview, followed by my commentary:
Who Reports to Whom?: With Verizon’s emphasis on CX, one of the first questions I asked Higgins was about the company’s structure. Does CX report to marketing? Is CX over sales and marketing? Different companies put an emphasis on marketing, sales or experience. Often, one reports to the other. At Verizon, sales, revenue and experience work together. Higgins says, “We work in partnership with each other. You can’t build an experience if you don’t have the sales, revenue and customer care teams all on board.” The chief sales officer, chief revenue officer and chief experience officer “sit next to each other.”
Membership: In our conversation, Higgins referred to Verizon’s customers as customers, members and subscribers. I asked which he preferred, and he quickly responded, “I would refer to them as members.” The membership is diverse, but the goal is to create a consistent and positive experience regardless of how individuals interact with the company. He sees the relationship with members as a partnership that is an integral part of their lives. Most people check their phone the moment they wake up, throughout the day, and often, it’s one of the last things they check before going to bed. Verizon is a part of its members’ lives, and that’s an opportunity that cannot be mismanaged or abused.
Employees Must Be Happy Too: More companies are recognizing that their CX must also include EX (employee experience). Employees must have the tools they need. This is an emphasis in his organization. Simplifying the employee experience with better tools and policies is the key to elevating the customer’s experience. Higgins shared the perfect description of why employee experience is paramount to the success of a business: “If employees aren’t happy and don’t feel they have the policies and tools they need that are right to engage with customers, you’re not going to get the experience right.”
Focus on Little Pain Points: One of the priorities Higgins focuses on is what he refers to as “small cracks in the experience.” Seventy-five percent of the calls coming in to customer care are for small problems or questions, such as a promo code that didn’t work or an issue with a bill. His team continuously analyzes all customer journeys and works to fix them when needed. This helps to minimize recurring issues, thereby reducing customer support calls and the time employees spend fixing the same issue.
The Digital Experience: Customers are starting to get comfortable with—and sometimes prefer—digital experiences. Making these experiences seamless and user-friendly increases overall customer satisfaction. More and more, they are using digital platforms to help with the “small cracks in the experience.” Employees also get an AI-infused digital experience. Higgins said Verizon uses AI to analyze customer conversations and provide real-time answers and solutions to employees, demonstrating how AI can support both employees and customers.
Amplifying the Power of One Interaction: The final piece of wisdom Higgins shared was about recognizing how important a single interaction can be. Most customers don’t call very often. They may call once every three years, so each interaction needs to be treated like it’s a special moment—a unique opportunity to leave a lasting positive impression, one that leaves no doubt the customer made the right decision to do business with Verizon. Higgins believes in treating the customer like a relative visiting your home for a holiday. He closed by saying, “You’d be amazed how getting that one interaction with a customer right versus anything less than right can have a huge impact on the brand.”
Higgins’ vision for Verizon is not just about maintaining a superior network. It’s about creating an unparalleled customer experience that resonates with every interaction. As Verizon continues integrating advanced AI technologies and streamlining its processes, the focus continues to be on personalizing and enhancing every customer touchpoint, creating an experience that fosters high customer satisfaction and loyalty.
Digital transformation is hardly new. Advances in computing create more powerful infrastructure, which in turn enables more productive operating models, which in turn can enable wholly new business models. From mainframes to minicomputers to PCs to the Internet to the World Wide Web to cloud computing to mobile apps to social media to generative AI, the hits just keep on coming, and every IT organization is asked to both keep the current systems running and enable the enterprise to catch the next wave. And that’s a problem.
The dynamics of productivity involve a yin and yang exchange between systems that improve efficiency and programs that improve effectiveness. Systems, in this model, are intended to maintain state, with as little friction as possible. Programs, in this model, are intended to change state, with maximum impact within minimal time. Each has its own governance model, and the two must not be blended.
It is a rare IT organization that does not know how to maintain its own systems. That’s Job One, and the decision rights belong to the org itself. But many IT organizations lose their way when it comes to programs — specifically, the digital transformation initiatives that are re-engineering business processes across every sector of the global economy. They do not lose their way with respect to the technology of the systems. They are missing the boat on the management of the programs.
Specifically, when the CEO champions the next big thing, and IT gets a big chunk of funding, the IT leader commits to making it all happen. This is a mistake. Digital transformation entails re-engineering one or more operating models. These models are executed by organizations outside of IT. For the transformation to occur, the people in these organizations need to change their behavior, often drastically. IT cannot — indeed, must not — commit to this outcome. Change management is the responsibility of the consuming organization, not the delivery organization. In other words, programs must be pulled. They cannot be pushed. IT in its enthusiasm may believe it can evangelize the new operating model because people will just love it. Let me assure you — they won’t. Everybody endorses change as long as other people have to be the ones to do it. No one likes to move their own cheese.
Given all that, here’s the playbook to follow:
If it is a program, the head of the operating unit that must change its behavior has to sponsor the change and pull the program in. Absent this commitment, the program simply must not be initiated.
To govern the program, the Program Management Office needs a team of four, consisting of the consuming executive, the IT executive, the IT project manager, and the consuming organization’s program manager. The program manager, not the IT manager, is responsible for change management.
The program is defined by a performance contract that uses a current state/future state contrast to establish the criteria for program completion. Until the future state is achieved, the program is not completed.
Once the future state is achieved, then the IT manager is responsible for securing the system that will maintain state going forward.
Delivering programs that do not change state is the biggest source of waste in the Productivity Zone. There is an easy fix for this. Just say No.
That’s what I think. What do you think?
Image Credit: Unsplash
You know that asking questions is essential. After all, when you’re innovating, you’re doing something new, which means you’re learning, and the best way to learn is by asking questions. You also know that asking genuine questions, rather than rhetorical or weaponized ones, is critical to building a culture of curiosity, exploration, and smart risk-taking. But did you know that making a small change to a single question can radically change everything for your innovation strategy, process, and portfolio?
What is your hypothesis?
Before Lean Startup, there was Discovery-Driven Planning. First proposed by Columbia Business School professor Rita McGrath and Wharton School professor Ian MacMillan in their 1995 HBR article, this “planning” approach acknowledges and embraces assumptions (instead of pretending that they’re facts) and relentlessly tests them to uncover new data and to inform and update the plan.
It’s the scientific method applied to business.
How confident are you?
However, not all assumptions or hypotheses are created equal. This was the assertion in the 2010 HBR article “Beating the Odds When You Launch a New Venture.” Using examples from Netflix, Johnson & Johnson, and a host of other large enterprises and scrappy startups, the authors encourage innovators to ask two questions about their assumptions:
How confident am I that this assumption is true?
What is the (negative) impact on the idea if the assumption is false?
By asking these two questions of every assumption, the innovator sorts assumptions into three categories (a rough sorting sketch in code follows this list):
Deal Killers: Assumptions that, if left untested, threaten the idea’s entire existence
Path-dependent risks: Assumptions that impact the strategic underpinnings of the idea and cost significant time and money to resolve
High ROI risks: Assumptions that can be quickly and easily tested but don’t have a significant impact on the idea’s strategy or viability
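As a minimal sketch of that sorting logic, here is one way it might look in Python. The 1-to-5 confidence, impact and cost-to-test scales, the thresholds, and the example assumptions are all illustrative assumptions of mine, not values prescribed by the article:

```python
# Minimal sketch: sort assumptions into the risk categories described above.
# The 1-5 scales, thresholds, and example assumptions are illustrative only.

def categorize(confidence: int, impact: int, cost_to_test: int) -> str:
    """All inputs on a 1 (low) to 5 (high) scale."""
    if confidence <= 2 and impact >= 4:
        return "Deal Killer"           # if left untested, threatens the idea's existence
    if impact >= 3 and cost_to_test >= 3:
        return "Path-dependent risk"   # shapes strategy; costly in time and money to resolve
    if cost_to_test <= 2 and impact <= 2:
        return "High ROI risk"         # quick and easy to test, modest impact
    return "Worth testing when convenient"

assumptions = {
    "Customers will pay a monthly subscription": (2, 5, 2),
    "Suppliers can scale production within a year": (3, 4, 4),
    "The landing page converts at 2%": (3, 2, 1),
}

for name, (conf, imp, cost) in assumptions.items():
    print(f"{name}: {categorize(conf, imp, cost)}")
```

The exact thresholds matter less than forcing every assumption to be scored at all; once the numbers are explicit, the overconfidence problem described next becomes much easier to spot.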
However, human beings have a long and inglorious history of overconfidence. This well-established bias, in which our confidence in our judgments exceeds the objective (data-based) accuracy of those judgments, resulted in disasters like Chernobyl, the sinking of the Titanic, the losses of the Space Shuttles Challenger and Columbia, and the Titan submersible implosion.
Let’s not add your innovation to that list.
How much of your money are you willing to bet?
For years, I’ve worked with executives and their teams to adopt Discovery-Driven Planning and focus their earliest efforts on testing Deal Killer assumptions. I was always struck by how confident everyone was and rather dubious when they reported that they had no Deal Killer assumptions.
So, I changed the question.
Instead of asking how confident they were, I asked how much they would bet. Then I made it personal—high confidence meant you were willing to bet your annual income, medium confidence meant dinner for the team at a Michelin-starred restaurant, and low confidence meant a cup of coffee.
Suddenly, people weren’t quite so confident, and there were A LOT of Deal Killers to test.
Make it Personal
It’s easy to become complacent in companies. You don’t get paid more if you come in under budget, and you don’t get fired if you overspend. Your budget is a rounding error in the context of all the money available to the company. And your signing authority is probably a rounding error on the rounding error that is your budget. So why worry about ten grand here and a hundred grand there?
Because neither you, your team, nor your innovation efforts have the luxury of complacency.
Innovation is always under scrutiny. People expect you to generate results with a fraction of the resources in record time. If you don’t, you, your team, and your budget are the first to be cut.
The business of innovation is personal. Treat it that way.
How much of your time, money, and reputation are you willing to risk? What do you need your team to risk in terms of their time, money, and professional aspirations? How much time, money, and reputation are your stakeholders willing to risk?
The answers change everything.
Image credit: Pixabay
There’s an old adage that says we should never let a crisis go to waste. The point is that during a crisis there is a visceral sense of urgency and resistance often falls by the wayside. We certainly saw that during the COVID-19 pandemic. Digital technologies such as video conferencing, online grocery and tele-health have gone from fringe to mainstream in record time.
Seasoned leaders learn how to make good use of a crisis. Consider Bill Gates and his ‘Internet Tidal Wave’ memo, which leveraged what could have been a mortal threat to Microsoft into a springboard to even greater dominance. Or how Steve Jobs used Apple’s near-death experience to reshape the ailing company into a powerhouse.
But what if we could prepare for a trigger before it happens? The truth is that indications of trouble are often clear long before the crisis arrives. Clearly, there were a number of warning signs that a pandemic was possible, if not likely. As every good leader knows, there’s never a shortage of looming threats. If we learn to plan ahead, we can make a crisis work for us.
The Plan Hatched in a Belgrade Cafe
In the fall of 1998, five young activists met in a coffee shop in Belgrade, Serbia. Although still in their twenties, they were already grizzled veterans. In 1992, they took part in student protests against the war in Bosnia. In 1996, they helped organize a series of rallies in response to Slobodan Milošević’s attempt to steal local elections.
Up to that point, their results had been decidedly mixed. The student protests were fun, but when the semester ended, everyone went home for the summer and that was the end of that. The 1996 protests were more successful, overturning the fraudulent results, but the opposition coalition, called “Zajedno,” soon devolved into infighting.
So they met in the coffee shop to discuss their options for the upcoming presidential election to be held in 2000. They knew from experience that they could organize rallies effectively and get people to the polls. They also knew that when they got people to the polls and won, Milošević would use his power and position to steal the election.
That would be their trigger.
The next day, six friends joined them and they called their new organization Otpor. Things began slowly, with mostly street theatre and pranks, but within 2 years their ranks had swelled to more than 70,000. When Milošević tried to steal the election they were ready and what is now known as the Bulldozer Revolution erupted.
The Serbian strongman was forced to concede. The next year, Milošević would be arrested and sent to The Hague for his crimes against humanity. He would die in his prison cell in 2006, awaiting trial.
Opportunity From the Ashes
In 2014, in the wake of the Euromaidan protests that swept the thoroughly corrupt autocrat Viktor Yanukovych from power, Ukraine was in shambles. The country had been looted of roughly $100 billion (roughly the amount of its entire GDP) and invaded by Russia, and things looked bleak. Without western aid, the proud nation’s very survival was in doubt.
Yet for Vitaliy Shabunin and the Anti-Corruption Action Center, it was a moment he had been waiting for. He established the organization with his friend Dasha Kaleniuk a few years earlier. Since then they, along with a small staff, had been working with international NGOs to document corruption and develop effective legislation to fight it.
With Ukraine’s history of endemic graft, which had greatly worsened under Yanukovych, progress had been negligible. Yet now, with the IMF and other international institutions demanding reform, Shabunin and Kaleniuk were instantly in demand to advise the government on instituting a comprehensive anti-corruption program, which passed in record time.
Yet they didn’t stop there either. “Our long-term strategy is to create a situation in which it will be impossible not to do anti-corruption reforms,” Shabunin would later tell me. “We are working to ensure that these reforms will be done, either by these politicians or by another, because they will lose their office if they don’t do these reforms.”
Vitaliy, Dasha and the Anti-Corruption Action Center continue to prepare for future triggers.
The Genius of Xerox PARC
One story that Silicon Valley folks love to tell involves Steve Jobs and Xerox. After the copier giant made an investment in Apple, which was then a fledgling company, it gave Jobs access to its Palo Alto Research Center (PARC). He then used the technology he saw there to create the Macintosh. Jobs built an empire based on Xerox’s oversight.
Yet the story misses the point. By the late 1960s, Xerox CEO Peter McColough knew that the copier business, while still incredibly profitable, was bound to be disrupted eventually. At the same time it was becoming clear that computer technology was advancing quickly and, someday, would revolutionize how we worked. PARC was created to prepare for that trigger.
The number of groundbreaking technologies created at PARC is astounding. The graphical user interface, networked computing, object-oriented programming, the list goes on. Virtually everything that we came to know as “personal computing” had its roots in the work done at PARC in the 1970s.
Most of all, PARC saved Xerox. The laser printer invented there would bring in billions and, eventually, largely replace the copier business. Some technologies were spun off into new companies, such as Adobe and 3Com, with an equity stake going to Xerox. And, of course, the company even made a tidy profit off the Macintosh, because of the equity stake that gave Jobs access to the technology in the first place.
Transforming an Obstacle Into a Design Constraint
The hardest thing about change is that, typically, most people don’t want it. If they did, it would already have been accepted as the normal state of affairs. That can make transformation a lonely business. The status quo has inertia on its side and never yields its power gracefully. The path for an aspiring changemaker can be heartbreaking and soul crushing.
Many would see the near-certainty that Milošević would try to steal the election as an excuse to do nothing. Most people would look at the almost impossibly corrupt Yanukovych regime and see the idea of devoting your life to anti-corruption reforms as quixotic folly. It is extremely rare for a CEO whose firm dominates an industry to ask, “What comes after?”
Yet anything can happen and often does. Circumstances conspire. Events converge. Round-hole businesses meet their square-peg world. We can’t predict exactly when or where or how or what will happen, but we know that everybody and everything gets disrupted eventually. It’s all just a matter of time.
When that happens resistance to change temporarily abates. So there’s lots to do and no time to waste. We need to empower our allies, as well as listen to our adversaries. We need to build out a network to connect to others who are sympathetic to our cause. Transformational change is always driven by small groups, loosely connected, but united by a common purpose.
Most of all, we need to prepare. A trigger always comes and, when it does, it brings great opportunity with it.
In recent years, the concept of microdosing has moved from the fringes of alternative therapy into the mainstream as a potential tool for enhancing mental performance and wellness. But is microdosing truly an innovation, or is it a passing trend destined for the annals of speculative practices? Through examining its revolutionary potential and analyzing its impact in real-world scenarios, we can better understand the role microdosing plays in our continuous pursuit of human-centered innovation.
Understanding Microdosing
Microdosing typically involves taking sub-perceptual doses of psychedelics, like LSD or psilocybin, approximately one-tenth of a recreational dose, to experience the potential therapeutic benefits without hallucinogenic effects. Advocates claim it can boost creativity, alleviate anxiety, and improve focus, leading to its rising popularity among entrepreneurs, artists, and the tech-savvy.
Case Study 1: Microdosing in Silicon Valley
In the competitive landscape of Silicon Valley, professionals are constantly seeking a competitive edge to enhance productivity and creativity. The tech hub has notably become a breeding ground for experimentation with microdosing. Tech workers claim the practice helps them to sustain high levels of innovation and problem-solving abilities in an environment where mental agility is highly prized.
For instance, a significant number of software developers and startup founders have reported that microdosing has supported cognitive function and stress reduction, leading to improved workplace performance and job satisfaction. Companies have begun embracing wellness practices, subtly endorsing microdosing as part of a broader strategy to cultivate employee well-being and foster an innovative work culture.
Case Study 2: Microdosing in Mental Health Treatment
Beyond corporate environments, microdosing has gained attention as a potential revolutionary approach in mental health treatment. Psychedelics-assisted therapy research has opened up dialogues about microdosing’s efficacy as a treatment for mood disorders and PTSD. Leading institutions are exploring the controlled use of microdoses as an adjunct to traditional therapies.
A pilot study conducted at a renowned university evaluated the impact of psilocybin microdosing on patients with treatment-resistant depression. Preliminary findings suggest a marked improvement in mood stabilization and cognitive flexibility among participants, renewing hope for alternative approaches in mental health treatment. This study has prompted further research and dialogue within the medical community, transforming discussions around treatment paradigms.
Case Study 3: Brez Beverages – Microdosing in the Consumer Market
Brez Beverages, a pioneering player in the beverage industry, has embraced the microdosing revolution by developing a line of drinks infused with adaptogenic and nootropic compounds. Their products aim to provide consumers with the benefits of microdosing in a more accessible and socially acceptable format.
The innovative approach of Brez Beverages lies in their ability to tap into the growing desire for wellness-centric consumer products. By integrating microdosed elements into beverages, they offer a unique alternative for individuals seeking mental clarity and stress reduction without committing to psychedelic substances. Brez Beverages represents a shift in how microdosing concepts can be commercialized and introduced to mainstream consumers.
Market feedback indicates a burgeoning interest among health-conscious customers who are drawn to the idea of enhancing their daily lives with subtle botanical blends, thus carving a new niche in the health and wellness sector. Brez continues to capitalize on the demand for unconventional health solutions, reflecting both the challenge and potential of integrating microdosing into consumer products.
The Verdict: Innovation or Not?
Whether microdosing is labeled as an innovation largely depends on one’s perspective. On one hand, it presents a novel application of existing compounds, showcasing unconventional problem-solving in enhancing human potential—an experimental departure from typical wellness and therapeutic practices. On the other hand, its lack of universal acceptance and scientific consensus makes it a contentious archetype of modern self-experimentation rather than unmistakable innovation.
In conclusion, microdosing embodies the dynamic nature of innovation—provocative yet promising. As we push the boundaries of what’s possible in the human experience, microdosing remains an emblem of the desire to enhance and evolve our capabilities. Whether it stands the test of time will depend on ongoing research, legal structures, and societal acceptance, but it undoubtedly shapes the current discourse on potential pathways for human-centered transformation.
Image credit: DrinkBrez.com, Pexels
How one man’s innovation provided the missing link for a 20th century agricultural revolution
GUEST POST from John Bessant
There’s a lot of good stuff which comes out of Ireland. Leaving aside the wonderful music, the amazing countryside (complete with its ‘soft’ rain) and some excellent food and drink (including a drop or two of the black stuff to which I am occasionally partial). But it’s also a country which punches well above its weight in terms of ideas — it’s got a reputation for being a smart economy basing its progress on putting knowledge to work. Creating value from those ideas — innovation.
That’s something which you’ll find not only in the universities and hi-tech companies dotted across the landscape but also down on the farm. Farming’s a tough business — anyone who watches the series ‘Clarkson’s Farm’ will recognize the multiple challenges farmers face, battling all that Nature can throw at them when she’s in a bad mood plus rising costs, increasing regulation and volatile markets. It’s a field (ouch) where innovation is not just a nice-to-have, it’s essential.
And in Dromara, County Down there’s a statue erected to honor a man to whom many farmers, not just in Ireland but around the world, have cause to be grateful. Harry Ferguson.
Of course farming innovation isn’t new; it’s been at the heart of our progress towards being able to feed ourselves and so move beyond subsistence to doing something constructive with our newly-found spare time. Like building cities and societies. Think back to your school days and you’ll recognize many of the key innovations which enabled the ‘agricultural revolution’, increasing productivity to help feed a growing population. The early days were all about ingenious implements — Jethro Tull’s seed drill (1701), Cyrus McCormick’s reaper (1831), John Deere’s steel plow (1837) — all these and hundreds of other innovations helped move the needle on farming practices.
But better implements still faced the limitations of power — and that aspect of innovation remained unchanged for centuries. We’d moved on from back-breaking manual labor, but for centuries we relied on animals, primarily horses, to pull or occasionally push our implements. Power was the agricultural equivalent of the ‘philosopher’s stone’ for alchemists, the secret which would turn base metals into gold (or farms into more productive units). So with the advent of steam power in the early 1800s it looked like it had been discovered; as factories, mines and even early railways were showing, a steam engine could harness the power of many horses.
But (in an early example of the hype cycle) the promise of steam power failed to deliver — largely for technical reasons. Steam engines were big and heavy, which meant they had to stay in one place with their power distributed to where it was needed by elaborate systems of pulleys, belts and wheels. They were unreliable and dangerous, with an unpleasant tendency to explode unpredictably. For certain tasks they held out promise — they could plow a simple flat field ten times as fast as a team of horses — but their inflexibility limited their application.
Traction engines provided a partial solution since these machines could carry out basic tasks like drilling and plowing. Though they were often too heavy to work directly on muddy fields, they had the advantage of power which could quickly be moved to where it was needed. Set them up on the side of a field, hook them up to relevant implements like plows and put them to work. When the job was finished, uncouple everything and move on to the next field (as long as it was fairly flat and big).
(Interestingly it was the traction engine which inspired Henry Ford to work on transportation. Reflecting on his first encounter with a traction engine on the family farm he said ‘I remember that engine, as though I had seen it only yesterday, for it was the only vehicle other than horse-drawn I had ever seen….it was that engine that took me into automotive transportation’).
So steam power wasn’t really going to change the farming world. But another innovation was — the internal combustion engine. Engineers around the world had seized on the possibilities of this technology and were working to try and come up with a ‘horseless carriage’, something which Karl Benz managed to do with his Motorwagen in 1885 in Germany. It didn’t take a big leap of imagination to see another location where replacing horses could have an advantage — and John Froelich, an engineer from Iowa duly developed the first gasoline-powered tractor, mounting an engine on a traction engine chassis in 1892.
Unfortunately he wasn’t able to make the machine in volume, producing only four tractors before closing down the business. But others were more successful; for example in 1905 the International Harvester company produced its first tractor, and in 1906 Henry Ford invested over $600,000 in research for tractors, building on his growing experience with cars. An early outcome was the ‘Automobile plow’, a cross-over concept using the Model T as the base.
Pretty soon, just as in the personal transportation marketplace, hundreds of entrepreneurs began working on tractor innovation; a classic example of what Joseph Schumpeter (the godfather of innovation economics) would call ‘swarming’ behavior. By 1910 there were over a thousand tractor designs on offer from 150 different companies.
A key part of Schumpeter’s theory of how innovation works is that many of the early entrepreneurs active in a new field will fail, whether for technical or business reasons, and there will be convergence along key dimensions — setting up a technological trajectory along which future developments will tend to run.
That was certainly the case with tractors; key pieces of the puzzle were coming into place like an ability to deal with difficult terrain by using all-wheel drive (offered by John Deere in 1914) and the trend towards smaller (and more affordable) machines, pioneered by the Bull Company. Agricultural shows began to feature tractor demonstrations which allowed farmers to see first-hand the relative benefits of different machines and an early front runner in the move towards widespread market acceptance was International Harvester with their light and affordable Titan 10/20 model.
This was a growing market; by 1916 over 20,000 tractors had been sold in the USA. As with many innovations, once the ‘dominant design’ for the basic product configuration has emerged, emphasis shifts to the ways in which products can be made — process innovation. Those players — like Henry Ford — with experience in mass production had a significant potential advantage. His Fordson brand became the benchmark in terms of pricing, and other manufacturers often struggled to compete unless they were large, like the John Deere company, which offered its Model D in 1923 for around $1,000. Ford had priced aggressively to try and capture the market, originally offering the Fordson for $200 in pre-sales advertising but eventually selling the tractor in 1917 for $750 (a price at which he was actually making a loss).
Ford understood the principle; he’d used it to open up the automobile market by offering ‘…to build a car for the great multitude’ at a price that multitude could afford. But things were a little more complex down on the farm. At first sight tractors seemed a great idea, not least because of their running cost advantages. Animals, while a flexible source of power, were also a big cost since they needed food, shelter and veterinary services, plus there was an opportunity cost in terms of the land needed to grow their feed, which could otherwise be used for more profitable crops. It took around 6 acres per horse over the farming year. Tractors ran on kerosene, which was becoming widely available at low cost, and they only burned fuel when they were working.
Ford’s strategy appeared to pay off; by 1923 he had over 75% of the US market. Yet only five years later things had deteriorated so much that the company exited the business. What led to this dramatic shift was a series of challenges to which cost advantages based on process innovation weren’t the answer. Product innovation once again became a key differentiator. This time the issue wasn’t simply about replacing the animal power unit with a mechanical one; it had everything to do with what you connected that power up to.
Early tractors solved the connection problem with a simple drawbar, essentially a metal bar to which you could attach different implements. Which worked fine when the going was flat, the surface dry, the field large and simple. Unfortunately most farming also involves uneven ground, plenty of mud and rain-filled potholes, trees and other obstacles, and small fields with uneven boundaries. To cope with all of that you need a utility tractor — not for nothing was the IH Farmall a runaway success in the 1920s; the name says it all. Having spent a significant amount (for a small farmer) on buying your lightweight utility tractor, you want it to carry out much more than just row crop duties — helping out with a wide range of construction and maintenance operations down on the farm.
In particular, one innovation which helped endear International Harvester to many a farmer’s heart was the ‘power take-off’ (PTO) device — essentially making engine power available to be hooked up to a variety of different implements. Introduced in 1922, this opened up the market by massively increasing the versatility of the tractor. All manner of attachments — seed drills, rotary cutters, posthole diggers, snow throwers — could be run off the core PTO. We could draw an analogy to today’s IT world; buying a tractor without the ability to attach tools to it would be like buying a computer without software.
Which brings us back to Harry Ferguson (in case you thought we’d lost the Irish connection). Because connecting farm implements to tractors became his passion — and the basis for a highly successful business. In doing so he provided the platform on which so much could happen, much as Steve Jobs with the smartphone enabled users to find and deploy the apps they wanted. And along the way he was able to help Henry Ford re-enter and revive his tractor business.
Ferguson was born in 1884 in County Down, Ulster and grew up in a farming family — though he wasn’t particularly taken with the life. Nor was he that keen on school, dropping out at the age of 14. What saved him was a love of reading and a fascination with all things mechanical — which in the early 20th century was a good interest to have. His brother helpfully opened a repair shop to cater to the emerging motor trade and Harry joined him, kindling enough focused motivation to study at Belfast Technical College. Arguably, though, his skill set was less around the mechanical detail than in the front office — sales and PR. He persuaded his brother to sponsor him in motor car and cycle racing, at which he proved adept, and even persuaded him to fund the development of Ireland’s first airplane, which Harry then learned to fly!
Eventually he set up his own automobile business, May Street Motors, in Belfast in 1911, and one of his first appointments (a 21-year-old mechanic, Willy Sands) proved to be crucial to his subsequent success. Sands was a gifted engineer; he remained with Ferguson for nearly fifty years, working in the backroom and helping develop the technologies which built the business’s success.
Ferguson was quick to spot an opportunity in the emerging tractor market and managed to obtain a franchise for sales and service of the John Deere Overtime tractor, which was being built in the UK. That gave Ferguson and Sands extensive experience in the way the tractor was put together, the repairs it needed and the context in which it was being applied.
The miseries of the Great War on the home front included food shortages and problems with imports, so the British government were urgently seeking anything which could help with farm productivity — including subsidizing investment in tractors. Harry played a part in this when, in 1917, he was given a contract by the Irish Board of Agriculture to oversee government-owned tractor maintenance and production records. The duo traveled the country to advise farmers, help set up equipment like plows and understand the problems farmers faced in deploying the tractor. For example, soil compaction, caused by the heavy weight of the tractors and plows of the time, was a common complaint.
All of this honed their skills at repairs and improvements to the current stock of tractors in Ireland; their next break came when conversion kits began to appear which turned the Model T car into a car/tractor. Ferguson took a franchise for the Eros, a kit which involved fitting larger rear wheels to the car, driven by a chain transmission, and installing a bigger radiator to cope with the engine load. His experience with farmers paid off; he realized that this lighter-weight car/tractor could solve the soil compaction problem and so got Sands to design a lightweight plow for the Eros.
This — the ‘Belfast plow’ — was launched in 1917 and was the first farm implement bearing Ferguson’s name; it was half the weight of a standard plow and crucially used a clever idea for the hitch connecting the tractor to the plow. The load from pulling the plow was shared equally by all four wheels instead of just the rear ones, which made the combination easier to steer and drive.
But Henry Ford was not about to let the tractor market fall into the hands of conversion kits for Model Ts; instead he commissioned the design and manufacture of his own tractor with a large, slow-turning engine. He persuaded the British Ministry of Munitions to purchase 6,000 units in return for his setting up a factory in Ireland. The Fordson tractor (as it was called) arrived in 1917 but quickly ran into problems as farmers began to use it. In particular, it had a worrying tendency to flip over on its back; its powerful engine and the relative lack of weight on the front end meant it could be pulled over by an obstacle or an unexpected drag while plowing. Nonetheless its arrival spelt the end of conversion kits — and dealt a blow to Harry Ferguson’s dream.
He was nothing if not resilient; in true entrepreneurial style he turned the arrival in force of Fordsons into an opportunity, adapting his lightweight plow for use with the tractor. In particular, he and Sands worked on their hitch system so that it helped overcome the tendency for the front wheels to rear up; their design included a clever depth control device — a floating skid — which stopped the tractor being pulled over when the plow dug too deep.
This worked well with the plow, but for other implements they realized depth control could be provided by a hydraulic mechanism which adapted to the terrain. Putting all of this together led them to a system which worked with a variety of implements, including disc harrows and cultivators. In 1925 Ferguson was granted a patent for this three-point hitch — and it became the basis on which he built his future success. It was the key to unlocking the puzzle of how to connect power to implements and became the dominant design, one which is still widely used today.
The significance of this design should not be underestimated, and it’s something explored in depth in an excellent review by Scott Marshaus at the University of Wisconsin. Even though other factors (like fertilizers, better seed strains and environmental management of pests) helped contribute to the major increase in agricultural productivity, the importance of completing the mechanization cycle is central. Yes, you can replace horses and mules with machine power, but you can’t plant the seeds or distribute the chemicals unless you have the means to connect power with application. Which was the problem that Ferguson did so much to solve.
Just when all looked promising, the market weather changed once again, another shift triggered by the business strategy of Ford. After years of making a loss, the company decided to exit the tractor market in 1927, choosing instead to concentrate resources on its new Model A automobile. Which left Ferguson with no market for his Fordson-fitting plow.
So he (and Sands, as ever working away diligently in the backroom) developed their own lightweight tractor based on the Fordson design. They included their three-point hitch, and the prototype ‘Black Tractor’ appeared in 1933. Ferguson then went into partnership with the David Brown company to manufacture what became known as the Ferguson-Brown Model A; production started in 1936. Disagreements quickly followed, with Brown wanting to make a bigger tractor, so Ferguson pulled out of the venture.
Instead he took one of the production Model A tractors into Henry Ford’s back garden — literally. In 1938 he showed it off and tested it against the Fordson and another tractor from Allis-Chalmers at Ford’s Fair Lane country estate. It performed so well that Ford wanted to make a deal on the spot, and after brief discussion the two men shook hands. This handshake deal put a version of the tractor, the Ford-Ferguson Model 9N, into production in 1939, and it sold over 10,000 units in its first year. By 1940 the factory was churning out 150 per day.
All should have been plain sailing, but Ferguson’s prickly nature posed problems. He was, in many ways, a classic example of an entrepreneur, seeking opportunity wherever he could find it and adapting setbacks to become new directions for development. However he was also, according to his biographer Colin Fraser, ‘someone who combined the extremes of subtlety, naiveté, charm, rudeness, brashness, modesty, largesse and pettiness; and the switch from any one to another could be abrupt and unpredictable. And he had a penchant for confrontation.’
He had hoped that Ford in the UK would start production after the end of WW2, and he wanted a seat on the board; when this was rejected he threatened to walk away and start production on his own. But his position was weak; what he didn’t know was that Henry Ford II, who took over in 1945, had discovered that the tractor business was still losing money at a desperate rate. He also discovered that the Ford-Ferguson 2N was being sold at a loss to Ferguson for resale to his dealers, an arrangement that cost Ford $25 million. Not surprisingly, Ford wanted to stop, and Ferguson was advised that 1947 would be the last year of the handshake agreement.
Ferguson fought back, putting his own version of the Ferguson/Ford tractor into production in 1946 in a war-surplus British factory. But competing with Ford was always going to be difficult; in response Ford introduced a new version, the Ford Model 8N, in 1947, conspicuously missing the ‘Ferguson’ name from the badge. Ford’s engineers had tried to improve on and sidestep Ferguson’s patented ideas, but the core three-point hitch and hydraulic system were retained. Although Ford’s marketing and distribution muscle backed him into a corner, Ferguson responded by suing Ford in 1948 for $251 million for infringement of these patents.
Ferguson eventually won the bitter dispute and used some of the $9.25 million in agreed damages to continue making tractors in the UK. But his attempts at working independently in the USA failed, and eventually he merged his business with the Massey-Harris company in 1953. He retired from the tractor business but continued to develop ideas for the world of motor sport, including creating the first four-wheel-drive system for use on Formula One racing cars.
He died in 1960 as a result of a barbiturate overdose; the inquest was unable to conclude whether this had been accidental or not. A sad end for someone whose passion and drive had helped enable the later stages of the agricultural revolution. But he left a powerful innovation footprint in farming soil all around the world, remembered in the tractor brand which bears his name and in the three-point hitch design which is still in widespread use.
1. Do it for them, and show them how.

When the work is new for them, they don’t know how to do it. You’ve got to show them how to do it and explain everything. Tell them about your top-level approach; tell them why you focus on the new elements; show them how to make the chart that demonstrates the new one is better than the old one. Let them ask questions at every step. And tell them their questions are good ones. Praise them for their curiosity. And tell them the answers to the questions they should have asked you. And tell them they’re ready for the next level.
2. Do it with them, and let them hose it up.
Let them do the work they know how to do, do all the new work yourself except for one new element, and let them do that one bit of new work. They won’t know how to do it, and they’ll get it wrong. And you’ve got to let them. Pretend you’re not paying attention so they think they’re doing it on their own, but pay deep attention. Know what they’re going to do before they do it, and protect them from catastrophic failure. Let them fail safely. And when they hose it up, explain how you’d do it differently and why you’d do it that way. Then, let them do it with your help. Praise them for taking on the new work. Praise them for trying. And tell them they’re ready for the next level.
3. Let them do it, and help them when they need it.
Let them lead the project, but stay close to the work. Pretend to be busy doing another project, but stay one step ahead of them. Know what they plan to do before they do it. If they’re on the right track, leave them alone. If they’re going to make a small mistake, let them. And be there to pick up the pieces. If they’re going to make a big mistake, casually check in with them and ask about the project. And, with a light touch, explain why this situation is different than it seems. Help them take a different approach and avoid the big mistake. Praise them for their good work. Praise them for their professionalism. And tell them they’re ready for the next level.
4. Let them do it, and help only when they ask.
Take off the training wheels and let them run the project on their own. Work on something else, and don’t keep track of their work. And when they ask for help, drop what you are doing and run to help them. Don’t walk. Run. Help them like they’re your family. Praise them for doing the work on their own. Praise them for asking for help. And tell them they’re ready for the next level.
5. Do the new work for them, then repeat.
Repeat the whole recipe for the next level of new work you’ll help them master.
Image credit: misterinnovation.com