Bringing Yin and Yang to the Productivity Zone

GUEST POST from Geoffrey A. Moore

Digital transformation is hardly new. Advances in computing create more powerful infrastructure, which in turn enables more productive operating models, which in turn can enable wholly new business models. From mainframes to minicomputers to PCs to the Internet to the World Wide Web to cloud computing to mobile apps to social media to generative AI, the hits just keep on coming, and every IT organization is asked both to keep the current systems running and to enable the enterprise to catch the next wave. And that’s a problem.

The dynamics of productivity involve a yin and yang exchange between systems that improve efficiency and programs that improve effectiveness. Systems, in this model, are intended to maintain state, with as little friction as possible. Programs, in this model, are intended to change state, with maximum impact within minimal time. Each has its own governance model, and the two must not be blended.

It is a rare IT organization that does not know how to maintain its own systems. That’s Job 1, and the decision rights belong to the org itself. But many IT organizations lose their way when it comes to programs—specifically, the digital transformation initiatives that are re-engineering business processes across every sector of the global economy. They do not lose their way with respect to the technology of the systems. They are missing the boat on the management of the programs.

Specifically, when the CEO champions the next big thing, and IT gets a big chunk of funding, the IT leader commits to making it all happen. This is a mistake. Digital transformation entails re-engineering one or more operating models. These models are executed by organizations outside of IT. For the transformation to occur, the people in these organizations need to change their behavior, often drastically. IT cannot—indeed, must not—commit to this outcome. Change management is the responsibility of the consuming organization, not the delivery organization. In other words, programs must be pulled. They cannot be pushed. IT in its enthusiasm may believe it can evangelize the new operating model because people will just love it. Let me assure you—they won’t. Everybody endorses change as long as other people have to be the ones to do it. No one likes to move their own cheese.

Given all that, here’s the playbook to follow:

  1. If it is a program, the head of the operating unit that must change its behavior has to sponsor the change and pull the program in. Absent this commitment, the program simply must not be initiated.
  2. To govern the program, the Program Management Office needs a team of four, consisting of the consuming executive, the IT executive, the IT project manager, and the consuming organization’s program manager. The program manager, not the IT manager, is responsible for change management.
  3. The program is defined by a performance contract that uses a current state/future state contrast to establish the criteria for program completion. Until the future state is achieved, the program is not completed.
  4. Once the future state is achieved, then the IT manager is responsible for securing the system that will maintain state going forward.

Delivering programs that do not change state is the biggest source of waste in the Productivity Zone. There is an easy fix for this. Just say No.

That’s what I think. What do you think?

Image Credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

A Triumph of Artificial Intelligence Rhetoric

Understanding ChatGPT

GUEST POST from Geoffrey A. Moore

I recently finished reading Stephen Wolfram’s very approachable introduction to ChatGPT, What is ChatGPT Doing . . . And Why Does It Work?, and I encourage you to do the same. It has sparked a number of thoughts that I want to share in this post.

First, if I have understood Wolfram correctly, what ChatGPT does can be summarized as follows:

  1. Ingest an enormous corpus of text from every available digitized source.
  2. While so doing, assign to each unique word a unique identifier, a number that will serve as a token to represent that word.
  3. Within the confines of each text, record the location of every token relative to every other token.
  4. Using just these two elements—token and location—determine for every word in the entire corpus the probability of it being adjacent to, or in the vicinity of, every other word.
  5. Feed these probabilities into a neural network to cluster words and build a map of relationships.
  6. Leveraging this map, given any string of words as a prompt, use the neural network to predict the next word (just like AutoCorrect).
  7. Based on feedback from so doing, adjust the internal parameters of the neural network to improve its performance.
  8. As performance improves, extend the reach of prediction from the next word to the next phrase, then to the next clause, the next sentence, the next paragraph, and so on, improving performance at each stage by using feedback to further adjust its internal parameters.
  9. Based on all of the above, generate text responses to user questions and prompts that reviewers agree are appropriate and useful.
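The core of steps 2 through 6 can be made concrete with a toy sketch. The snippet below is a drastically simplified stand-in, a bigram frequency table rather than the transformer neural network ChatGPT actually uses, meant only to illustrate the idea of tokenizing text, recording which words follow which, and predicting the most probable next word.

```python
from collections import Counter, defaultdict

def train_bigram_model(corpus):
    """Record, for every word, how often each other word follows it
    (a crude version of steps 2-4: tokens plus relative location)."""
    counts = defaultdict(Counter)
    tokens = corpus.lower().split()
    for current, nxt in zip(tokens, tokens[1:]):
        counts[current][nxt] += 1
    return counts

def predict_next(model, word):
    """Step 6 in miniature: emit the most probable next word."""
    if word not in model:
        return None
    return model[word].most_common(1)[0][0]

corpus = "the cat sat on the mat the cat ran to the cat"
model = train_bigram_model(corpus)
print(predict_next(model, "the"))  # "cat" follows "the" most often here
```

A real large language model replaces the frequency table with a neural network holding billions of tuned parameters, but the essential point survives the simplification: the prediction is driven entirely by statistical form, not by any grasp of meaning.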

OK, I concede this is a radical oversimplification, but for the purposes of this post, I do not think I am misrepresenting what is going on, specifically when it comes to making what I think is the most important point to register when it comes to understanding ChatGPT. That point is a simple one. ChatGPT has no idea what it is talking about.

Indeed, ChatGPT has no ideas of any kind—no knowledge or expertise—because it has no semantic information. It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content. By processing the entirety of its corpus, it can generate the most probable sequence of words that correlates with the input prompt it has been fed. Additionally, it can modify that sequence based on subsequent interactions with an end user. As human beings participating in that interaction, we process these interactions as a natural language conversation with an intelligent agent, but that is not what is happening at all. ChatGPT is using our prompts to initiate a mathematical exercise using tokens and locations as its sole variables.

OK, so what? I mean, if it works, isn’t that all that matters? Not really. Here are some key concerns.

First, and most importantly, ChatGPT cannot be expected to be self-governing when it comes to content. It has no knowledge of content. So, whatever guardrails one has in mind would have to be put in place either before the data gets into ChatGPT or afterward to intercept its answers prior to passing them along to users. The latter approach, however, would defeat the whole purpose of using it in the first place by undermining one of ChatGPT’s most attractive attributes—namely, its extraordinary scalability. So, if guardrails are required, they need to be put in place at the input end of the funnel, not the output end. That is, by restricting the datasets to trustworthy sources, one can ensure that the output will be trustworthy, or at least not malicious. Fortunately, this is a practical solution for a reasonably large set of use cases. To be fair, reducing the size of the input dataset diminishes the number of examples ChatGPT can draw upon, so its output is likely to be a little less polished from a rhetorical point of view. Still, for many use cases, this is a small price to pay.
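As a sketch of what an input-side guardrail might look like in practice, the snippet below filters a corpus down to vetted sources before any text reaches the model. The source names and record layout are hypothetical, chosen purely for illustration; a real pipeline would govern its allowlist as managed metadata rather than a hard-coded set.

```python
# Hypothetical allowlist of trustworthy sources (illustrative names only).
TRUSTED_SOURCES = {"product_manuals", "internal_knowledge_base"}

def filter_corpus(documents):
    """Keep only documents whose source is on the allowlist, so that
    untrusted text never enters the training data in the first place."""
    return [doc for doc in documents if doc["source"] in TRUSTED_SOURCES]

documents = [
    {"source": "product_manuals", "text": "To reset the device, hold the power button."},
    {"source": "anonymous_forum", "text": "Unverified claim about the device."},
]

print(len(filter_corpus(documents)))  # only the vetted document remains
```

This is the "input end of the funnel" approach the paragraph above describes: cheap to apply once, and it scales with the corpus rather than with the volume of generated answers.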

Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence, but it has no semantic component. It is all form and no content. It is like a spider that can spin an amazing web but has no knowledge of what it is doing. As a consequence, while its artifacts have authority, based on their roots in authoritative texts in the data corpus validated by an extraordinary amount of computational cross-checking, the engine itself has none. ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself.

Third, we need to fully appreciate why interacting with ChatGPT is so seductive. To do so, understand that because it constructs its replies based solely on formal properties, it is selecting for rhetoric, not logic. It is delivering the optimal rhetorical answer to your prompt, not the most expert one. It is the one that is the most popular, not the one that is the most profound. In short, it has a great bedside manner, and that is why we feel so comfortable engaging with it.

Now, given all of the above, it is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible.

Conversely, wherever ambiguity is paramount, wherever judgment is required, or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.

That’s what I think. What do you think?

Image Credit: Pixabay

Ideas Have Limited Value

GUEST POST from Greg Satell

There is a line of thinking that says that the world is built on ideas. It was an idea that launched the American Revolution and created a nation. It was an idea that led Albert Einstein to pursue relativity, Jonas Salk to invent a vaccine, and Steve Jobs to create the iPhone and build the most valuable company in the world.

It is because of the power of ideas that we hold them so dear. We want to protect those we believe are valuable and sometimes become jealous when others think them up first. There’s nothing so rapturous as the moment of epiphany in which an idea forms in our mind and begins to take shape.

Clearly, ideas are important, but not as many believe. America is what it is today, for better or worse, not just because of the principles of its founding, but because of the actions that came after it. We revere people like Einstein, Pauling and Jobs not because of their ideas, but what they did with them. The truth is that although possibilities are infinite, ideas are limited.

The Winklevoss Affair

The muddled story of Facebook’s origin is now well known. Mark Zuckerberg met with the Winklevoss twins and another Harvard classmate to discuss building a social network together. Zuckerberg agreed, but then sandbagged his partners while he built and launched a competing site. He would later pay out a multimillion-dollar settlement for his misdeeds.

Zuckerberg and the Winklevoss twins were paired in the news together again recently when Facebook announced that it’s developing a new cryptocurrency called Libra. As it happens, the Winklevoss twins have been high-profile investors in Bitcoin for a while now. The irony was too delicious for many in the media to ignore. First he stole their idea for Facebook and now he’s doing the same with cryptocurrencies!

Of course this is ridiculous. Social networks like Friendster and Myspace existed before Facebook and many others came after. Most failed. In much the same way, many people today have ideas about starting cryptocurrency businesses. Most of them will fail too. The value of an initial idea is highly questionable.

Different people have similar ideas all the time. In fact, a landmark study published in 1922 identified 148 major inventions or discoveries that at least two different people, working independently, arrived at around the same time. So the fact that both the Winklevoss twins and Zuckerberg wanted to launch a social network was meaningless.

The truth is that Zuckerberg didn’t have to pay the Winklevoss twins because he stole their idea, but because he used their trust to actively undermine their business to benefit his. His crime wasn’t creation, but destruction.

The Semmelweis Myth

In 1847, a young doctor named Ignaz Semmelweis had a major breakthrough. Working in a maternity ward, he discovered that a regime of hand washing could dramatically lower the incidence of childbed fever. Unfortunately, the medical establishment rejected his idea and the germ theory of disease didn’t take hold until decades later.

The phenomenon is now known as the Semmelweis effect, the tendency for people to reject new knowledge that contradicts established beliefs. We tend to think that a great idea will be immediately obvious to everyone, but the opposite usually happens. Ideas that have the power to change the world always arrive out of context for the simple reason that the world hasn’t changed yet.

However, the Semmelweis effect is misleading. As Sherwin Nuland explains in The Doctor’s Plague, there’s more to the story than resistance to a new idea. Semmelweis didn’t see the value in communicating his work effectively, formatting his publications clearly or even collecting data in a manner that would gain his ideas greater acceptance.

Here again, we see the limits of ideas. Like a newborn infant, they can’t survive alone. They need to be nurtured to grow. They need to make friends, interact with other ideas and mature. The tragedy of Semmelweis is not that the medical establishment did not immediately accept his idea, but that he failed to steward it in such a way that it could spread and make an impact.

Why Blockbuster Video Really Failed

One of the most popular business myths today is that of Blockbuster Video. As the story is usually told, the industry giant failed to recognize the disruptive threat that Netflix represented. The truth is that the company’s leadership not only recognized the problem, but developed a smart strategy and executed it well.

The failure, in fact, had less to do with strategy and tactics than it did with managing stakeholder networks. Blockbuster moved quickly to launch an online business, cut late fees and innovated its business model. However, resistance from franchisees, who were concerned that the changes would kill their business, and from investors and analysts, who balked at the cost of the initiatives, sent the stock price reeling.

From there things spiraled downward. The low stock price attracted the corporate raider Carl Icahn, who got control of the board. His overbearing style led to a compensation dispute with Blockbuster’s CEO, John Antioco. Frustrated, Antioco negotiated his exit and left the company in July of 2007.

His successor, Jim Keyes, was determined to reverse Antioco’s strategy: he cut investment in the subscription model, reinstated late fees and shifted focus back to the retail stores in a failed attempt to “leapfrog” the online subscription model. Three years later, in 2010, Blockbuster filed for bankruptcy.

The Fundamental Fallacy Of Ideas

One of the things that amazed me while I was researching my book Cascades was how often movements behind powerful ideas failed. The ones that succeeded weren’t those with different ideas or those of higher quality, but those that were able to align small groups, loosely connected, but united by a shared purpose.

The stories of the Winklevoss twins, Ignaz Semmelweis and Blockbuster Video are all different versions of the same fundamental fallacy, that ideas, if they are powerful enough, can stand on their own. Clearly, that’s not the case. Ideas need to be adopted and then combined with other ideas to make an impact on the world.

The truth is that ideas need ecosystems to support them and that doesn’t happen overnight. To make an idea viable in the real world it needs to continually connect outward, gaining adherents and widening its original context. That takes more than an initial epiphany. It takes the will to make the idea subservient to its purpose.

What we have to learn to accept is that what makes an idea powerful is its ability to solve problems. The ideas embedded in the American Constitution were not new at the time of the country’s founding, but gained power by their application in the real world. In much the same way, we revere Einstein’s relativity, Salk’s vaccine and Jobs’s iPhone because of their impact on the world.

As G.H. Hardy once put it, “For any serious purpose, intelligence is a very minor gift.” The same can be said about ideas. They do not and cannot stand alone, but need the actions of people to bring them to life.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

4 Things Leaders Must Know About Artificial Intelligence and Automation

GUEST POST from Greg Satell

In 2011, MIT economists Erik Brynjolfsson and Andrew McAfee self-published an unassuming e-book titled Race Against The Machine. It quickly became a runaway hit. Before long, the two signed a contract with W. W. Norton & Company to publish a full-length version, The Second Machine Age, which became an immediate bestseller.

The subject of both books was how “digital technologies are rapidly encroaching on skills that used to belong to humans alone.” Although the authors were careful to point out that automation is nothing new, they argued, essentially, that at some point a difference in scale becomes a difference in kind, and they forecast that we were close to hitting a tipping point.

In recent years, their vision has come to be seen as deterministic and apocalyptic, with humans struggling to stay relevant in the face of a future ruled by robot overlords. There’s no evidence that’s true. The future, in fact, will be driven by humans collaborating with other humans to design work for machines to create value for other humans.

1. Automation Doesn’t Replace Jobs, It Replaces Tasks

When a new technology appears, we always seem to assume that its primary value will be to replace human workers and reduce costs, but that’s rarely true. For example, when automatic teller machines first appeared in the early 1970s, most people thought they would lead to fewer branches and tellers, but actually just the opposite happened.

What really happens is that as a task is automated, it becomes commoditized and value shifts somewhere else. That’s why today, as artificial intelligence is ramping up, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive.

That’s why the objective of any viable cognitive strategy is not to cut costs, but to extend capabilities. For example, when simple customer service tasks are automated, that can free up time for human agents to help with more thorny issues. In much the same way, when algorithms can do much of the analytical grunt work, human executives can focus on long-term strategy, which computers tend not to do so well.

The winners in the cognitive era will not be those who can reduce costs the fastest, but those who can unlock the most value over the long haul. That will take more than simply implementing projects. It will require serious thinking about what your organization’s mission is and how best to achieve it.

2. Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Still, the twentieth century was seen as an era of unprecedented prosperity.

We’re in the midst of a similar transformation today. Just as our ancestors toiled in the fields, many of us today spend much of our time doing rote, routine tasks. Yet, as two economists from MIT explain in a paper, the jobs of the future are not white collar or blue collar, but those focused on non-routine tasks, especially those that involve other humans.

Far too often, however, managers fail to recognize value hidden in the work their employees do. They see a certain job description, such as taking an order in a restaurant or answering a customer’s call, and see how that task can be automated to save money. What they don’t see, however, is the hidden value of human interaction often embedded in many jobs.

When we go to a restaurant, we want somebody to take care of us (which is why we didn’t order takeout). When we have a problem with a product or service, we want to know somebody cares about solving it. So the most viable strategy is not to cut jobs, but to redesign them to leverage automation to empower humans to become more effective.

3. As Machines Learn To Think, Cognitive Skills Are Being Replaced By Social Skills

Twenty or thirty years ago, the world was very different. High-value work generally involved retaining information and manipulating numbers. Perhaps not surprisingly, education and corporate training programs focused on building those skills, and people built their careers on performing well at knowledge and quantitative tasks.

Today, however, an average teenager has more access to information and computing power than even a large enterprise had a generation ago. Knowledge retention and quantitative ability have largely been automated and devalued, so high-value work has shifted from cognitive skills to social skills.

To take just one example, the journal Nature has noted that the average scientific paper today has four times as many authors as one did in 1950 and the work they are doing is far more interdisciplinary and done at greater distances than in the past. So even in highly technical areas, the ability to communicate and collaborate effectively is becoming an important skill.

There are some things that a machine will never do. Machines will never strike out at a Little League game, have their hearts broken or see their children born. That makes it difficult, if not impossible, for machines to relate to humans as well as a human can.

4. AI Is A Force Multiplier, Not A Magic Box

The science fiction author Arthur C. Clarke noted that “Any sufficiently advanced technology is indistinguishable from magic,” and that’s largely true. So when people saw a breakthrough technology for the first time, such as when IBM’s Watson system beat top human players at Jeopardy!, many immediately began imagining all the magical possibilities that could be unleashed.

Unfortunately, that always leads to trouble. Many firms raced to implement AI applications without understanding them and were immediately disappointed that the technology was just that — technology — and not actually magic. Besides wasting resources, these projects were also missed opportunities to implement something truly useful.

As Josh Sutton, CEO of Agorai, a platform that helps companies build AI applications for their business, put it, “What I tell business leaders is that AI is useful for tasks you understand well enough that you could do them if you had enough people and enough time, but not so useful if you couldn’t do it with more people and more time. It’s a force multiplier, not a magic box.”

So perhaps most importantly, what business leaders need to understand about artificial intelligence is that it is not inherently utopian or apocalyptic, but a business tool. Much like any other business tool its performance is largely dependent on context and it is a leader’s job to help create that context.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Mystery of Stonehenge Solved

by Braden Kelley

Forget about capturing and reverse engineering alien spacecraft to gain a competitive edge in the innovation race. Sorry, but the universe is billions of years old, and even if some extraterrestrial civilization millions or billions of years older than our own managed to travel here from halfway across the galaxy and crash, it is very likely that we would be incapable of reverse engineering their technology.

Why?

When the United States captures a downed enemy aircraft, we can reverse engineer it because at its core it is still an aircraft, made of materials similar to those we use and built with similar manufacturing processes. In other words, we already have the capabilities to build something similar; we just need a physical example or blueprints of the aircraft.

But, when you are talking about something made using technology thousands, millions, or billions of years more advanced than our own, it becomes less likely that we would be able to reverse engineer found technology. This is because there would likely be materials involved that we haven’t discovered yet, either entirely new elements on the periodic table or alloys that we don’t yet know how to make. Imagine what would happen if a slightly damaged Apollo-era Saturn V rocket suddenly appeared circa 50 AD next to the Pantheon in Rome. How long would it be before the Romans would be able to fly to the moon?

If a large, and overdue, solar event were to occur and destroy all of our electricity-based technology, how long would it take for us to be able to achieve spaceflight again?

Apocalypse Innovation

There is no doubt that human beings developed a different set of technologies prior to the last great apocalypse and most of this knowledge has been lost through time, warfare, and 400 feet of water or 20 feet of earth. Only tall stone constructions away from prehistoric coastlines or items locked away in dry underground vaults survived. History and technology are incredibly perishable.

Twelve thousand years later, we have accomplished some pretty remarkable things, and ground-penetrating radar is giving us new insight into the scope and scale of pre-apocalypse societies hidden undersea and underground.

But, there are a great many mysteries from the ancient world that we are still struggling to reverse engineer. From the pyramids to Stonehenge, people are hypothesizing a number of ways these monuments may have been built and what their true purpose might have been.

Nine years ago, researchers from the University of Amsterdam determined that the blocks of stone moved around the Giza plateau on sledges would have moved more easily if someone walked ahead of them wetting the sand.

Eleven years ago, Wally Wallington of Michigan showed in a YouTube video how he could move stones weighing more than a ton up to 300 feet per hour and then stand them up vertically, all by himself.

He didn’t invent some amazing new piece of technology to do this; instead, he eschewed modern technology and showed how he could do it using basic principles of physics and gravity. First let’s look at the video, and then we’ll talk about what the apocalypse innovation exercise is:

The apocalypse innovation exercise is one way of challenging orthodoxies and is quite simple:

  1. Identify a technology or input that is key to your product or service achieving its goal
  2. Concoct a simple reason why this technology no longer functions or this input is no longer available
  3. Have the group begin to ideate alternative inputs that could be used or alternate technologies that could be leveraged or developed to make the product or service achieve its goal again (If you are looking for a new technology, what are the first principles that you could go back to? And what are the other technology paths you could explore instead? – i.e. acoustic levitation instead of electromagnetic levitation)
  4. Pick one from the list of available options
  5. Re-engage the group to backcast what it will take to replace the existing technology or input with this new one (NOTE: backcasting is the practice of working backwards to show how an outcome will be achieved)
  6. Sketch out how the product or service will change as result of using this new technology or input
  7. Brainstorm ways that this change can be positioned as a benefit for customers

Apocalypse innovation can be a valuable exercise for products or services approaching the upper flattening of the traditional ‘S’ curve that pretty much all innovations go through, and it represents one path to the steeper part of a new ‘S’ curve.

What other exercises do you like to use to help people challenge orthodoxies?

If you’d like to sign up to learn more about my new FutureHacking™ methodology and set of tools, go here.

Moneyball and the Beginning, Middle, and End of Innovation

GUEST POST from Robyn Bolton

Recently, pitchers and catchers reported to MLB Spring Training facilities in Florida and Arizona.  For baseball fans, this is the first sign of Spring, an occasion that heralds months of warmth and sunshine, ballparks filled (hopefully) with cheering fans, dinners of beers and brats, and the undying belief that this year will be the year.

Of course, there was still a lot of dark, dreary cold between then and Opening Day.  Perfect weather for watching baseball movies: Bull Durham, Major League, The Natural, Field of Dreams, and, of course, Moneyball.

Moneyball is based on the book of the same name by Michael Lewis and chronicles the 2002 Oakland Athletics season.  The ’02 Oakland A’s, led by General Manager Billy Beane (played by Brad Pitt), forever changed baseball by adopting an approach that valued rigorous statistical analysis over the collective wisdom of baseball insiders (coaches, scouts, front office personnel) when building a team.  This approach, termed “Moneyball,” enabled the A’s to reach the postseason with a team that cost only $44M in salary, compared to the New York Yankees, who spent $125M to achieve the same outcome.

While the whole movie (and book) is a testament to the courage and perseverance required to challenge and change the status quo, time and again I come back to three lines that perfectly sum up the journey of every successful intrapreneur I’ve ever met.

The Beginning

I know you’ve taken it in the teeth out there, but the first guy through the wall…he always gets bloody…always always gets bloody.  This is threatening not just a way of doing business… but in their minds, it’s threatening the game. Really what it’s threatening is their livelihood, their jobs. It’s threatening the way they do things… and every time that happens, whether it’s the government, a way of doing business, whatever, the people who are holding the reins – they have their hands on the switch – they go batshit crazy.”

John Henry, Owner of the Boston Red Sox

Context

The 2002 season is over, and the A’s have been eliminated in the first round of the playoffs.  John Henry, an owner of the Boston Red Sox, has invited Billy Beane to Boston to offer him the Red Sox GM job.

Lesson

This is what you sign up for when you decide to be an intrapreneur.  The more you challenge the status quo, the more you question how business is done, the more you ask Why and demand an answer, the closer you get to “tak(ing) it in the teeth.”

This is why courage, perseverance, and an unshakeable belief that things can and should be better are absolutely essential for intrapreneurs.  Your job is to run at the wall over and over until you get through it.

People will follow.  The Red Sox did.  They won the World Series in 2004, breaking an 86-year-old curse.

The Middle

“It’s a process, it’s a process, it’s a process”

Billy Beane

Context

Billy has to convince the ballplayers to forget all the habits that made them great and embrace the philosophy of Moneyball: to stop stealing bases, turning double plays on bunts, and swinging for the fences, and to start taking walks, throwing to first for the easy out, and prioritizing getting on base over hitting home runs.

The players are confused and frustrated.  Suddenly, everything that they once did right is wrong and what was not valued is deeply prized.

Lesson

Innovation is something new that creates value.  Something new doesn’t just require change, it requires people to stop doing things that work and start doing things that seem strange or even wrong.

Change doesn’t happen overnight.  It’s not a switch to be flipped.  It’s a process to be learned.  It takes time, practice, reminders, and patience.

The End

“When you get an answer you’re looking for, hang up.”

Billy Beane

Context

In this scene, Billy has offered one of his players to multiple teams, searching for the best deal.  When the phone rings with a deal he likes, he and the other General Manager (GM) agree to it, and Billy hangs up, even though the other GM was in the middle of a sentence.  When Peter Brand, the Assistant GM played by Jonah Hill, points out that Billy just hung up on the other GM, Billy responds with this nugget of wisdom.

Lesson

It’s advice intrapreneurs should take very much to heart.  I often see innovation teams walk into management meetings with long presentations, full of data and projections, anxious to share their progress, and hoping for continued funding and support.  When the meeting starts, a senior exec will say something like, “We’re excited by the progress we’re hearing about, and we want to know what it will take to continue.”

That’s the cue to “hang up.”

Instead of starting the presentation from the beginning, start with “what it will take to continue.”  You got the answer you were looking for – they’re excited about the progress you’ve made – so don’t spend time giving them information they already have or, worse, that could raise questions and dim their enthusiasm.  Hang up on the conversation you want to have and have the conversation they want to have.

In closing

Moneyball was an innovation that fundamentally changed one of the most tradition-bound businesses in sports.  To be successful, it required someone willing to take it in the teeth, to coach people through a process, and to hang up when they got the answer they wanted.  It wasn’t easy but real change rarely is.

The same is true in corporations.  They need their own Billy Beanes.

Are you willing to step up to the plate?

Image credits: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

3 Examples of Why Innovation is a Leadership Problem

Through the Looking Glass

GUEST POST from Robyn Bolton

Do you sometimes feel like you’re living in an alternate reality?

If so, you’re not alone.  Most innovators feel that way at some point.

After all, you see things that others don’t.

Question things that seem inevitable and true.

Make connections where others only see differences.

Do things that seem impossible.

It’s easy to believe that you’re the crazy one, the Mad Hatter and permanent resident of Wonderland.

But what if you’re not the crazy one?

What if you’re Alice?

And you’re stepping through the looking glass every time you go to work?

In Lewis Carroll’s book, the other side of the looking glass is a chessboard, and all its inhabitants are chess pieces that move in defined and prescribed ways, follow specific rules, and achieve defined goals.  Sound familiar?

Here are a few other things that may sound familiar, too:

“The rule is, jam tomorrow and jam yesterday – but never jam today.” – The White Queen

In this scene, the White Queen offers to hire Alice as her lady’s maid and pay her “twopence a week and jam every other day.”  When Alice explains that she doesn’t want the job, doesn’t like jam, and certainly doesn’t want jam today, the queen scoffs and explains the rule.

The problem, Alice points out, is that it’s always today, and that means there’s never jam.

Replace “jam” with “innovation,” and this hits a little too close to home for most innovators.

How often do you hear about the “good old days” when the company was more entrepreneurial, willing to experiment and take risks, and encouraged everyone to innovate?

Innovation yesterday.

How often do you hear that the company will invest in innovation, restart its radical innovation efforts, and disrupt itself as soon as the economy rebounds, business improves, and things settle down a bit?  Innovation tomorrow.

But never innovation today.  After all, “it’s [innovation] every other day: today isn’t any other day, you know.”

“When I use a word, it means just what I choose it to mean – neither more nor less.” – Humpty Dumpty

In this scene, poor Alice tries to converse with Humpty Dumpty, but he keeps using the “wrong” words.  Except they’re not the wrong words because they mean exactly what he chooses them to mean.

Even worse, when Alice asks Humpty to define confusing terms, he gets angry, speaks in a “scornful tone,” and smiles “contemptuously” before “wagging his head gravely from side to side.”

We all know what the words we use mean, but we too often think others share our definitions.  We use “innovation” and “growth,” assuming people know what we mean.  But they don’t.  They know what the words mean to them.  And that may or may not be what we mean.

When managers encourage people to share ideas, challenge the status quo, and take risks, things get even trickier.  People listen, share ideas, challenge the status quo, and take risks.  Then they are confused when management doesn’t acknowledge their efforts.  No one realizes that those requests meant one thing to the managers who gave them and a different thing to the people who did them.

“It takes all the running you can do, to keep in the same place.  If you want to go somewhere else, you must run at least twice as fast as that!” – The Red Queen

In this scene, the Red Queen introduces life on the other side of the looking glass and explains Alice’s new role as a pawn.  Of course, the explanation comes after a long sprint that seems to get them nowhere and only confuses Alice more.

When “tomorrow” finally comes, and it’s time for innovation, it often comes with a mandate to “act with urgency” to avoid falling behind.  I’ve seen managers set goals of creating and launching a business with $250M revenue in 3 years and leadership teams scrambling to develop a portfolio of businesses that would generate $16B in 10 years.

Yes, the world is moving faster, so companies need to increase the pace at which they operate and innovate.  But if you’re doing all you can, you can’t do twice as much.  You need help – more people and more funding, not more meetings or oversight.

“Life, what is it but a dream?”

Managers and executives, like the kings and queens, have roles to play.  They live in a defined space, an org chart rather than a chessboard, and they do their best to navigate it following rules set by tradition, culture, and HR.

But you are like Alice.  You see things differently.  You question what’s taken as given.  And, every now and then, you probably want to shake someone until they grow “shorter – and fatter – and softer – and rounder – and…[into] a kitten, after all.”

So how do you get back to reality and bring everyone with you?  You talk to people.  You ask questions and listen to the answers.  You seek to understand their point of view and then share yours.

Some will choose to stay where they are.

Some will choose to follow you back through the looking glass.

They will be the ones who transform a leadership problem into a leadership triumph.

Image credits: Pixabay


The End of the Digital Revolution

Here’s What You Need to Know

GUEST POST from Greg Satell

The history of digital technology has largely been one of denial followed by disruption. First came the concept of the productivity paradox, which noted the limited economic impact of digital technology. When e-commerce appeared, many doubted that it could ever compete with physical retail. Similar doubts were voiced about digital media.

Today, it’s hard to find anyone who doesn’t believe in the power of digital technology. Whole industries have been disrupted. New applications driven by cloud computing, artificial intelligence and blockchain promise even greater advancement to come. Every business needs to race to adopt them in order to compete for the future.

Ironically, amid all this transformation the digital revolution itself is ending. Over the next decade, new computing architectures will move to the fore and advancements in areas like synthetic biology and materials science will reshape entire fields, such as healthcare, energy and manufacturing. Simply waiting to adapt won’t be enough. The time to prepare is now.

1. Drive Digital Transformation

As I explained in Mapping Innovation, innovation is never a single event, but a process of discovery, engineering and transformation. Clearly, with respect to digital technology, we are deep into the transformation phase. So the first part of any post-digital strategy is to accelerate digital transformation efforts in order to improve your competitive position.

One company that’s done this very well is Walmart. As an old-line incumbent in the physical retail industry, it appeared to be ripe for disruption as Amazon reshaped how customers purchased basic items. Why drive out to a Walmart store for a package of toothpaste when you can just click a few buttons on your phone?

Yet rather than ceding the market to Amazon, Walmart has invested heavily in digital technology and has achieved considerable success. It wasn’t any one particular tactic or strategy that made the difference, but rather the acknowledgment that every single process needed to be reinvented for the digital age. For example, the company is using virtual reality to revolutionize how it does in-store training.

Perhaps most of all, leaders need to understand that digital transformation is human transformation. There is no shortage of capable vendors that can implement technology for you. What’s key, however, is to shift your culture, processes and business model to leverage digital capabilities.

2. Explore Post-Digital Technologies

While digital transformation is accelerating, advancement in the underlying technology is slowing down. Moore’s law, the consistent doubling of computer chip performance over the last 50 years, is nearing its theoretical limits. It has already slowed down considerably and will soon stop altogether. Yet there are non-digital technologies under development that will be far more powerful than anything we’ve ever seen before.

Consider Intel, which sees its future in what it calls heterogeneous computing: combining traditional digital chips with non-digital architectures, such as quantum and neuromorphic. A couple of years ago it announced its Pohoiki Beach neuromorphic system, which processes information up to 1,000 times faster and 10,000 times more efficiently than traditional chips for certain tasks.

IBM has created a network to develop quantum computing technology, which includes research labs, startups and companies that seek to be early adopters of the technology. Like neuromorphic computing, quantum systems have the potential to be thousands, if not millions, of times more powerful than today’s technology.

The problem with these post-digital architectures is that no one really knows how they are going to work. They operate on a very different logic than traditional computers and will require new programming languages and algorithmic strategies. It’s important to start exploring these technologies now or you could find yourself years behind the curve.

3. Focus on Atoms, Not Bits

The digital revolution created a virtual world. My generation was the first to grow up with video games, and our parents worried that we were becoming detached from reality. Then computers entered offices and Dan Bricklin created VisiCalc, the first spreadsheet program. Eventually smartphones and social media appeared, and we began spending almost as much time in the virtual world as we did in the physical one.

Essentially, what we created was a simulation economy. We could experiment with business models in our computers, find flaws and fix them before they became real. Computer-aided design (CAD) software allowed us to design products in bits before we got down to the hard work of shaping atoms. Because it’s much cheaper to fail in the virtual world than the physical one, this made our economy much more efficient.

Yet the next great transformation will be from bits to atoms. Digital technology is creating revolutions in things like genomics and materials science. Artificial intelligence and cloud computing are reshaping fields like manufacturing and agriculture. Quantum and neuromorphic computing will accelerate these trends.

Much like those new computing architectures, the shift from bits to atoms will create challenges. Applying the simulation economy to the world of atoms will require new skills and we will need people with those skills to move from offices in urban areas to factory floors and fields. They will also need to learn to collaborate effectively with people in those industries.

4. Transformation is Always a Journey, Never a Destination

The 20th century was punctuated by two waves of disruption. The first, driven by electricity and internal combustion, transformed almost every facet of daily life and kicked off a 50-year boom in productivity. The second, driven by the microbe, the atom and the bit, transformed fields such as agriculture, healthcare and management.

Each of these technologies followed the pattern of discovery, engineering and transformation. The discovery phase takes place mostly out of sight, with researchers working quietly in anonymous labs. The engineering phase is riddled with errors, as firms struggle to shape abstract concepts into real products. A nascent technology is easy to ignore, because its impact hasn’t been felt yet.

The truth is that disruption doesn’t begin with inventions, but when an ecosystem emerges to support them. That’s when the transformation phase begins and takes us by surprise, because transformation never plays out like we think it will. The future will always be, to a certain extent, unpredictable for the simple reason that it hasn’t happened yet.

Today, we’re on the brink of a new era of innovation that will be driven by new computing architectures, genomics, materials science and artificial intelligence. That’s why we need to design our organizations for transformation by shifting from vertical hierarchies to horizontal networks.

Most of all, we need to shift our mindsets from seeing transformation as a set of discrete objectives to seeing it as a continuous journey of discovery. Digital technology has been only one phase of that journey. The most exciting things are yet to come.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Is China Our New Sputnik Moment?

GUEST POST from Greg Satell

When the Soviets launched Sputnik, the first space satellite, into orbit in 1957, it was a wake-up call for America. Over the next year, President Eisenhower would sign the National Defense Education Act to spur science education, increase funding for research and establish NASA and DARPA to spur innovation.

A few years ago, a report by the Council on Foreign Relations (CFR) argued that we are at a similar point today, but with China. While we have been steadily decreasing federal investment in R&D over the past few decades, our Asian rival has been ramping up and now threatens our leadership in key technologies such as AI, genomics and quantum information technology.

Clearly, we need to increase our commitment to science and innovation and that means increasing financial investment. However, what the report makes clear is that money alone won’t solve the problem. We are, in several important ways, actually undermining our ability to innovate, now and in the future. We need to renew our culture of innovation in America.

Educating And Attracting Talent

The foundation of an innovation economy is education, especially in STEM subjects. Historically, America has been the world’s best educated workforce, but more recently we’ve fallen to fifth among OECD countries for post-secondary education. That’s alarming and something we will certainly need to reverse if we are to compete effectively.

Our educational descent can be attributed to three major causes. First, the rest of the world has become more educated, so the competition has become stiffer. Second is financing: tuition has nearly tripled in the last decade, and student debt has become so onerous that it now takes about 20 years to pay off four years of college. Third, we need to work harder to attract talented people to the United States.

The CFR report recommends developing a “21st century National Defense Education Act” to create scholarships in STEM areas and making it easier for foreign students to get Green Cards when they graduate from our universities. It also points out that we need to work harder to attract foreign talent, especially in high impact areas like AI, genomics and quantum computing.

Unfortunately, we seem to be going the other way. The number of international students coming to American universities is declining. Policies like the Muslim ban and concerns about gun violence are deterring scientific talent from coming here. The denial rate for those on H-1B visas increased from 4% in 2016 to 18% in the first quarter of 2019.

Throughout our history, it has been our openness to new people and new ideas that has made America exceptional. It’s a legitimate question whether that’s still true.

Building Technology Ecosystems

In the 1980s, the US semiconductor industry was on the ropes. Due to increased competition from low-cost Japanese manufacturers, American market share in the DRAM market fell from 70% to 20%. The situation not only had a significant economic impact, there were also important national security implications.

The federal government responded with two initiatives, the Semiconductor Research Corporation and SEMATECH, both nonprofit consortiums that involved government, academia and industry. By the 1990s, American semiconductor manufacturers were thriving again.

Today, we have similar challenges with rare earth elements, battery technology and many manufacturing areas. The Obama administration responded by building similar consortiums to those that were established for semiconductors: The Critical Materials Institute for rare earth elements, JCESR for advanced batteries and the 14 separate Manufacturing Institutes.

Yet here again, we seem to be backsliding. The current administration has sought to slash funding for the Manufacturing Extension Partnership that supports small and medium-sized producers. An addendum to the CFR report also points out that the administration has pushed for a 30% cut in funding for the national labs, which support much of the advanced science critical to driving American technology forward.

Supporting International Trade and Alliances

Another historical strength of the US economy has been our open approach to trade. The CFR report points out that our role as a “central node in a global network of research and development,” gave us numerous advantages, such as access to foreign talent at R&D centers overseas, investment into US industry and cooperative responses to global challenges.

However, the report warns that “the Trump administration’s indiscriminate use of tariffs against China, as well as partners and allies, will harm U.S. innovative capabilities.” It also faults the Trump administration for pulling out of the Trans-Pacific Partnership trade agreement, which would have bolstered our relationship with Asian partners and increased our leverage over China.

The tariffs undermine American industry in two ways. First, because many of the tariffs are on intermediate goods which US firms use to make products for export, we’re undermining our own competitive position, especially in manufacturing. Second, because trade partners such as Canada and the EU have retaliated against our tariffs, our position is weakened further.

Clearly, we compete in an ecosystem driven world in which power does not come from the top, but emanates from the center. Traditionally, America has positioned itself at the center of ecosystems by constantly connecting out. Now that process seems to have reversed itself and we are extremely vulnerable to others, such as China, filling the void.

We Need to Stop Killing Innovation in America

The CFR report, whose task force included such luminaries as Admiral William McRaven, former Google CEO Eric Schmidt and economist Laura Tyson, should set alarm bells ringing. Although the report was focused on national security issues, it pertains to general competitiveness just as well and the picture it paints is fairly bleak.

After World War II, America stood almost alone in the world in terms of production capacity. Through smart policy, we were able to transform that initial advantage into long-term technological superiority. Today, however, we have stiff competition in areas ranging from AI to synthetic biology to quantum systems.

At the same time, we seem to be doing everything we can to kill innovation in America. Instead of working to educate and attract the world’s best talent, we’re making it harder for Americans to attain higher education and for top foreign talent to come and work here. Instead of ramping up our science and technology programs, presidential budgets regularly recommend cutting them. Instead of pulling our allies closer, we are pushing them away.

To be clear, America is still at the forefront of science and technology, vying for leadership in every conceivable area. However, as global competition heats up and we need to be redoubling our efforts, we seem to be doing just the opposite. The truth is that our prosperity is not a birthright to which we are entitled, but a legacy that must be lived up to.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Rethinking Customer Journeys

GUEST POST from Geoffrey A. Moore

Customer journeys are a mainstay of modern marketing programs. Unfortunately, for most companies, they are pointed in the wrong direction!

Most customer journey diagrams I see map the customer’s journey through the vendor’s marketing and sales process. That’s not a customer journey. That is a vendor journey. Customers could not care less about it.

What customers do care about is any journey that leads to value realization in their enterprise. That means true customer journey mapping must work backward from the customer’s value goals and objectives, not forward from the vendor’s sales goals and objectives.

But to do that, the customer-facing team in the vendor organization has to have good intelligence about what value realization the customer is seeking. That means that sales teams must diagnose before they prescribe. They must interrogate before they present. They must listen before they demo.

That is not what the typical sales enablement program teaches. Instead, it instructs salespeople on how to give the standard presentation, how to highlight the product’s competitive advantages, how to counter the competition’s claims—anything and everything except the only thing that really matters—how do you get good customer intelligence from whatever level of management you are able to converse with?

The SaaS business model with its emphasis on subscription and consumption creates a natural occasion for reforming these practices. Net Revenue Retention is the name of the game. Adoption, extension, and expansion of product usage are core to the customer’s Health Score. This only happens when value is truly being realized.

All this is casting the post-sales customer-facing functions of Customer Success and Customer Support in a new light. These relationships are signaling outposts for current customer status. Vendors still need to connect with the top management, for they are the ones who set the value realization goals and provide the budgets to fund the vendor’s offerings, but for day-to-day reality checks on whether the value is actually getting realized, nothing beats feet on the ground.

So, note to vendors. You can still use your vendor-centric customer journey maps to manage your marketing and sales productivity. Just realize these maps are about you, not the customer. You cannot simply assign the customer a mindset that serves your interests. You have to genuinely engage with them to get to actionable truth.

That’s what I think. What do you think?

Image Credit: Pexels
