Why Most Corporate Innovation Programs Fail

(And How To Make Them Succeed)

GUEST POST from Greg Satell

Today, everybody needs to innovate. So it shouldn’t be surprising that corporate innovation programs have become wildly popular. There is an inherent tradeoff between innovation and the type of optimization that operational executives excel at. Creating a separate unit to address innovation just makes intuitive sense.

Yet corporate innovation programs often fail and it’s not hard to see why. Unlike other business functions, like marketing or finance, in a healthy organization everybody takes pride in their ability to innovate. Setting up a separate innovation unit can often seem like an affront to those who work hard to innovate in operational units.

Make no mistake, a corporate innovation program is no panacea. It doesn’t replace the need to innovate every day. Yet a well designed program can augment those efforts, take the business in new directions and create real value. The key to a successful innovation program is to develop a clear mission, built on shared purpose, that can solve important problems.

A Good Innovation Program Extends, It Doesn’t Replace

It’s no secret that Alphabet is one of the most powerful companies in the world. Nevertheless, it has a vulnerability that is often overlooked. Much like Xerox and Kodak decades ago, it’s highly dependent on a single revenue stream. In 2018, 86% of its revenues came from advertising, mostly from its Google search business.

It is with this in mind that the company created its X division. Because the unit was set up to pursue opportunities outside of its core search business, it didn’t encounter significant resistance. In fact, the X division is widely seen as an extension of what made Alphabet so successful in the first place.

Another important aspect is that the X division provides a platform to incubate internal projects. For example, Google Brain started out as a “20% time project.” As it progressed and needed more resources, it was moved to the X division, where it was scaled up further. Eventually, it returned to the mothership and today is an integral part of the core business.

Notice how the vision of the X division was never to replace innovation efforts in the core business, but to extend them. That’s been a big part of its success and has led to exciting new businesses like Waymo autonomous vehicles and the Verily healthcare division.

Focus On Commonality, Not Difference

All too often, innovation programs thrive on difference. They are designed to put together a band of mavericks and disruptors who think differently than the rest of the organization. That may be great for instilling a strong esprit de corps among those involved with the innovation program, but it’s likely to alienate others.

As I explain in Cascades, any change effort must be built on shared purpose and shared values. That’s how you build trust and form the basis for effective collaboration between the innovation program and the rest of the organization. Without those bonds of trust, any innovation effort is bound to fail.

You can see how that works in Alphabet’s X division. It is not seen as fundamentally different from the core Google business, but rather as channeling the company’s strengths in new directions. The business opportunities it pursues may be different, but the core values are the same.

The key question to ask is why you need a corporate innovation program in the first place. If the answer is that you don’t feel your organization is innovative enough, then you need to address that problem first. A well designed innovation program can’t be a band-aid for larger issues within the core business.

Executive Sponsorship Isn’t Enough

Clearly, no corporate innovation program can be successful without strong executive sponsorship. Commitment has to come from the top. Yet just as clearly, executive sponsorship isn’t enough. Unless you can build support among key stakeholders inside and outside the organization, support from the top is bound to erode.

For example, when Eric Haller started Datalabs at Experian, he designed it to be focused on customers, rather than ideas developed internally. “We regularly sit down with our clients and try and figure out what’s causing them agita,” he told me, “because we know that solving problems is what opens up enormous business opportunities for us.”

Because the Datalabs unit works directly with customers to solve problems that are important to them, it has strong support from a key stakeholder group. Another important aspect of Datalabs is that once a project gets beyond the prototype stage, it goes to one of the operational units within the company to be scaled up into a real business. Over the past five years, businesses that originated at Datalabs have added over $100 million in new revenues.

Perhaps most importantly, Haller is acutely aware of how innovation programs can cause resentment, so he works hard to reduce tensions by building collaborations across the organization. Datalabs is not where “innovation happens” at Experian. Rather, it serves to augment and expand capabilities that were already there.

Don’t Look For Ideas, Identify Meaningful Problems

Perhaps most importantly, an innovation program should not be seen as a place to generate ideas. The truth is that ideas can come from anywhere. So designating one particular program as the place where ideas are supposed to happen will not only alienate the rest of the organization, it will also likely mean overlooking important ideas generated elsewhere.

The truth is that innovation isn’t about ideas. It’s about solving problems. In researching my book, Mapping Innovation, I came across dozens of stories from every conceivable industry and field, and each one started with someone who came across a problem they wanted to solve. Sometimes it happened by chance, but in most cases I found that great innovators were actively looking for problems that interested them.

If you look at successful innovation programs like Alphabet’s X division and Experian’s Datalabs, the fundamental activity is exploration. X division explores domains outside of search, while Datalabs explores problems that its customers need solved. Once you identify a meaningful problem, the ideas will come.

That’s the real potential of innovation programs. They provide a space to explore areas that don’t fit with the current business, but may play an important role in its future. A good innovation program doesn’t replace capabilities in the core organization, but leverages them to create new opportunities.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Four Major Shifts Driving the 21st Century

GUEST POST from Greg Satell

In 1900, most people lived much like their ancestors had for millennia. They lived and worked on farms, using animal power and hand tools to augment their own abilities. They inhabited small communities and rarely, if ever, traveled far from home. They engaged in small scale violence and lived short, hard lives.

That would all change over the next century as we learned to harness the power of internal combustion, electricity and atoms. These advancements allowed us to automate physical labor on a large scale, engage in mass production, travel globally and wage violence that could level entire cities.

Today, at the beginning of a new century, we are seeing similar shifts that are far more powerful and are moving far more quickly. Disruption is no longer seen as merely an event but as a way of life, and the fissures are there for all to see. Our future will depend on our determination to solve problems faster than our proclivity to continually create them.

1. Technology Shifts

At the turn of the 20th century, electricity and internal combustion were over a decade old, but hadn’t made much of an impact yet. That would change in the 1920s, as roads got built and new appliances that harnessed the power of electricity were invented. As ecosystems formed around new technologies, productivity growth soared and quality of life increased markedly.

There would be two more major technology shifts over the course of the century. The Green Revolution and the golden age of antibiotics in the 50s and 60s saved an untold number of lives. The digital revolution in the 90s created a new era of communication and media that still reverberates today.

These technological shifts worked for both good and ill in that they revealed the best and worst parts of human nature. Increased mobility helped to bring about violence on a massive scale during two world wars. The digital revolution made war seem almost antiseptic, enabling precision strikes to kill people half a world away at the press of a button.

Today, we are on the brink of a new set of technological shifts that will be more powerful and more pervasive than any we have seen before. The digital revolution is ending, yet new technologies, such as novel computing architectures, artificial intelligence, as well as rapid advancements in genomics and materials science promise to reshape the world as we know it.

2. Resource Shifts

As new technologies reshaped the 20th century, they also reshaped our use of resources. Some of these shifts were subtle, such as how the invention of synthetic indigo dye in Germany affected farmers in India. Yet the biggest resource shift, of course, was the increase in the demand for oil.

The most obvious impact from the rise of oil was how it affected the Middle East. Previously nomadic societies were suddenly awash in money. Within just a single generation, countries like Saudi Arabia, Iraq and Iran became global centers of power. The Arab Oil Embargo of the 1970s nearly brought western societies to their knees and prolonged the existence of the Soviet Union.

So I was more than surprised last year, at a conference in Bahrain, to find that nearly every official talked openly about the need to “get off oil.” With the rise of renewable energy, depending on a single commodity is no longer a viable way to run a society. Today, solar power is soaring in the Middle East.

Still, resource availability remains a powerful force. As the demand for electric vehicles increases, the supply of lithium could become a serious issue. Already, China is threatening to leverage its dominance in rare earth elements in the trade war with the United States. Climate change and population growth are also making water a scarce resource in many places.

3. Migrational Shifts

One of the most notable shifts in the 20th century was how the improvement in mobility enabled people to “vote with their feet.” Those who faced persecution or impoverishment could, if they dared, sail off to some other place where the prospects were better. These migrational shifts also helped shape the 20th century and will likely do the same in the 21st.

Perhaps the most notable migration in the 20th century was from Europe to the United States. Before World War I, immigrants from Southern and Eastern Europe flooded American shores and the backlash led to the Immigration Act of 1924. Later, the rise of fascism led to another exodus from Europe that included many of its greatest scientists.

It was largely through the efforts of immigrant scientists that the United States was able to develop technologies like the atomic bomb and radar during World War II. Less obvious, though, are the contributions of second- and third-generation citizens, who make up a large proportion of the economic and political elite in the US.

Today, the most noteworthy shift is the migration of largely Muslim people from war-torn countries into Europe. Much like America in the 1920s, the strains of taking in so many people so quickly have led to a backlash, with nationalist parties making significant gains in many countries.

4. Demographic Shifts

While the first three shifts played strong roles throughout the 20th century, demographic shifts, in many ways, shaped the second half of the century. The post war generation of Baby Boomers repeatedly challenged traditional values and led the charge in political movements such as the struggle for civil rights in the US, the Prague Spring in Czechoslovakia and the March 1968 protests in Poland.

The main drivers of the Baby Boomers’ influence have been the generation’s size and economic prosperity. In America alone, 76 million people were born between 1946 and 1964, and they came of age in the prosperous years of the 1960s. These factors gave them unprecedented political and economic clout that continues to this day.

Yet now, Millennials, who are more diverse and focused on issues such as the environment and tolerance, are beginning to outnumber Baby Boomers. Much like in the 1960s, their increasing influence is driving trends in politics, the economy and the workplace, and their values often put them in conflict with the Baby Boomers.

However, unlike the Baby Boomers, Millennials are coming of age in an era where prosperity seems to be waning. With Baby Boomers retiring and putting further strains on the economy, especially with regard to healthcare costs, tensions are on the rise.

Building On Progress

As Mark Twain is reputed to have said, “History doesn’t repeat itself, but it does rhyme.” While shifts in technology, resources, migration and demographics were spread throughout the 20th century, today we’re experiencing shifts in all four areas at once. Given that the 20th century was rife with massive wars and genocide, that is somewhat worrying.

Many of the disturbing trends around the world, such as the rise of authoritarian and populist movements, global terrorism and cyber warfare, can be attributed to the four shifts. Yet the 20th century was also a time of great progress. Wars became less frequent, life expectancy doubled and poverty fell while quality of life improved dramatically.

So today, while we face seemingly insurmountable challenges, we should also remember that many of the shifts that cause tensions, also give us the power to solve our problems. Advances in genomics and materials science can address climate change and rising healthcare costs. A rising, multicultural generation can unlock creativity and innovation. Migration can move workers to places where they are sorely needed.

The truth is that every disruptive era is not only fraught with danger, but also opportunity. Every generation faces unique challenges and must find the will to solve them. My hope is that we will do the same. The alternative is unthinkable.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


The Digital Revolution Has Been A Giant Disappointment

GUEST POST from Greg Satell

One of the most often repeated episodes in the history of technology is when Steve Jobs was recruiting John Sculley from his lofty position as CEO at Pepsi to come to Apple. “Do you want to sell sugar water for the rest of your life,” Jobs asked, “or do you want to come with me and change the world?”

It’s a strange conceit of digital denizens that their businesses are something nobler than other industries. While it is true that technology can do some wonderful things, if the aim of Silicon Valley entrepreneurs was truly to change the world, why wouldn’t they apply their formidable talents to something like curing cancer or feeding the hungry?

The reality, as economist Robert Gordon explains in The Rise and Fall of American Growth, is that the measurable impact has been relatively meager. According to the IMF, except for a relatively short burst in growth between 1996 and 2004, productivity has been depressed since the 1970s. We need to rethink how technology impacts our world.

The Old Productivity Paradox

In the 1970s and 80s, business investment in computer technology was increasing by more than 20% per year. Strangely though, productivity growth had decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you would expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, businesses weren’t necessarily investing to improve productivity, but to survive in a more demanding marketplace.

Yet by the late 1990s, increased computing power combined with the Internet to create a new productivity boom. Many economists hailed the digital age as a “new economy” of increasing returns, in which the old rules no longer applied and a small initial advantage would lead to market dominance. The mystery of the productivity paradox, it seemed, had been solved. We just needed to wait for the technology to hit critical mass.

The New Productivity Paradox

By 2004, the law of increasing returns was there for everyone to see. Google already dominated search, Amazon ruled e-commerce, Apple would go on to dominate mobile computing and Facebook would rule social media. Yet as the dominance of the tech giants grew, productivity would once again fall to depressed levels.

Today, more than a decade later, we’re in the midst of a second productivity paradox, just as mysterious as the first. New technologies like mobile computing and artificial intelligence are there for everyone to see, but they have done little, if anything, to boost productivity.

At the same time, the power of digital technology is diminishing. Moore’s Law, the decades-old paradigm of continuous doubling in computer processing power, is slowing down and will soon end completely. Without advancement in the underlying technology, it is hard to see how digital technology will ever power another productivity boom.

Considering the optimistic predictions of digital entrepreneurs like Steve Jobs, this is incredibly disappointing. Compare the meager eight years of elevated productivity that digital technology produced with the 50-year boom in productivity created in the wake of electricity and internal combustion and it’s clear that digital technology simply doesn’t measure up.

The Baumol Effect, The Clothesline Paradox and Other Headwinds

Much like the first productivity paradox, it’s hard to determine exactly why the technological advancement over the last 15 years has amounted to so little. Most likely, it is not one factor in particular, but the confluence of a number of them. Increasing productivity growth in an advanced economy is no simple thing.

One possibility for the lack of progress is the Baumol effect, the principle that some sectors of the economy are resistant to productivity growth. For example, despite the incredible efficiency that Jeff Bezos has produced at Amazon, his barber still only cuts one head of hair at a time. In a similar way, sectors like healthcare and education, which require a large amount of labor inputs that resist automation, will act as a drag on productivity growth.

Another factor is the Clothesline paradox, which gets its name from the fact that when you dry your clothes in a machine, it figures into GDP data, but when you hang them on a clothesline, no measurable output is produced. In much the same way, when you use a smartphone to take pictures or to give you directions, there is considerable benefit that doesn’t result in any financial transactions. In fact, because you use less gas and don’t develop film, GDP decreases somewhat.

Additionally, the economist Robert Gordon, mentioned above, notes six headwinds to economic growth, including aging populations, limits to increasing education, income inequality, outsourcing, environmental costs due to climate change and rising household and government debt. It’s hard to see how digital technology will make a dent in any of these problems.

Technology is Never Enough to Change the World

Perhaps the biggest reason that the digital revolution has been such a big disappointment is because we expected the technology to largely do the work for us. While there is no doubt that computers are powerful tools, we still need to put them to good use and we have clearly missed opportunities in that regard.

Think about what life was like in 1900, when the typical American family didn’t have access to running water, electricity or gas powered machines such as tractors or automobiles. Even something as simple as cooking a meal took hours of backbreaking labor. Yet investments in infrastructure and education combined with technology to produce prosperity.

Today, however, there is no comparable effort to invest in education and healthcare for those who cannot afford it, to limit the effects of climate change, to reduce debt or to do anything of significance to mitigate the headwinds we face. We are awash in nifty gadgets, but in many ways we are no better off than we were 30 years ago.

None of this was inevitable; it is the result of choices that we have made. We can, if we really want to, make different choices in the days and years ahead. What I hope we have learned from our digital disappointments is that technology itself is never enough. We are truly the masters of our fate, for better or worse.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Navigating the AI Revolution

Leveraging the Three Horizons for Tomorrow’s Triumphs

GUEST POST from Teresa Spangler

The future belongs to those who prepare for it today. As we stand at the dawn of the AI revolution, we must not merely adapt to change; we must anticipate it, shape it, and turn it to our advantage. Embracing the three horizons of AI is not just about technology or strategy; it’s about purpose – our purpose as leaders to guide our organizations, our people, and our society into a prosperous, equitable, and truly human future.

Teresa Spangler

As we turn the page on a year of profound transformation, the horizon of 2024 and beyond takes shape. Artificial Intelligence (AI) is steadfastly marching forward, and as leaders, the pressing call to pilot our organizations through these new frontiers couldn’t be more urgent. We must explore how executive leadership can take actionable measures today to harness tomorrow’s opportunities.

As the silhouette of 2024 looms ahead, we realize that maneuvering through the turbulent waters of change requires not just a reactive approach, but a meticulously charted plan. A navigational tool that can prove invaluable in this journey is the Three Horizons framework for futures planning. This framework allows us to methodically comprehend, envision, and shape our path through the cascading waves of AI development. By exploring each horizon in detail, we can create a strategic roadmap that integrates immediate actions, mid-term plans, and long-term visions. Let’s delve deeper into this process, beginning with the groundwork of understanding today’s AI landscape.

The Groundwork: Understanding Today’s AI Landscape – Horizon 1

In the fast-paced whirlwind of AI, a comprehensive grasp of today’s landscape is the cornerstone for future triumphs. Familiarity with various AI technologies, like machine learning, natural language processing, robotics, and computer vision, is now an indispensable part of the executive toolkit. However, theory is merely the starting point.

Turning this knowledge into strategic assets necessitates that you:

  • Actively interact with AI tools like ChatGPT, DALL-E, DeepArt, DeepDream, Stable Diffusion, and Midjourney. Developing even rudimentary AI models with platforms like TensorFlow or PyTorch can shed light on AI’s potential and limitations (see the minimal sketch after this list). For instance, IBM’s Project Debater showcases how AI can understand context and form logical arguments, pushing the boundary of natural language processing.
  • Forecast AI’s immediate future by leveraging trends in AI research, market dynamics, societal needs, and regulatory shifts. Access the best industry reports and collaborate with external experts who offer invaluable insights. A recent McKinsey report, for instance, found that companies integrating AI were nearly twice as likely to be top-quartile performers in their industry.
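
Since the first bullet suggests building rudimentary models with TensorFlow or PyTorch, here is a minimal PyTorch sketch of the kind of toy model an executive team could build and run in a single sitting. It is illustrative only: the synthetic data, layer sizes and epoch count are assumptions made for this example, not a recommended recipe.

```python
# A minimal PyTorch sketch: train a tiny classifier on synthetic data.
# Illustrative only -- the data, architecture and epoch count are arbitrary.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Synthetic data: 200 samples, 4 features, binary label.
X = torch.randn(200, 4)
y = (X.sum(dim=1) > 0).float().unsqueeze(1)  # toy rule standing in for a real signal

model = nn.Sequential(
    nn.Linear(4, 8),   # small hidden layer
    nn.ReLU(),
    nn.Linear(8, 1),   # single logit output
)
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)

for epoch in range(100):
    optimizer.zero_grad()
    loss = loss_fn(model(X), y)
    loss.backward()
    optimizer.step()

with torch.no_grad():
    accuracy = ((model(X) > 0).float() == y).float().mean()
print(f"final loss {loss.item():.3f}, train accuracy {accuracy:.0%}")
```

Even an exercise this small surfaces the questions that matter at scale: where the data comes from, what the model can and cannot learn from it, and how you would know whether to trust the output.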

It’s widely acknowledged that AI will significantly alter the dynamics of how our world operates. While the intricacies of this transformation can seem complex, it’s certainly not an insurmountable challenge! The Three Horizons methodology is one of many effective strategies your organization can adopt to manage this transition. By strategically navigating through these horizons with a cohesive team and a well-articulated plan, your organization will be well-positioned to embrace the AI revolution. Here are a few other methodologies you might consider:

  1. Scenario Planning: This approach involves envisioning different future states and developing strategies to succeed in each potential scenario.
  2. Backcasting: Starting with a desirable future end-state, this method works backward to identify the strategic steps required to reach that goal.
  3. Roadmapping: This technique charts out the evolution of technologies and products, helping you understand how technological progress might affect your business over time.

Choosing the right methodology will depend on your specific circumstances and objectives. Regardless of the approach, remember that the key to success lies in aligning your team and developing a clear, comprehensive plan of action.

On to Horizons 2 and 3

Navigating the Waves: Crafting the Mid-Term AI Future – Horizon 2

As part of the C-suite, your role extends beyond mere reactions to change – you’re a herald of future trends. Structuring the mid-term AI future necessitates:

  • Assimilating the implications of AI for your industry. Evaluate how job roles might evolve, identify the ethical and privacy concerns, and understand the geopolitical interplays of AI on your global strategies. For instance, AI-driven automation could reshape employment, as seen with Amazon’s warehouse robots.
  • Tailoring a 3-5 year forecast using foresight platforms like FuturePlatform to incorporate technological breakthroughs, policy changes, societal trends, and economic factors. Staying informed about AI regulations through think tanks like the AI Now Institute can help you navigate this complex terrain.

Setting the Sails: Envisioning a Decade Ahead – Horizon 3

Leadership in the AI epoch means having the courage to gaze beyond the immediate future. For the long-term horizon, consider the following:

  • Contemplating the possibilities. Quantum computing, advanced neural networks, and sophisticated AI-human interfaces might be the norm a decade from now. Consider how Microsoft’s recent advancements in quantum computing could revolutionize data processing and analysis in your industry.
  • Employing scenario planning to prepare for a multitude of futures. Use strategic planning software like Lucidchart to visualize different assumptions about technological progress, regulatory changes, and societal evolution.
  • Formulating strategic plans based on these scenarios. The essence of leadership is making today’s decisions with an eye on tomorrow’s probabilities.
  • Maximizing the power of external expertise. Benefit from programs like Plazabridge Group’s Innovation Pro™, Innofusion™ Transformation, Innofusion™ Sprint, and Innofusion™ Sustainability Assessment to aid your journey. These programs offer valuable outside perspectives that can enrich your understanding and application of AI. They provide fresh insights, hands-on experience, and expert guidance in navigating the complex AI landscape.

External experts act as crucial navigators in this AI expedition. They help decode ethical challenges, demystify technological complexities, and forecast future trends, equipping executives to make well-informed, strategic decisions in the face of AI’s rapid evolution.

As we draw closer to 2024, remember that we’re not merely spectators of the emerging AI revolution – we’re the trailblazers. As leaders, we have the power to do more than respond to change; we can architect it. The ripples of our leadership will extend beyond our organizations, shaping the very fabric of our society. The future isn’t something that simply happens to us – we’re active participants in its creation. Now is the time to embrace this momentous journey, and lead with boldness and determination.

Image credit: Unsplash


Bringing Yin and Yang to the Productivity Zone

GUEST POST from Geoffrey A. Moore

Digital transformation is hardly new. Advances in computing create more powerful infrastructure which in turn enables more productive operating models which in turn can enable wholly new business models. From mainframes to minicomputers to PCs to the Internet to the World Wide Web to cloud computing to mobile apps to social media to generative AI, the hits just keep on coming, and every IT organization is asked to both keep the current systems running and to enable the enterprise to catch the next wave. And that’s a problem.

The dynamics of productivity involve a yin and yang exchange between systems that improve efficiency and programs that improve effectiveness. Systems, in this model, are intended to maintain state, with as little friction as possible. Programs, in this model, are intended to change state, with maximum impact within minimal time. Each has its own governance model, and the two must not be blended.

It is a rare IT organization that does not know how to maintain its own systems. That’s Job 1, and the decision rights belong to the org itself. But many IT organizations lose their way when it comes to programs—specifically, the digital transformation initiatives that are re-engineering business processes across every sector of the global economy. They do not lose their way with respect to the technology of the systems. They are missing the boat on the management of the programs.

Specifically, when the CEO champions the next big thing, and IT gets a big chunk of funding, the IT leader commits to making it all happen. This is a mistake. Digital transformation entails re-engineering one or more operating models. These models are executed by organizations outside of IT. For the transformation to occur, the people in these organizations need to change their behavior, often drastically. IT cannot—indeed, must not—commit to this outcome. Change management is the responsibility of the consuming organization, not the delivery organization. In other words, programs must be pulled. They cannot be pushed. IT in its enthusiasm may believe it can evangelize the new operating model because people will just love it. Let me assure you—they won’t. Everybody endorses change as long as other people have to be the ones to do it. No one likes to move their own cheese.

Given all that, here’s the playbook to follow:

  1. If it is a program, the head of the operating unit that must change its behavior has to sponsor the change and pull the program in. Absent this commitment, the program simply must not be initiated.
  2. To govern the program, the Program Management Office needs a team of four, consisting of the consuming executive, the IT executive, the IT project manager, and the consuming organization’s program manager. The program manager, not the IT manager, is responsible for change management.
  3. The program is defined by a performance contract that uses a current state/future state contrast to establish the criteria for program completion. Until the future state is achieved, the program is not completed.
  4. Once the future state is achieved, then the IT manager is responsible for securing the system that will maintain state going forward.

Delivering programs that do not change state is the biggest source of waste in the Productivity Zone. There is an easy fix for this. Just say No.

That’s what I think. What do you think?

Image Credit: Unsplash

A Triumph of Artificial Intelligence Rhetoric

Understanding ChatGPT

GUEST POST from Geoffrey A. Moore

I recently finished reading Stephen Wolfram’s very approachable introduction to ChatGPT, What Is ChatGPT Doing … and Why Does It Work?, and I encourage you to do the same. It has sparked a number of thoughts that I want to share in this post.

First, if I have understood Wolfram correctly, what ChatGPT does can be summarized as follows (a toy illustration in code appears after the list):

  1. Ingest an enormous corpus of text from every available digitized source.
  2. While so doing, assign to each unique word a unique identifier, a number that will serve as a token to represent that word.
  3. Within the confines of each text, record the location of every token relative to every other token.
  4. Using just these two elements—token and location—determine for every word in the entire corpus the probability of it being adjacent to, or in the vicinity of, every other word.
  5. Feed these probabilities into a neural network to cluster words and build a map of relationships.
  6. Leveraging this map, given any string of words as a prompt, use the neural network to predict the next word (just like AutoCorrect).
  7. Based on feedback from so doing, adjust the internal parameters of the neural network to improve its performance.
  8. As performance improves, extend the reach of prediction from the next word to the next phrase, then to the next clause, the next sentence, the next paragraph, and so on, improving performance at each stage by using feedback to further adjust its internal parameters.
  9. Based on all of the above, generate text responses to user questions and prompts that reviewers agree are appropriate and useful.
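
To make the token-and-location idea tangible, here is a toy bigram model in Python. It is a drastic simplification of steps 3, 4 and 6 above (no neural network, no feedback loop, just raw adjacency counts), and the miniature corpus is an invented stand-in for “every available digitized source,” but it shows how plausible-looking text can emerge from nothing except word-position statistics.

```python
# Toy bigram "language model": predict the next word purely from
# adjacency statistics, with no semantic understanding at all.
import random
from collections import Counter, defaultdict

corpus = (
    "the cat sat on the mat . the dog sat on the rug . "
    "the cat chased the dog . the dog chased the cat ."
).split()

# Steps 3-4 in miniature: record which word follows which, and how often.
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def next_word(word):
    """Sample the next word in proportion to how often it followed `word`."""
    counts = following[word]
    return random.choices(list(counts), weights=list(counts.values()))[0]

# Step 6 in miniature: given a prompt, keep predicting the next word.
random.seed(1)
text = ["the"]
for _ in range(10):
    text.append(next_word(text[-1]))
print(" ".join(text))  # plausible word order, zero understanding
```

Scale the corpus up by many orders of magnitude and replace the raw counts with a trained neural network, and you have the outline, though only the outline, of what Wolfram describes.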

OK, I concede this is a radical oversimplification, but for the purposes of this post, I do not think I am misrepresenting what is going on, specifically with respect to the most important point to register about ChatGPT. That point is a simple one: ChatGPT has no idea what it is talking about.

Indeed, ChatGPT has no ideas of any kind—no knowledge or expertise—because it has no semantic information. It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content. By processing the entirety of its corpus, it can generate the most probable sequence of words that correlates with the input prompt it has been fed. Additionally, it can modify that sequence based on subsequent interactions with an end user. As human beings participating in that interaction, we process these interactions as a natural language conversation with an intelligent agent, but that is not what is happening at all. ChatGPT is using our prompts to initiate a mathematical exercise using tokens and locations as its sole variables.

OK, so what? I mean, if it works, isn’t that all that matters? Not really. Here are some key concerns.

First, and most importantly, ChatGPT cannot be expected to be self-governing when it comes to content. It has no knowledge of content. So, whatever guardrails one has in mind would have to be put in place either before the data gets into ChatGPT or afterward to intercept its answers prior to passing them along to users. The latter approach, however, would defeat the whole purpose of using it in the first place by undermining one of ChatGPT’s most attractive attributes—namely, its extraordinary scalability. So, if guardrails are required, they need to be put in place at the input end of the funnel, not the output end. That is, by restricting the datasets to trustworthy sources, one can ensure that the output will be trustworthy, or at least not malicious. Fortunately, this is a practical solution for a reasonably large set of use cases. To be fair, reducing the size of the input dataset diminishes the number of examples ChatGPT can draw upon, so its output is likely to be a little less polished from a rhetorical point of view. Still, for many use cases, this is a small price to pay.
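
As a concrete, if simplified, illustration of guardrails at the input end, the sketch below admits documents into a training corpus only if they come from an allowlist of trusted sources. The source names and the allowlist are invented for this example; real curation pipelines are far more involved.

```python
# Input-end guardrails: filter the corpus before the model ever sees it.
# Source names and the allowlist are hypothetical.
TRUSTED_SOURCES = {"internal-kb", "product-docs", "peer-reviewed"}

documents = [
    {"source": "internal-kb", "text": "How to reset a password..."},
    {"source": "random-forum", "text": "One weird trick..."},
    {"source": "product-docs", "text": "API reference, version 2..."},
]

def curate(docs, allowlist):
    """Keep a document only if its source is on the allowlist."""
    return [d for d in docs if d["source"] in allowlist]

training_corpus = curate(documents, TRUSTED_SOURCES)
print([d["source"] for d in training_corpus])  # ['internal-kb', 'product-docs']
```

Everything downstream inherits whatever trust the curation step established, which is exactly the tradeoff described above: a smaller, cleaner corpus in exchange for less polished but safer output.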

Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence, but it has no semantic component. It is all form and no content. It is like a spider that can spin an amazing web, but it has no knowledge of what it is doing. As a consequence, while its artifacts have authority, based on their roots in authoritative texts in the data corpus validated by an extraordinary amount of cross-checking computing, the engine itself has none. ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself.

Third, we need to fully appreciate why interacting with ChatGPT is so seductive. To do so, understand that because it constructs its replies based solely on formal properties, it is selecting for rhetoric, not logic. It is delivering the optimal rhetorical answer to your prompt, not the most expert one. It is the one that is the most popular, not the one that is the most profound. In short, it has a great bedside manner, and that is why we feel so comfortable engaging with it.

Now, given all of the above, it is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible.

Conversely, wherever ambiguity is paramount, wherever judgment is required, or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.

That’s what I think. What do you think?

Image Credit: Pixabay

Ideas Have Limited Value

GUEST POST from Greg Satell

There is a line of thinking that says that the world is built on ideas. It was an idea that launched the American Revolution and created a nation. It was an idea that led Albert Einstein to pursue relativity, Jonas Salk to develop the polio vaccine and Steve Jobs to create the iPhone and build the most valuable company in the world.

It is because of the power of ideas that we hold them so dear. We want to protect those we believe are valuable and sometimes become jealous when others think them up first. There’s nothing so rapturous as the moment of epiphany in which an idea forms in our mind and begins to take shape.

Clearly, ideas are important, but not as important as many believe. America is what it is today, for better or worse, not just because of the principles of its founding, but because of the actions that came after it. We revere people like Einstein, Salk and Jobs not because of their ideas, but because of what they did with them. The truth is that although possibilities are infinite, ideas are limited.

The Winklevoss Affair

The muddled story of Facebook’s origin is now well known. Mark Zuckerberg met with the Winklevoss twins and another Harvard classmate to discuss building a social network together. Zuckerberg agreed, but then sandbagged his partners while he built and launched a competing site. He would later pay out a multimillion dollar settlement for his misdeeds.

Zuckerberg and the Winklevoss twins were paired in the news together again recently when Facebook announced that it’s developing a new cryptocurrency called Libra. As it happens, the Winklevoss twins have been high profile investors in Bitcoin for a while now. The irony was too delicious for many in the media to ignore. First he stole their idea for Facebook and now he’s doing the same with cryptocurrencies!

Of course this is ridiculous. Social networks like Friendster and Myspace existed before Facebook and many others came after. Most failed. In much the same way, many people today have ideas about starting cryptocurrency businesses. Most of them will fail too. The value of an initial idea is highly questionable.

Different people have similar ideas all the time. In fact, a landmark study published in 1922 identified 148 major inventions or discoveries made independently by at least two different people at roughly the same time. So the fact that both the Winklevoss twins and Zuckerberg wanted to launch a social network was meaningless.

The truth is that Zuckerberg didn’t have to pay the Winklevoss twins because he stole their idea, but because he used their trust to actively undermine their business to benefit his. His crime wasn’t creation, but destruction.

The Semmelweis Myth

In 1847, a young doctor named Ignaz Semmelweis had a major breakthrough. Working in a maternity ward, he discovered that a regime of hand washing could dramatically lower the incidence of childbed fever. Unfortunately, the medical establishment rejected his idea and the germ theory of disease didn’t take hold until decades later.

The phenomenon is now known as the Semmelweis effect, the tendency for people to reject new knowledge that contradicts established beliefs. We tend to think that a great idea will be immediately obvious to everyone, but the opposite usually happens. Ideas that have the power to change the world always arrive out of context for the simple reason that the world hasn’t changed yet.

However, the Semmelweis effect is misleading. As Sherwin Nuland explains in The Doctor’s Plague, there’s more to the story than resistance to a new idea. Semmelweis didn’t see the value in communicating his work effectively, formatting his publications clearly or even collecting data in a manner that would gain his ideas greater acceptance.

Here again, we see the limits of ideas. Like a newborn infant, they can’t survive alone. They need to be nurtured to grow. They need to make friends, interact with other ideas and mature. The tragedy of Semmelweis is not that the medical establishment did not immediately accept his idea, but that he failed to steward it in such a way that it could spread and make an impact.

Why Blockbuster Video Really Failed

One of the most popular business myths today is that of Blockbuster Video. As the story is usually told, the industry giant failed to recognize the disruptive threat that Netflix represented. The truth is that the company’s leadership not only recognized the problem, but developed a smart strategy and executed it well.

The failure, in fact, had less to do with strategy and tactics than it did with managing stakeholder networks. Blockbuster moved quickly to launch an online business, cut late fees and innovated its business model. However, resistance from franchisees, who were concerned that the changes would kill their business, and from investors and analysts, who balked at the cost of the initiatives, sent the stock price reeling.

From there things spiraled downward. The low stock price attracted the corporate raider Carl Icahn, who got control of the board. His overbearing style led to a compensation dispute with Blockbuster’s CEO, John Antioco. Frustrated, Antioco negotiated his exit and left the company in July of 2007.

His successor, Jim Keyes, was determined to reverse Antioco’s strategy. He cut investment in the subscription model, reinstated late fees and shifted focus back to the retail stores in a failed attempt to “leapfrog” the online subscription model. Three years later, in 2010, Blockbuster filed for bankruptcy.

The Fundamental Fallacy Of Ideas

One of the things that amazed me while I was researching my book Cascades was how often movements behind powerful ideas failed. The ones that succeeded weren’t those with different ideas or those of higher quality, but those that were able to align small groups, loosely connected, but united by a shared purpose.

The stories of the Winklevoss twins, Ignaz Semmelweis and Blockbuster Video are all different versions of the same fundamental fallacy, that ideas, if they are powerful enough, can stand on their own. Clearly, that’s not the case. Ideas need to be adopted and then combined with other ideas to make an impact on the world.

The truth is that ideas need ecosystems to support them and that doesn’t happen overnight. To make an idea viable in the real world it needs to continually connect outward, gaining adherents and widening its original context. That takes more than an initial epiphany. It takes the will to make the idea subservient to its purpose.

What we have to learn to accept is that what makes an idea powerful is its ability to solve problems. The ideas embedded in the American Constitution were not new at the time of the country’s founding, but gained power by their application in the real world. In much the same way, we revere Einstein’s relativity, Salk’s vaccine and Jobs’ iPhone because of their impact on the world.

As G.H. Hardy once put it, “For any serious purpose, intelligence is a very minor gift.” The same can be said about ideas. They do not and cannot stand alone, but need the actions of people to bring them to life.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

4 Things Leaders Must Know About Artificial Intelligence and Automation

GUEST POST from Greg Satell

In 2011, MIT economists Erik Brynjolfsson and Andrew McAfee self-published an unassuming e-book titled Race Against The Machine. It quickly became a runaway hit. Before long, the two signed a contract with W. W. Norton & Company to publish a full-length version, The Second Machine Age, which was an immediate bestseller.

The subject of both books was how “digital technologies are rapidly encroaching on skills that used to belong to humans alone.” Although the authors were careful to point out that automation is nothing new, they argued, essentially, that at some point a difference in scale becomes a difference in kind, and they forecast that we were close to hitting a tipping point.

In recent years, their vision has come to be seen as deterministic and apocalyptic, with humans struggling to stay relevant in the face of a future ruled by robot overlords. There’s no evidence that’s true. The future, in fact, will be driven by humans collaborating with other humans to design work for machines to create value for other humans.

1. Automation Doesn’t Replace Jobs, It Replaces Tasks

When a new technology appears, we always seem to assume that its primary value will be to replace human workers and reduce costs, but that’s rarely true. For example, when automatic teller machines first appeared in the early 1970s, most people thought they would lead to fewer branches and tellers, but actually just the opposite happened.

What really happens is that as a task is automated, it becomes commoditized and value shifts somewhere else. That’s why today, as artificial intelligence is ramping up, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive.

That’s why the objective of any viable cognitive strategy is not to cut costs, but to extend capabilities. For example, when simple customer service tasks are automated, that can free up time for human agents to help with more thorny issues. In much the same way, when algorithms can do much of the analytical grunt work, human executives can focus on long-term strategy, which computers tend not to do so well.

The winners in the cognitive era will not be those who can reduce costs the fastest, but those who can unlock the most value over the long haul. That will take more than simply implementing projects. It will require serious thinking about what your organization’s mission is and how best to achieve it.

2. Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Still, the twentieth century was seen as an era of unprecedented prosperity.

We’re in the midst of a similar transformation today. Just as our ancestors toiled in the fields, many of us today spend much of our time doing rote, routine tasks. Yet, as two economists from MIT explain in a paper, the jobs of the future are not white collar or blue collar, but those focused on non-routine tasks, especially those that involve other humans.

Far too often, however, managers fail to recognize value hidden in the work their employees do. They see a certain job description, such as taking an order in a restaurant or answering a customer’s call, and see how that task can be automated to save money. What they don’t see, however, is the hidden value of human interaction often embedded in many jobs.

When we go to a restaurant, we want somebody to take care of us (which is why we didn’t order takeout). When we have a problem with a product or service, we want to know somebody cares about solving it. So the most viable strategy is not to cut jobs, but to redesign them to leverage automation to empower humans to become more effective.

3. As Machines Learn To Think, Cognitive Skills Are Being Replaced By Social Skills

Twenty or thirty years ago, the world was very different. High-value work generally involved retaining information and manipulating numbers. Perhaps not surprisingly, education and corporate training programs focused on building those skills, and people built their careers on performing well at knowledge and quantitative tasks.

Today, however, an average teenager has more access to information and computing power than even a large enterprise had a generation ago. Knowledge retention and quantitative ability have largely been automated and devalued, and high-value work has shifted from cognitive skills to social skills.

To take just one example, the journal Nature has noted that the average scientific paper today has four times as many authors as one did in 1950 and the work they are doing is far more interdisciplinary and done at greater distances than in the past. So even in highly technical areas, the ability to communicate and collaborate effectively is becoming an important skill.

There are some things that a machine will never do. Machines will never strike out at a Little League game, have their hearts broken or see their children born. That makes it difficult, if not impossible, for machines to relate to humans as well as a human can.

4. AI Is A Force Multiplier, Not A Magic Box

The science fiction author Arthur C. Clarke noted that “Any sufficiently advanced technology is indistinguishable from magic,” and that’s largely true. So when we see a breakthrough technology for the first time, as when IBM’s Watson system beat top human players at Jeopardy!, many of us immediately begin imagining all the magical possibilities that could be unleashed.

Unfortunately, that always leads to trouble. Many firms raced to implement AI applications without understanding them and were immediately disappointed that the technology was just that — technology — and not actually magic. Besides wasting resources, these projects were also missed opportunities to implement something truly useful.

As Josh Sutton, CEO of Agorai, a platform that helps companies build AI applications for their business, put it, “What I tell business leaders is that AI is useful for tasks you understand well enough that you could do them if you had enough people and enough time, but not so useful if you couldn’t do it with more people and more time. It’s a force multiplier, not a magic box.”

So perhaps most importantly, what business leaders need to understand about artificial intelligence is that it is not inherently utopian or apocalyptic, but a business tool. Much like any other business tool its performance is largely dependent on context and it is a leader’s job to help create that context.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Mystery of Stonehenge Solved

by Braden Kelley

Forget about capturing and reverse engineering alien spacecraft to gain a competitive edge in the innovation race. Sorry, but the universe is billions of years old, and even if some extraterrestrial civilization millions or billions of years older than our own managed to travel here from halfway across the galaxy and crash, it is very likely that we would be incapable of reverse engineering their technology.

Why?

When the United States captures a downed enemy aircraft, we can reverse engineer it because at its core it is still an aircraft, made of materials similar to those we use and built with similar manufacturing processes. That means we already have the capabilities to build something similar; we just need a physical example or blueprints of the aircraft.

But, when you are talking about something made using technology thousands, millions, or billions of years more advanced than our own, it becomes less likely that we would be able to reverse engineer found technology. This is because there would likely be materials involved that we haven’t discovered yet, either entirely new elements on the periodic table or alloys that we don’t yet know how to make. Imagine what would happen if a slightly damaged Apollo-era Saturn V rocket suddenly appeared circa 50 AD next to the Pantheon in Rome. How long would it be before the Romans would be able to fly to the moon?

If a large, and overdue, solar event were to occur and destroy all of our electricity-based technology, how long would it take for us to be able to achieve spaceflight again?

Apocalypse Innovation

There is no doubt that human beings developed a different set of technologies prior to the last great apocalypse and most of this knowledge has been lost through time, warfare, and 400 feet of water or 20 feet of earth. Only tall stone constructions away from prehistoric coastlines or items locked away in dry underground vaults survived. History and technology are incredibly perishable.

Twelve thousand years later, we have accomplished some pretty remarkable things, and ground penetrating radar is giving us new insight into the scope and scale of pre-apocalypse societies hidden undersea and underground.

But there are a great many mysteries from the ancient world that we are still struggling to reverse engineer. From the pyramids to Stonehenge, people continue to hypothesize about how these monuments may have been built and what their true purpose might have been.

Nine years ago, researchers from the University of Amsterdam determined that the blocks of stone moved around the Giza plateau on sledges would have slid more easily if someone walked ahead of them wetting the sand.

Eleven years ago, Wally Wallington of Michigan showed in a YouTube video how he could move stones weighing more than a ton at up to 300 feet per hour, and then stand them up vertically, all by himself.

He didn’t invent some amazing new piece of technology to do this. Instead, he eschewed modern machinery and showed how he could do it using basic principles of physics and gravity. First let’s look at the video, and then we’ll talk about what the apocalypse innovation exercise is.
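To get a rough sense of why simple leverage can move stones this heavy, consider a back-of-the-envelope torque calculation. This is an illustrative sketch of the physics, not Wallington’s exact rig: tipping a uniform 1,000 kg block about its far bottom edge, using a 3-meter pry bar with the fulcrum 0.25 meters from the load.

\begin{aligned}
W &= mg \approx 1000\,\mathrm{kg} \times 9.8\,\mathrm{m/s^2} \approx 9800\,\mathrm{N} && \text{(weight of the block)} \\
F_{\mathrm{edge}} &= \tfrac{1}{2}W \approx 4900\,\mathrm{N} && \text{(lift needed at the far edge to start tipping)} \\
F_{\mathrm{hand}} &\approx F_{\mathrm{edge}} \times \frac{0.25\,\mathrm{m}}{2.75\,\mathrm{m}} \approx 450\,\mathrm{N} && \text{(hand force at the end of the bar)}
\end{aligned}

About 450 newtons is roughly 45 kilograms of downward push, well within the strength of one person, which is exactly the point of the demonstration.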

The apocalypse innovation exercise is one way of challenging orthodoxies and is quite simple (a minimal worksheet sketch follows the steps below):

  1. Identify a technology or input that is key to your product or service achieving its goal
  2. Concoct a simple reason why this technology no longer functions or this input is no longer available
  3. Have the group begin to ideate alternative inputs that could be used, or alternate technologies that could be leveraged or developed, to make the product or service achieve its goal again (If you are looking for a new technology, what are the first principles you could go back to? And what other technology paths could you explore instead? For example, acoustic levitation instead of electromagnetic levitation.)
  4. Pick one from the list of available options
  5. Re-engage the group to backcast what it will take to replace the existing technology or input with this new one (NOTE: backcasting is the practice of working backwards to show how an outcome will be achieved)
  6. Sketch out how the product or service will change as a result of using this new technology or input
  7. Brainstorm ways that this change can be positioned as a benefit for customers
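To make the exercise concrete, here is a minimal worksheet sketch in Python. The class, fields, and example values are all illustrative assumptions, not part of any formal methodology:

```python
from dataclasses import dataclass, field

@dataclass
class ApocalypseWorksheet:
    """Captures the seven steps of the apocalypse innovation exercise.
    All names here are illustrative, not an official template."""
    key_technology: str                 # Step 1: technology or input your offering depends on
    failure_scenario: str               # Step 2: why it no longer functions or is unavailable
    alternatives: list = field(default_factory=list)         # Step 3: ideated substitutes
    chosen_alternative: str = ""        # Step 4: the option the group picks
    backcast_milestones: list = field(default_factory=list)  # Step 5: milestones, working backwards
    redesign_notes: str = ""            # Step 6: how the product or service changes
    customer_benefits: list = field(default_factory=list)    # Step 7: the change framed as benefits

# Hypothetical usage for a delivery-routing product:
sheet = ApocalypseWorksheet(
    key_technology="GPS positioning for delivery routing",
    failure_scenario="A solar storm permanently disables satellite navigation",
    alternatives=["inertial navigation", "cell-tower trilateration", "printed route books"],
)
sheet.chosen_alternative = "cell-tower trilateration"
sheet.backcast_milestones = [
    "Routes re-plannable without GPS fixes",
    "Tower-signal positioning accurate to one city block",
    "Pilot fleet running on the fallback stack",
]
```

Filling in a structure like this, one worksheet per threatened technology, keeps the group honest about steps 5 through 7 rather than stopping at ideation.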

Apocalypse innovation can be a valuable exercise for products or services approaching the upper flattening of the traditional ‘S’ curve that pretty much all innovations go through, and it represents one way to find your way to the steeper part of a new ‘S’ curve.

What other exercises do you like to use to help people challenge orthodoxies?

If you’d like to sign up to learn more about my new FutureHacking™ methodology and set of tools, go here.

Build a Common Language of Innovation on your team

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.






Moneyball and the Beginning, Middle, and End of Innovation

GUEST POST from Robyn Bolton

Recently, pitchers and catchers reported to MLB Spring Training facilities in Florida and Arizona.  For baseball fans, this is the first sign of Spring, an occasion that heralds months of warmth and sunshine, ballparks filled (hopefully) with cheering fans, dinners of beers and brats, and the undying belief that this year will be the year.

Of course, there was still a lot of dark, dreary cold between then and Opening Day.  Perfect weather for watching baseball movies: Bull Durham, Major League, The Natural, Field of Dreams, and, of course, Moneyball.

Moneyball is based on the book of the same name by Michael Lewis and chronicles the 2002 Oakland Athletics season.  The ’02 Oakland A’s, led by General Manager Billy Beane (played by Brad Pitt), forever changed baseball by adopting an approach that valued rigorous statistical analysis over the collective wisdom of baseball insiders (coaches, scouts, front office personnel) when building a team.  This approach, termed “Moneyball,” enabled the A’s to reach the postseason with a team that cost only $44M in salary, compared to the NY Yankees, who spent $125M to achieve the same outcome.

While the whole movie (and book) is a testament to the courage and perseverance required to challenge and change the status quo, time and again I come back to three lines that perfectly sum up the journey of every successful intrapreneur I’ve ever met.

The Beginning

“I know you’ve taken it in the teeth out there, but the first guy through the wall…he always gets bloody…always always gets bloody.  This is threatening not just a way of doing business… but in their minds, it’s threatening the game. Really what it’s threatening is their livelihood, their jobs. It’s threatening the way they do things… and every time that happens, whether it’s the government, a way of doing business, whatever, the people who are holding the reins – they have their hands on the switch – they go batshit crazy.”

John Henry, Owner of the Boston Red Sox

Context

The 2002 season is over, and the A’s have been eliminated in the first round of the playoffs.  John Henry, an owner of the Boston Red Sox, has invited Billy Beane to Boston to offer him the Red Sox GM job.

Lesson

This is what you sign up for when you decide to be an Intrapreneur.  The more you challenge the status quo, the more you question how business is done, the more you ask Why and demand an answer, the closer you get to “tak(ing) it in the teeth.”

This is why courage, perseverance, and an unshakeable belief that things can and should be better are absolutely essential for intrapreneurs.  Your job is to run at the wall over and over until you get through it.

People will follow.  The Red Sox did.  They won the World Series in 2004, breaking an 86-year-old curse.

The Middle

“It’s a process, it’s a process, it’s a process”

Billy Beane

Context

Billy has to convince the ballplayers to forget all the habits that made them great and embrace the philosophy of Moneyball: to stop stealing bases, turning double plays on bunts, and swinging for the fences, and to start taking walks, throwing to first for the easy out, and prioritizing getting on base over hitting home runs.

The players are confused and frustrated.  Suddenly, everything they once did right is wrong, and what was once not valued is deeply prized.

Lesson

Innovation is something new that creates value.  Something new doesn’t just require change; it requires people to stop doing things that work and start doing things that seem strange or even wrong.

Change doesn’t happen overnight.  It’s not a switch to be flipped.  It’s a process to be learned.  It takes time, practice, reminders, and patience.

The End

“When you get an answer you’re looking for, hang up.”

Billy Beane

Context

In this scene, Billy has offered one of his players to multiple teams, searching for the best deal.  When the phone rings with a deal he likes, he and the other General Manager (GM) agree to it, and Billy hangs up, even though the other GM is still mid-sentence.  When Peter Brand, the Assistant GM played by Jonah Hill, points out that Billy just hung up on the other GM, Billy responds with this nugget of wisdom.

Lesson

It’s advice intrapreneurs should take very much to heart.  I often see innovation teams walk into management reviews with long presentations, full of data and projections, anxious to share their progress and hoping for continued funding and support.  When the meeting starts, a senior exec will say something like, “We’re excited by the progress we’re hearing about and what it will take to continue.”

That’s the cue to “hang up.”

Instead of starting the presentation from the beginning, start with “what it will take to continue.”  You got the answer you were looking for (they’re excited about the progress you’ve made), so don’t spend time giving them info they already have or, worse, info that could raise questions and dim their enthusiasm.  Hang up on the conversation you want to have and have the conversation they want to have.

In closing

Moneyball was an innovation that fundamentally changed one of the most tradition-bound businesses in sports.  To be successful, it required someone willing to take it in the teeth, to coach people through a process, and to hang up when he got the answer he wanted.  It wasn’t easy, but real change rarely is.

The same is true in corporations.  They need their own Billy Beanes.

Are you willing to step up to the plate?

Image credits: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.