Why Most Corporate Innovation Programs Fail

(And How To Make Them Succeed)

GUEST POST from Greg Satell

Today everybody needs to innovate, so it shouldn’t be surprising that corporate innovation programs have become wildly popular. Because there is an inherent tradeoff between innovation and the type of optimization that operational executives excel at, creating a separate unit to address innovation makes intuitive sense.

Yet corporate innovation programs often fail, and it’s not hard to see why. Unlike other business functions, such as marketing or finance, innovation is something that everybody in a healthy organization takes pride in. Setting up a separate innovation unit can often seem like an affront to those who work hard to innovate in operational units.

Make no mistake, a corporate innovation program is no panacea. It doesn’t replace the need to innovate every day. Yet a well-designed program can augment those efforts, take the business in new directions and create real value. The key to a successful innovation program is a clear vision, built on shared purpose, that solves important problems.

A Good Innovation Program Extends, It Doesn’t Replace

It’s no secret that Alphabet is one of the most powerful companies in the world. Nevertheless, it has a vulnerability that is often overlooked. Much like Xerox and Kodak decades ago, it’s highly dependent on a single revenue stream. In 2018, 86% of its revenues came from advertising, mostly from its Google search business.

It is with this in mind that the company created its X division. Because the unit was set up to pursue opportunities outside of its core search business, it didn’t encounter significant resistance. In fact, the X division is widely seen as an extension of what made Alphabet so successful in the first place.

Another important aspect is that the X division provides a platform to incubate internal projects. For example, Google Brain started out as a “20% time project.” As it progressed and needed more resources, it was moved to the X division, where it was scaled up further. Eventually, it returned to the mothership and today is an integral part of the core business.

Notice how the vision of the X division was never to replace innovation efforts in the core business, but to extend them. That’s been a big part of its success and has led to exciting new businesses like Waymo autonomous vehicles and the Verily healthcare division.

Focus On Commonality, Not Difference

All too often, innovation programs thrive on difference. They are designed to put together a band of mavericks and disruptors who think differently than the rest of the organization. That may be great for instilling a strong esprit de corps among those involved with the innovation program, but it’s likely to alienate others.

As I explain in Cascades, any change effort must be built on shared purpose and shared values. That’s how you build trust and form the basis for effective collaboration between the innovation program and the rest of the organization. Without those bonds of trust, any innovation effort is bound to fail.

You can see how that works in Alphabet’s X division. It is not seen as fundamentally different from the core Google business, but rather as channeling the company’s strengths in new directions. The business opportunities it pursues may be different, but the core values are the same.

The key question to ask is why you need a corporate innovation program in the first place. If the answer is that you don’t feel your organization is innovative enough, then you need to address that problem first. A well-designed innovation program can’t be a band-aid for larger issues within the core business.

Executive Sponsorship Isn’t Enough

Clearly, no corporate innovation program can be successful without strong executive sponsorship. Commitment has to come from the top. Yet just as clearly, executive sponsorship isn’t enough. Unless you can build support among key stakeholders inside and outside the organization, support from the top is bound to erode.

For example, when Eric Haller started Datalabs at Experian, he designed it to be focused on customers, rather than ideas developed internally. “We regularly sit down with our clients and try and figure out what’s causing them agita,” he told me, “because we know that solving problems is what opens up enormous business opportunities for us.”

Because the Datalabs unit works directly with customers to solve problems that are important to them, it has strong support from a key stakeholder group. Another important aspect of Datalabs is that once a project gets beyond the prototype stage, it goes to one of the operational units within the company to be scaled up into a real business. Over the past five years, businesses originated at Datalabs have added over $100 million in new revenues.

Just as importantly, Haller is acutely aware of how innovation programs can cause resentment, so he works hard to reduce tensions by building collaborations across the organization. Datalabs is not where “innovation happens” at Experian. Rather, it serves to augment and expand capabilities that were already there.

Don’t Look For Ideas, Identify Meaningful Problems

Perhaps most importantly, an innovation program should not be seen as a place to generate ideas. The truth is that ideas can come from anywhere. So designating one particular program in which ideas are supposed to happen will not only alienate the rest of the organization, but it is also likely to overlook important ideas generated elsewhere.

The truth is that innovation isn’t about ideas. It’s about solving problems. In researching my book, Mapping Innovation, I came across dozens of stories from every conceivable industry and field, and each one started with someone who encountered a problem they wanted to solve. Sometimes it happened by chance, but in most cases I found that great innovators were actively looking for problems that interested them.

If you look at successful innovation programs like Alphabet’s X division and Experian’s Datalabs, the fundamental activity is exploration. X division explores domains outside of search, while Datalabs explores problems that its customers need solved. Once you identify a meaningful problem, the ideas will come.

That’s the real potential of innovation programs. They provide a space to explore areas that don’t fit with the current business, but may play an important role in its future. A good innovation program doesn’t replace capabilities in the core organization, but leverages them to create new opportunities.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Responsible Innovation

Building Trust in a Technologically Advanced World

GUEST POST from Art Inteligencia

In our headlong rush toward the future, fueled by the relentless pace of technological advancement, we have a tendency to celebrate innovation for its speed and scale. We champion the next disruptive app, the more powerful AI model, or the seamless new user experience. But as a human-centered change and innovation thought leader, I believe we are at a critical inflection point. The question is no longer just, “Can we innovate?” but rather, “Should we?” and “How can we do so responsibly?” The future belongs not to the fastest innovators, but to the most trusted. Responsible innovation — a discipline that prioritizes ethics, human well-being, and social impact alongside commercial success — is the only sustainable path forward in a world where public trust is both fragile and invaluable.

The history of technology is littered with examples of innovations that, despite their potential, led to unintended and often harmful consequences. From social media algorithms that polarize societies to AI systems that perpetuate bias, the “move fast and break things” mantra has proven to be an unsustainable and, at times, dangerous philosophy. The public is growing weary. A lack of trust can lead to user backlash, regulatory intervention, and a complete rejection of a technology, no matter how clever or efficient it may be. The single greatest barrier to a new technology’s adoption isn’t its complexity, but the public’s perception of its integrity and safety. Therefore, embedding responsibility into the innovation process isn’t just an ethical consideration; it’s a strategic imperative for long-term survival and growth.

The Pillars of Responsible Innovation

Building a culture of responsible innovation requires a proactive and holistic approach, centered on four key pillars:

  • Ethical by Design: Integrate ethical considerations from the very beginning of the innovation process, not as an afterthought. This means asking critical questions about potential biases, unintended consequences, and the ethical implications of a technology before a single line of code is written.
  • Transparent and Accountable: Be clear about how your technology works, what data it uses, and how decisions are made. When things go wrong, take responsibility and be accountable for the outcomes. Transparency builds trust.
  • Human-Centered and Inclusive: Innovation must serve all of humanity, not just a select few. Design processes must include diverse perspectives to ensure solutions are inclusive, accessible, and do not inadvertently harm marginalized communities.
  • Long-Term Thinking: Look beyond short-term profits and quarterly results. Consider the long-term societal, environmental, and human impact of your innovation. This requires foresight and a commitment to creating lasting, positive value.

“Trust is the currency of the digital age. Responsible innovation is how we earn it, one ethical decision at a time.”

Integrating Responsibility into Your Innovation DNA

This is a cultural shift, not a checklist. It demands that leaders and teams ask new questions and embrace new metrics of success:

  1. Establish Ethical AI/Innovation Boards: Create a cross-functional board that includes ethicists, sociologists, and community representatives to review new projects from a non-technical perspective.
  2. Implement an Ethical Innovation Framework: Develop a formal framework that requires teams to assess and document the potential societal impact, privacy risks, and fairness implications of their work (a minimal sketch of what such an assessment might look like follows this list).
  3. Reward Responsible Behavior: Adjust performance metrics to include not just commercial success, but also a project’s adherence to ethical principles and positive social impact.
  4. Cultivate a Culture of Candor: Foster a psychologically safe environment where employees feel empowered to raise ethical concerns without fear of retribution.
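
To make the second step a bit more tangible, here is a minimal sketch of what such an assessment record might look like in code, assuming a simple gate that blocks a project until every section is documented and reviewed. All names and fields are hypothetical illustrations, not any particular company’s actual framework:

```python
from dataclasses import dataclass

@dataclass
class EthicalAssessment:
    """Hypothetical per-project assessment record; every field name is illustrative."""
    project: str
    societal_impact: str = ""    # expected effects on users and communities
    privacy_risks: str = ""      # what data is collected, retained, and exposed
    fairness_notes: str = ""     # known biases and planned mitigations
    board_signoff: bool = False  # review by the cross-functional ethics board

    def ready_to_proceed(self) -> bool:
        # The framework gates a project until every section is documented
        # and the review board has signed off.
        documented = all([self.societal_impact, self.privacy_risks, self.fairness_notes])
        return documented and self.board_signoff

# An undocumented project is blocked until its assessment is complete.
assessment = EthicalAssessment(project="recommendation-engine-v2")
print(assessment.ready_to_proceed())  # False
```

The point is not the code itself but the gate: documented ethical review becomes a precondition for proceeding, not an afterthought.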

Case Study 1: The Facial Recognition Debates – Ethical Innovation in Action

The Challenge:

Facial recognition technology is incredibly powerful, with potential applications ranging from unlocking smartphones to enhancing public safety. However, it also presents significant ethical challenges, including the potential for mass surveillance, privacy violations, and algorithmic bias that disproportionately misidentifies people of color and women. Companies were innovating at a rapid pace, but without a clear ethical compass, leading to public outcry and a lack of trust.

The Responsible Innovation Response:

In response to these concerns, some tech companies and cities took a different approach. Instead of a “deploy first, ask questions later” strategy, they implemented moratoriums and initiated a public dialogue. Microsoft, for example, proactively called for federal regulation of the technology and refused to sell its facial recognition software to certain law enforcement agencies, demonstrating a commitment to ethical principles over short-term revenue.

  • Proactive Regulation: They acknowledged the technology was too powerful and risky to be left unregulated, effectively inviting government oversight.
  • Inclusion of Stakeholders: The debate moved beyond tech company boardrooms to include civil rights groups, academics, and the public, ensuring a more holistic and human-centered discussion.
  • A Commitment to Fairness: Researchers at companies like IBM and Microsoft worked to improve the fairness of their algorithms, publicly sharing their findings to contribute to a better, more ethical industry standard.

The Result:

While the debate is ongoing, this shift toward responsible innovation has helped to build trust and has led to a more nuanced public understanding of the technology. By putting ethical guardrails in place and engaging in public discourse, these companies are positioning themselves as trustworthy partners in a developing market. They recognized that sustainable innovation is built on a foundation of trust, not just technological prowess.


Case Study 2: The Evolution of Google’s Self-Driving Cars (Waymo)

The Challenge:

From the outset, self-driving cars presented a complex set of ethical dilemmas. How should the car be programmed to act in a no-win scenario? What if it harms a pedestrian? How can the public trust a technology that is still under development, and how can a company be transparent about its safety metrics without revealing proprietary information?

The Responsible Innovation Response:

Google’s self-driving car project, now Waymo, has been a leading example of responsible innovation. Instead of rushing to market, they prioritized safety, transparency, and a long-term, human-centered approach.

  • Prioritizing Safety over Speed: Waymo’s vehicles have a human driver in the car at all times to take over in case of an emergency. This is a deliberate choice to prioritize safety above a faster, more automated rollout. They transparently share their data on “disengagements” (when the human driver takes over) to show their progress; a minimal sketch of how such a metric might be computed follows this list.
  • Community Engagement: Waymo has engaged with local communities, holding workshops and public forums to address concerns about job losses, safety, and the role of autonomous vehicles in public life.
  • Ethical Framework: They have developed a clear ethical framework for their technology, including a commitment to minimizing harm, respecting local traffic laws, and being transparent about their performance.
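
To illustrate the kind of transparency metric described in the first point, here is a minimal sketch of how a miles-per-disengagement figure might be computed from self-reported data. The numbers and structure are hypothetical, not Waymo’s actual reporting format:

```python
# Hypothetical monthly reports: (autonomous miles driven, disengagement count).
reports = [
    (35_000, 4),
    (42_000, 3),
    (51_000, 2),
]

total_miles = sum(miles for miles, _ in reports)
total_disengagements = sum(count for _, count in reports)

# Miles per disengagement: higher is better, and publishing the trend over
# time is what turns a raw number into a transparency tool.
miles_per_disengagement = total_miles / total_disengagements
print(f"{miles_per_disengagement:,.0f} autonomous miles per disengagement")
```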

The Result:

By taking a slow, deliberate, and transparent approach, Waymo has built a high degree of trust with the public and with regulators. They are not the fastest to market, but their approach has positioned them as the most credible and trustworthy player in a high-stakes industry. Their focus on responsible development has not been a barrier to innovation; it has been the very foundation of their long-term viability, proving that trust is the ultimate enabler of groundbreaking technology.


Conclusion: Trust is the Ultimate Innovation Enabler

In a world of breathtaking technological acceleration, our greatest challenge is not in creating the next big thing, but in doing so in a way that builds, rather than erodes, public trust. Responsible innovation is not an optional extra or a marketing ploy; it is a fundamental business strategy for long-term success. It requires a shift from a “move fast and break things” mentality to a “slow down and build trust” philosophy.

Leaders must champion a new way of thinking—one that integrates ethics, inclusivity, and long-term societal impact into the core of every project. By doing so, we will not only build better products and services but also create a more resilient, equitable, and human-centered future. The most powerful innovation is not just what we create, but how we create it. The time to be responsible is now.

Extra Extra: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: Pixabay

The Ethics of AI in Innovation

GUEST POST from Chateau G Pato

In today’s rapidly evolving technological landscape, artificial intelligence (AI) plays a pivotal role in driving innovation. From healthcare and transportation to education and finance, AI’s potential to transform industries is unparalleled. However, with great power comes great responsibility. As we harness the capabilities of AI, we must also grapple with the ethical implications that accompany its use. This article delves into the ethical considerations of AI in innovation and presents two case studies that highlight the challenges and solutions within this dynamic field.

Understanding AI Ethics

AI ethics refers to the moral principles and guidelines that govern the development, deployment, and use of AI technologies. These principles aim to ensure that AI systems are designed and used in ways that are fair, transparent, and accountable. AI ethics also demands that we consider the potential biases in AI algorithms, the impact on employment, privacy concerns, and the long-term societal implications of AI-driven innovations.

Case Study 1: Healthcare AI – The IBM Watson Experience

IBM Watson, a powerful AI platform, made headlines with its potential to revolutionize healthcare. With the ability to analyze vast amounts of medical data and provide treatment recommendations, Watson promised to assist doctors in diagnosing and treating diseases more effectively.

However, the rollout of Watson in healthcare settings raised significant ethical questions. Firstly, there were concerns about the accuracy of the recommendations. Critics pointed out that Watson’s training data could be biased, potentially leading to flawed medical advice. Additionally, the opaque nature of AI decision-making posed challenges in accountability, especially in life-or-death scenarios.

IBM addressed these ethical issues by emphasizing transparency and collaboration with healthcare professionals. They implemented rigorous validation procedures and incorporated feedback from medical practitioners to refine Watson’s algorithms. This approach highlighted the importance of involving domain experts in the development process, ensuring that AI systems align with ethical standards and practical realities.

Case Study 2: Autonomous Vehicles – Google’s Waymo Journey

Waymo, Google’s self-driving car project, embodies the promise of AI in redefining urban transportation. Autonomous vehicles have the potential to enhance road safety and reduce traffic congestion. Nevertheless, they also bring forth ethical dilemmas that warrant careful consideration.

A key ethical challenge is the moral decision-making inherent in self-driving technology. In complex traffic situations, these AI-driven vehicles must make split-second decisions that could result in harm. The “trolley problem”—a classic ethical thought experiment—illustrates the dilemma of choosing between two harmful outcomes. For instance, should a self-driving car prioritize the safety of its passengers over pedestrians?

Waymo addresses these ethical concerns by implementing a robust ethical framework and engaging with stakeholders, including ethicists, regulators, and the general public. By fostering open dialogue, Waymo seeks to balance technical innovation with societal values, ensuring that their AI systems operate ethically and safely.

Principles for Ethical AI Innovation

As we navigate the ethical landscape of AI, several guiding principles can help steer innovation in a responsible direction:

  • Transparency: AI systems should be designed with transparency at their core, enabling users to understand the decision-making processes and underlying data.
  • Fairness: Developers must proactively address biases in AI algorithms to prevent discriminatory outcomes (one common bias check is sketched after this list).
  • Accountability: Clear accountability mechanisms should be established to ensure that stakeholders can address any misuse or failure of AI technologies.
  • Collaboration: Cross-disciplinary collaboration involving technologists, ethicists, industry leaders, and policymakers is essential to fostering ethical AI innovation.
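
To make the fairness principle concrete, here is a minimal sketch of one common bias check, the demographic parity difference, which compares positive-outcome rates across groups. The data and the flagging threshold are hypothetical:

```python
from collections import defaultdict

def demographic_parity_difference(outcomes):
    """Largest gap in positive-outcome rate between any two groups.

    `outcomes` is a list of (group, approved) pairs; a value near 0
    suggests similar approval rates across groups.
    """
    totals = defaultdict(int)
    positives = defaultdict(int)
    for group, approved in outcomes:
        totals[group] += 1
        positives[group] += int(approved)
    rates = [positives[g] / totals[g] for g in totals]
    return max(rates) - min(rates)

# Hypothetical loan-approval decisions tagged by demographic group.
decisions = [("a", True), ("a", True), ("a", False),
             ("b", True), ("b", False), ("b", False)]
gap = demographic_parity_difference(decisions)
print(f"parity gap: {gap:.2f}")  # e.g., flag for review above an agreed threshold such as 0.10
```

A check like this is only a starting point: fairness has several competing statistical definitions, and choosing which one applies is itself an ethical decision.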

Conclusion

The integration of AI into our daily lives and industries presents both immense opportunities and complex ethical challenges. By thoughtfully addressing these ethical concerns, we can unleash the full potential of AI while safeguarding human values and societal well-being. As leaders in AI innovation, we must dedicate ourselves to building systems that are not only groundbreaking but also ethically sound, paving the way for a future where technology serves all of humanity.

In a world driven by AI, ethical innovation is not just an option—it’s a necessity. Through continuous dialogue, collaboration, and adherence to ethical principles, we can ensure that AI becomes a force for positive change, empowering people and societies worldwide.

Extra Extra: Because innovation is all about change, Braden Kelley’s human-centered change methodology and tools are the best way to plan and execute the changes necessary to support your innovation and transformation efforts — all while literally getting everyone on the same page for change. Find out more about the methodology and tools, including the book Charting Change, by following the link. Be sure to download the TEN FREE TOOLS while you’re here.

Image credit: Microsoft CoPilot
