Tag Archives: Artificial Intelligence

Top 100 Innovation and Transformation Articles of 2023


2021 marked the rebirth of my original Blogging Innovation blog as a new blog called Human-Centered Change and Innovation.

Many of you may know that Blogging Innovation grew into the world’s most popular global innovation community before being re-branded as InnovationExcellence.com and ultimately being sold to DisruptorLeague.com.

Thanks to an outpouring of support, I’ve ignited the fuse of this new multiple-author blog around the topics of human-centered change, innovation, transformation and design.

I feel blessed that the global innovation and change professional communities have responded with a growing roster of contributing authors and more than 17,000 newsletter subscribers.

To celebrate, we’ve pulled together the Top 100 Innovation and Transformation Articles of 2023 from our archive of over 1,800 articles on these topics.

We do some other rankings too.

We just published the Top 40 Innovation Bloggers of 2023, and as the volume of this blog has grown, we have brought back our monthly article ranking to complement this annual one.

But enough delay, here are the 100 most popular innovation and transformation posts of 2023.

Did your favorite make the cut?

1. Fear is a Leading Indicator of Personal Growth – by Mike Shipulski

2. The Education Business Model Canvas – by Arlen Meyers

3. Act Like an Owner – Revisited! – by Shep Hyken

4. Free Innovation Maturity Assessment – by Braden Kelley

5. The Role of Stakeholder Analysis in Change Management – by Art Inteligencia

6. What is Human-Centered Change? – by Braden Kelley

7. Sustaining Imagination is Hard – by Braden Kelley

8. The One Movie All Electric Car Designers Should Watch – by Braden Kelley

9. 50 Cognitive Biases Reference – Free Download – by Braden Kelley

10. A 90% Project Failure Rate Means You’re Doing it Wrong – by Mike Shipulski

11. No Regret Decisions: The First Steps of Leading through Hyper-Change – by Phil Buckley

12. Reversible versus Irreversible Decisions – by Farnam Street

13. Three Maps to Innovation Success – by Robyn Bolton

14. Why Most Corporate Innovation Programs Fail (And How To Make Them Succeed) – by Greg Satell

15. The Paradox of Innovation Leadership – by Janet Sernack

16. Innovation Management ISO 56000 Series Explained – by Diana Porumboiu

17. An Introduction to Journey Maps – by Braden Kelley

18. Sprint Toward the Innovation Action – by Mike Shipulski

19. Marriott’s Approach to Customer Service – by Shep Hyken

20. Should a Bad Grade in Organic Chemistry be a Doctor Killer? – NYU Professor Fired for Giving Students Bad Grades – by Arlen Meyers, M.D.

21. How Networks Power Transformation – by Greg Satell

22. Are We Abandoning Science? – by Greg Satell

23. A Tipping Point for Organizational Culture – by Janet Sernack

24. Latest Interview with the What’s Next? Podcast – with Braden Kelley

25. Scale Your Innovation by Mapping Your Value Network – by John Bessant

26. Leveraging Emotional Intelligence in Change Leadership – by Art Inteligencia

27. Visual Project Charter™ – 35″ x 56″ (Poster Size) and JPG for Online Whiteboarding – by Braden Kelley

28. Unintended Consequences. The Hidden Risk of Fast-Paced Innovation – by Pete Foley

29. A Shortcut to Making Strategic Trade-Offs – by Geoffrey A. Moore

30. 95% of Work is Noise – by Mike Shipulski


31. 8 Strategies to Future-Proofing Your Business & Gaining Competitive Advantage – by Teresa Spangler

32. The Nine Innovation Roles – by Braden Kelley

33. The Fail Fast Fallacy – by Rachel Audige

34. What is the Difference Between Signals and Trends? – by Art Inteligencia

35. A Top-Down Open Innovation Approach – by Geoffrey A. Moore

36. FutureHacking – Be Your Own Futurist – by Braden Kelley

37. Five Key Digital Transformation Barriers – by Howard Tiersky

38. The Malcolm Gladwell Trap – by Greg Satell

39. Four Characteristics of High Performing Teams – by David Burkus

40. ACMP Standard for Change Management® Visualization – 35″ x 56″ (Poster Size) – Association of Change Management Professionals – by Braden Kelley

41. 39 Digital Transformation Hacks – by Stefan Lindegaard

42. The Impact of Artificial Intelligence on Future Employment – by Chateau G Pato

43. A Triumph of Artificial Intelligence Rhetoric – Understanding ChatGPT – by Geoffrey A. Moore

44. Imagination versus Knowledge – Is imagination really more important? – by Janet Sernack

45. A New Innovation Sphere – by Pete Foley

46. The Pyramid of Results, Motivation and Ability – Changing Outcomes, Changing Behavior – by Braden Kelley

47. Three HOW MIGHT WE Alternatives That Actually Spark Creative Ideas – by Robyn Bolton

48. Innovation vs. Invention vs. Creativity – by Braden Kelley

49. Where People Go Wrong with Minimum Viable Products – by Greg Satell

50. Will Artificial Intelligence Make Us Stupid? – by Shep Hyken


51. A Global Perspective on Psychological Safety – by Stefan Lindegaard

52. Customer Service is a Team Sport – by Shep Hyken

53. Top 40 Innovation Bloggers of 2022 – Curated by Braden Kelley

54. A Flop is Not a Failure – by John Bessant

55. Generation AI Replacing Generation Z – by Braden Kelley

56. ‘Innovation’ is Killing Innovation. How Do We Save It? – by Robyn Bolton

57. Ten Ways to Make Time for Innovation – by Nick Jain

58. The Five Keys to Successful Change – by Braden Kelley

59. Back to Basics: The Innovation Alphabet – by Robyn Bolton

60. The Role of Stakeholder Analysis in Change Management – by Art Inteligencia

61. Will ChatGPT make us more or less innovative? – by Pete Foley

62. 99.7% of Innovation Processes Miss These 3 Essential Steps – by Robyn Bolton

63. Rethinking Customer Journeys – by Geoffrey A. Moore

64. Reasons Change Management Frequently Fails – by Greg Satell

65. The Experiment Canvas™ – 35″ x 56″ (Poster Size) – by Braden Kelley

66. AI Has Already Taken Over the World – by Braden Kelley

67. How to Lead Innovation and Embrace Innovative Leadership – by Diana Porumboiu

68. Five Questions All Leaders Should Always Be Asking – by David Burkus

69. Latest Innovation Management Research Revealed – by Braden Kelley

70. A Guide to Effective Brainstorming – by Diana Porumboiu

71. Unlocking the Power of Imagination – How Humans and AI Can Collaborate for Innovation and Creativity – by Teresa Spangler

72. Rise of the Prompt Engineer – by Art Inteligencia

73. Taking Care of Yourself is Not Impossible – by Mike Shipulski

74. Design Thinking Facilitator Guide – A Crash Course in the Basics – by Douglas Ferguson

75. What Have We Learned About Digital Transformation Thus Far? – by Geoffrey A. Moore

76. Building a Better Change Communication Plan – by Braden Kelley

77. How to Determine if Your Problem is Worth Solving – by Mike Shipulski

78. Increasing Organizational Agility – by Braden Kelley

79. Mystery of Stonehenge Solved – by Braden Kelley

80. Agility is the 2023 Success Factor – by Soren Kaplan


81. The Five Gifts of Uncertainty – by Robyn Bolton

82. 3 Innovation Types Not What You Think They Are – by Robyn Bolton

83. Using Limits to Become Limitless – by Rachel Audige

84. What Disruptive Innovation Really Is – by Geoffrey A. Moore

85. Today’s Customer Wants to Go Fast – by Shep Hyken

86. The 6 Building Blocks of Great Teams – by David Burkus

87. Unlock Hundreds of Ideas by Doing This One Thing – Inspired by Hollywood – by Robyn Bolton

88. Moneyball and the Beginning, Middle, and End of Innovation – by Robyn Bolton

89. There are Only 3 Reasons to Innovate – Which One is Yours? – by Robyn Bolton

90. A Shortcut to Making Strategic Trade-Offs – by Geoffrey A. Moore

91. Customer Experience Personified – by Braden Kelley

92. 3 Steps to a Truly Terrific Innovation Team – by Robyn Bolton

93. Building a Positive Team Culture – by David Burkus

94. Apple Watch Must Die – by Braden Kelley

95. Kickstarting Change and Innovation in Uncertain Times – by Janet Sernack

96. Take Charge of Your Mind to Reclaim Your Potential – by Janet Sernack

97. Psychological Safety, Growth Mindset and Difficult Conversations to Shape the Future – by Stefan Lindegaard

98. 10 Ways to Rock the Customer Experience In 2023 – by Shep Hyken

99. Artificial Intelligence is Forcing Us to Answer Some Very Human Questions – by Greg Satell

100. 23 Ways in 2023 to Create Amazing Experiences – by Shep Hyken

Curious which article just missed the cut? Well, here it is just for fun:

101. Why Business Strategies Should Not Be Scientific – by Greg Satell

These are the Top 100 innovation and transformation articles of 2023 based on the number of page views. If your favorite Human-Centered Change & Innovation article didn’t make the cut, then send a tweet to @innovate and maybe we’ll consider doing a People’s Choice List for 2023.

If you’re not familiar with Human-Centered Change & Innovation, we publish 1-6 new articles every week focused on human-centered change, innovation, transformation and design insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook feed or on Twitter or LinkedIn too!

Editor’s Note: Human-Centered Change & Innovation is open to contributions from any and all innovation & transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have a valuable insight to share with everyone for the greater good. If you’d like to contribute, contact us.

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Is AI Saving Corporate Innovation or Killing It?


GUEST POST from Robyn Bolton

AI is killing Corporate Innovation.

Last Friday, the brilliant minds of Scott Kirsner, Rita McGrath, and Alex Osterwalder (plus a few guest stars like me, no big deal) gathered to debate the truth of this statement.

Honestly, it was one of the smartest and most thoughtful debates on AI that I’ve heard (biased but right, as my husband would say), and you should definitely listen to the whole thing.

But if you don’t have time for the deep dive over your morning coffee, then here are the highlights (in my humble opinion).

Why this debate is important

Every quarter, InnoLead fields a survey to understand the issues and challenges facing corporate innovators.  The results from their Q2 survey and anecdotal follow-on conversations were eye-opening:

  • Resources are shifting from Innovation to AI: 61.5% of companies are increasing the resources allocated to AI, while 63.9% of companies are maintaining or decreasing their innovation investments.
  • IT is more likely to own AI than innovation: 61.5% of companies put IT in charge of exploring potential AI use cases, compared to the 53.9% that put their Innovation departments in charge (percentages sum to more than 100% because multiple departments may share responsibility).
  • Innovation departments are becoming AI departments. In fact, some former VPs and Directors of Innovation have been retitled VPs or Directors of AI.

So when Scott asked if AI was killing Corporate Innovation, the data said YES.

The people said NO.

What’s killing corporate innovation isn’t technology.  It’s leadership.

Alex Osterwalder didn’t pull his punches and delivered a truth bomb right at the start. As with all the innovation tools and technologies that came before, the impact of AI on innovation isn’t about the technology itself—it’s about the leaders driving it.

If executives take the time to understand AI as a tool that enables successful outcomes and accelerates the accomplishment of key strategies, then there is no reason for it to threaten, let alone supplant, innovation. 

But if they treat it like a shiny new toy or a silver bullet to solve all their growth needs, then it’s just “innovation theater” all over again.

AI is an Inflection Point that leaders need to approach strategically

As Rita wrote in her book Seeing Around Corners, an inflection point has a 10x impact on business, for example, making something 10x cheaper, 10x faster, or 10x easier. The emergence and large-scale adoption of AI is, without doubt, an inflection point for business.

Just like the internet and Netscape shook things up and changed the game, AI has the power to do the same—maybe even more. But, to Osterwalder’s point, leaders need to recognize AI as a strategic inflection point and proceed accordingly. 

Leaders don’t need to have it all figured out yet, but they need a plan, and that’s where we come in.

This inflection point is our time to shine

From what I’ve seen, AI isn’t killing corporate innovation. It’s creating the biggest corporate innovation opportunity in decades.  But it’s up to us, as corporate innovators, to seize the moment.

Unlike our colleagues in the core business, we are comfortable navigating ambiguity and uncertainty.  We have experience creating order from what seems like chaos and using innovation to grow today’s business and create tomorrow’s.

We can do this because we’ve done it before. It’s exactly what we do.

AI is not a problem.  It’s an opportunity.  But only if we make it one.

AI is not the end of corporate innovation—it’s a tool, a powerful one at that.

As corporate innovators, we have the skills and knowledge required to steer businesses through uncertainty and drive meaningful change. So, let’s embrace AI strategically and unlock its full potential.

The path forward may not always be crystal clear, but that’s what makes it exciting. So, let’s seize the moment, navigate the chaos, and embrace AI as the innovation accelerant that it is.

Image Credit: Pixabay


Framing Your 2024 Strategy


GUEST POST from Geoffrey A. Moore

Fall is in the air, which brings to mind the season’s favorite sport—no, not football, strategic planning! Let’s face it, 2023 has been a tough year for most of us, with few annual plans surviving first contact with an economy that was not so much sluggish as simply hesitant. With the exception of generative AI’s burst onto the scene, most technology sectors have been more or less trudging along, and that begs the question, what do we think we can do in 2024? Time to bring out the strategy frameworks, polish up those crystal balls that have been a bit murky of late, and chart our course forward.

This post will kick off a series of blogs about framing strategy, all organized around a meta-model we call the Hierarchy of Powers:

Geoffrey Moore Strategy Framework

The inspiration for this model came from looking at how investors prioritize their portfolios. The first thing they do is allocate by sector, based primarily on category power, referring both to the growth rate of the category and to its potential size. Rising tides float all boats, and one of the toughest challenges in business is how to manage a premier franchise when category growth is negative. In conjunction with assessing our current portfolio’s category power, this is also a time to look at adjacent categories, whether as threats or as opportunities, to see if there are any transformative acquisitions that deserve our immediate attention.

Returning to our current set of assets, within each category the next question to answer is, what is our company power within that category? This is largely a factor of market share. The more share a company has of a given category, the more likely the ecosystem of partners that supports the category is to focus first on that company’s installed base, adding more value to its offers, and to recommend that company’s products first, again because of the added leverage from partner engagement. Marketplaces, in other words, self-organize around category leaders, accelerating the sales and offloading the support costs of the market share leaders.

But what do you do when you don’t have company power? That’s when you turn your attention to market power. Marketplaces destabilize around problematic use cases that the incumbent vendors do not handle well. This creates openings for new entrants, provided they can authentically address the customer’s problems. The key is to focus product management on the whole product (not just what your enterprise supplies, but rather, everything the customer needs to be successful) and to focus your go-to-market engine on the target market segment. This is the playbook that has kept Crossing the Chasm on entrepreneurs’ book lists some thirty years in, but it is a different matter to execute it in a large enterprise where sales and marketing are organized for global coverage, not rifle-shot initiatives. Nonetheless, when properly executed, it is the most reliable play in all of high-tech market development.

If market power is key to taking market share, offer power is key to maintaining it, both in high-growth categories as well as mature ones. Offer power is a function of three disciplines—differentiation to create customer preference, neutralization to catch up to and reduce a competitor’s differentiation, and optimization to eliminate non-value-adding costs. Anything that does not contribute materially to one of these three outcomes is waste.

Finally, execution power is the ability to take advantage of one’s inertial momentum rather than having it take advantage of you. Here the discipline of zone management has proved particularly valuable to enterprises that are seeking to balance investment in their existing lines of business, typically in mature categories, with forays into new categories that promise higher growth.

In upcoming blog posts I am going to dive deeper into each of the five powers outlined above to share specific frameworks that clarify what decisions need to be made during the strategic planning process and what principles can best guide them. In the meantime, there is still one more quarter left in 2023, and we all must do our best to make the most of it.

That’s what I think. What do you think?

Image Credit: Pixabay, Geoffrey A. Moore

Innovation Evolution in the Era of AI


GUEST POST from Stefan Lindegaard

Half a decade ago, I laid out a perspective on the evolution of innovation. Now, I return to these reflections with a sentiment of both awe and unease as I observe the profound impacts of AI on innovation and business at large. The transformation unfolding before us presents a remarkable panorama of opportunities, yet it also carries with it the potential for disruption, hence the mixed feelings.

1. The Reign of R&D (1970-2015): There was a time when the Chief Technology Officer (CTO) held the reins. The focus was almost exclusively on Research and Development (R&D), with the power of the CTO often towering over the innovative impulses of the organization. Technology drove progress, but a tech-exclusive vision could sometimes be a hidden pitfall.

2. Era of Innovation Management (1990-2001): A shift towards understanding innovation as a strategic force began to emerge in the ’90s. The concept of managing innovation, previously only a flicker in the business landscape, began its journey towards being a guiding light. Pioneers like Christensen brought innovation into the educational mainstream, marking a paradigm shift in the mindsets of future business leaders.

3. Business Models & Customer Experience (2001-2008): The millennium ushered in an era where simply possessing superior technology wasn’t a winning card anymore. Process refinement, service quality, and most critically, innovative business models became the new mantra. Firms like Microsoft demonstrated this shift, evolving their strategies to stay competitive in this new game.

4. Ecosystems & Platforms (2008-2018): This phase saw the rise of ecosystems and platforms, representing a shift from isolated competition to interconnected collaboration. The lines that once defined industries began to blur. Companies from emerging markets, particularly China, became global players, and we saw industries morphing and intermingling. Case in point: was it still the automotive industry, or had the mobility industry arrived?

5. Corporate Transformation (2019-2025): With the onslaught of digital technologies, corporations faced the need to transform from within. Technological adoption wasn’t a mere surface-level change anymore; it demanded a thorough, comprehensive rethinking of strategies, structures, and processes. Anything less was simply insufficient to weather the storm of this digital revolution.

6. Comborg Transformation (2025-??): As we gaze into the future, the ‘Comborg’ era comes into view. This era sees organizations fusing human elements and digital capabilities into a harmonious whole. In this stage, the equilibrium between human creativity and AI-driven efficiency will be crucial, an exciting but challenging frontier to explore.

I believe that revisiting this timeline of innovation’s evolution highlights the remarkable journey we’ve undertaken. As we now figure out the role of AI in innovation and business, it’s an exciting but also challenging time. Even though it can be a bit scary, I believe we can create a successful future if we use AI in a responsible and thoughtful way.

Stefan Lindegaard Evolution of Innovation

Image Credit: Stefan Lindegaard, Unsplash

An Innovation Rant: Just Because You Can Doesn’t Mean You Should


GUEST POST from Robyn Bolton

Why are people so concerned about, afraid of, or resistant to new things?

Innovation, by its very nature, is good.  It is something new that creates value.

Naturally, the answer has nothing to do with innovation.

It has everything to do with how we experience it. 

And innovation without humanity is a very bad experience.

Over the last several weeks, I’ve heard so many stories of inhuman innovation that I have said, “I hate innovation” more than once.

Of course, I don’t mean that (I would be at an extraordinary career crossroads if I did).  What I mean is that I hate the choices we make about how to use innovation. 

Just because AI can filter resumes doesn’t mean you should remove humans from the process.

Years ago, I oversaw recruiting for a small consulting firm of about 50 people.  I was a full-time project manager, but given our size, everyone was expected to pitch in and take on extra responsibilities.  Because of our founder, we received more resumes than most firms our size, so I usually spent 2 to 3 hours a week reviewing them and responding to applicants.  It was usually boring, sometimes hilarious, and always essential because of our people-based business.

Would I have loved to have an AI system sort through the resumes for me?  Absolutely!

Would we have missed out on incredible talent because they weren’t our “type”? Absolutely!

AI judges a resume based on keywords and other factors you program in.  This probably means that it filters out people who worked in multiple industries, aren’t following a traditional career path, or don’t have the right degree.

This also means that you are not accessing people who bring a new perspective to your business, who can make the non-obvious connections that drive innovation and growth, and who bring unique skills and experiences to your team and its ideas.

If you permit AI to find all your talent, pretty soon, the only talent you’ll have is AI.
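
To make the mechanics concrete, here is a minimal sketch in Python of the kind of keyword screen described above. It is a toy illustration, not any vendor’s actual product: the keywords, threshold, and sample resumes are all hypothetical.

    # A toy version of the keyword screen described above; the keywords,
    # threshold, and sample resumes are hypothetical.
    REQUIRED_KEYWORDS = {"mba", "consulting", "strategy"}  # the "right" pedigree
    MIN_MATCHES = 2

    def passes_screen(resume_text: str) -> bool:
        # Advance anyone whose resume mentions enough of the required keywords.
        words = set(resume_text.lower().split())
        return len(REQUIRED_KEYWORDS & words) >= MIN_MATCHES

    candidates = {
        "traditional": "MBA with five years in strategy consulting",
        "non-traditional": "Nurse turned product designer who led two startups",
    }

    for name, resume in candidates.items():
        print(name, "->", "advance" if passes_screen(resume) else "filtered out")

The candidate with the non-obvious background never reaches a human reviewer, which is exactly the loss of perspective described above.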

Just because you can ghost people doesn’t mean you should.

Rejection sucks.  When you reject someone, and they take it well, you still feel a bit icky and sad.  When they don’t take it well, as one of my colleagues said when viewing a response from a candidate who did not take the decision well, “I feel like I was just assaulted by a bag of feathers.  I’m not hurt.  I’m just shocked.”

So, I understand ghosting feels like the better option.  It’s not.  At best, it’s lazy, and at worst, it’s selfish.  Especially if you’re a big company using AI to screen resumes. 

It’s not hard to add a function that triggers a standard rejection email when the AI filters someone out.  It’s not that hard to have a pre-programmed email that can quickly be clicked and sent when a human makes a decision.
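
On that first point, a rejection trigger really is a few lines of code. Here is a minimal sketch, again in Python; send_rejection_email is a hypothetical stand-in for whatever mail service or applicant-tracking system a company actually uses.

    # Hypothetical sketch: send a standard rejection note whenever the
    # screen says no, instead of ghosting the candidate.
    REJECTION_TEMPLATE = (
        "Dear {name},\n\n"
        "Thank you for applying. We will not be moving forward with your "
        "application, but we appreciate your interest and wish you well.\n"
    )

    def send_rejection_email(address: str, body: str) -> None:
        # Stand-in for a real mail or applicant-tracking-system call.
        print(f"To: {address}\n{body}")

    def notify_if_rejected(name: str, address: str, passed_screen: bool) -> None:
        # One extra branch in the screening flow is all it takes.
        if not passed_screen:
            send_rejection_email(address, REJECTION_TEMPLATE.format(name=name))

    notify_if_rejected("Alex", "alex@example.com", passed_screen=False)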

The Golden Rule – do unto others as you would have done unto you – doesn’t apply to AI.  It does apply to you.

Just because you can stack bots on bots doesn’t mean you should.

At this point, we all know that our first interaction with customer service will be with a bot.  Whether it’s an online chatbot or an automated phone tree, the journey to a human is often long and frustrating. Fine.  We don’t like it, but we don’t have a choice.

But when a bot transfers us to a bot masquerading as a person?  Do you hate your customers that much?

Some companies do, as my husband and I discovered.  I was on the phone with one company trying to resolve a problem, and he was in a completely different part of the house on the phone with another company trying to fix a separate issue.  When I wandered to the room where my husband was to get information that the “person” I was talking to needed, I noticed he was on hold.  Then he started staring at me funny (not as unusual as you might think).  Then he asked me to put my call on speaker (that was unusual).  After listening for a few minutes, he said, “I’m talking to the same woman.”

He was right.  As we listened to each other’s calls, we heard the same “woman” with the same tenor of voice, unusual cadence of speech, and indecipherable accent.  We were talking to a bot.  It was not helpful.  It took each of us several days and several more calls to finally reach humans.  When that happened, our issues were resolved in minutes.

Just because innovation can doesn’t mean you should allow it to.

You are a human.  You know more than the machine knows (for now).

You are interacting with other humans who, like you, have a right to be treated with respect.

If you forget these things – how important you and your choices are and how you want to be treated – you won’t have to worry about AI taking your job.  You already gave it away.

Image Credit: Pexels


Top 10 Human-Centered Change & Innovation Articles of September 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are September’s ten most popular innovation posts:

  1. The Malcolm Gladwell Trap — by Greg Satell
  2. Where People Go Wrong with Minimum Viable Products — by Greg Satell
  3. Our People Metrics Are Broken — by Mike Shipulski
  4. Why You Don’t Need An Innovation Portfolio — by Robyn Bolton
  5. Do you have a fixed or growth mindset? — by Stefan Lindegaard
  6. Building a Psychologically Safe Team — by David Burkus
  7. Customer Wants and Needs Not the Same — by Shep Hyken
  8. The Hard Problem of Consciousness is Not That Hard — by Geoffrey A. Moore
  9. Great Coaches Do These Things — by Mike Shipulski
  10. How Not to Get in Your Own Way — by Mike Shipulski


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.


AI and the Productivity Paradox


GUEST POST from Greg Satell

In the 1970s and 80s, business investment in computer technology was increasing by more than twenty percent per year. Strangely, though, productivity growth decreased during the same period. Economists found this turn of events so strange that they called it the productivity paradox to underline their confusion.

Productivity growth would take off in the late 1990s, but then mysteriously drop again during the mid-aughts. At each juncture, experts would debate whether digital technology produced real value or if it was all merely a mirage. The debate would continue even as industry after industry was disrupted.

Today, that debate is over, but a new one is likely to begin over artificial intelligence. Much like in the early 1970s, we have increasing investment in a new technology, diminished productivity growth and “experts” predicting massive worker displacement. Yet now we have history and experience to guide us and can avoid making the same mistakes.

You Can’t Manage (Or Evaluate) What You Can’t Measure

The productivity paradox dumbfounded economists because it violated a basic principle of how a free market economy is supposed to work. If profit-seeking businesses continue to make substantial investments, you expect to see a return. Yet with IT investment in the 70s and 80s, firms continued to increase their investment with negligible measurable benefit.

A paper by researchers at the University of Sheffield sheds some light on what happened. First, productivity measures were largely developed for an industrial economy, not an information economy. Second, the value of those investments, while substantial, was a small portion of total capital investment. Third, the aggregate productivity numbers didn’t reflect differences in management performance.

Consider a widget company in the 1970s that invested in IT to improve service so that it could ship out products in less time. That would improve its competitive position and increase customer satisfaction, but it wouldn’t produce any more widgets. So, from an economic point of view, it wouldn’t be a productive investment. Rival firms might then invest in similar systems to stay competitive but, again, widget production would stay flat.

So firms weren’t investing in IT to increase productivity, but to stay competitive. Perhaps even more importantly, investment in digital technology in the 70s and 80s was focused on supporting existing business models. It wasn’t until the late 90s that we began to see significant new business models being created.

The Greatest Value Comes From New Business Models—Not Cost Savings

Things began to change when firms saw the possibility of shifting their approach. As Josh Sutton, CEO of Agorai, an AI marketplace, explained to me, “The businesses that won in the digital age weren’t necessarily the ones who implemented systems the best, but those who took a ‘digital first’ mindset to imagine completely new business models.”

He gives the example of the entertainment industry. Sure, digital technology revolutionized distribution, but merely putting your programming online is of limited value. The ones who are winning are reimagining storytelling and optimizing the experience for binge watching. That’s the real paradigm shift.

“One of the things that digital technology did was to focus companies on their customers,” Sutton continues. “When switching costs are greatly reduced, you have to make sure your customers are being really well served. Because so much friction was taken out of the system, value shifted to who could create the best experience.”

So while many companies today are attempting to leverage AI to provide similar service more cheaply, the really smart players are exploring how AI can empower employees to provide a much better service or even to imagine something that never existed before. “AI will make it possible to put powerful intelligence tools in the hands of consumers, so that businesses can become collaborators and trusted advisors, rather than mere service providers,” Sutton says.

It Takes An Ecosystem To Drive Impact

Another aspect of digital technology in the 1970s and 80s was that it was largely made up of standalone systems. You could buy, say, a mainframe from IBM to automate back office systems or, later, Macintoshes or PCs with some basic software to sit on employees’ desks, but that did little more than automate basic clerical tasks.

However, value creation began to explode in the mid-90s when the industry shifted from systems to ecosystems. Open source software, such as Apache and Linux, helped democratize development. Application developers began offering industry and process specific software and a whole cadre of systems integrators arose to design integrated systems for their customers.

We can see a similar process unfolding today in AI, as the industry shifts from one-size-fits-all systems like IBM’s Watson to a modular ecosystem of firms that provide data, hardware, software and applications. As the quality and specificity of the tools continues to increase, we can expect the impact of AI to increase as well.

In 1987, Robert Solow quipped that “You can see the computer age everywhere but in the productivity statistics,” and we’re at a similar point today. AI permeates our phones, smart speakers in our homes and, increasingly, the systems we use at work. However, we’ve yet to see a measurable economic impact from the technology. Much like in the 70s and 80s, productivity growth remains depressed. But the technology is still in its infancy.

We’re Just Getting Started

One of the most salient, but least discussed aspects of artificial intelligence is that it’s not an inherently digital technology. Applications like voice recognition and machine vision are, in fact, inherently analog. The fact that we use digital technology to execute machine learning algorithms is actually often a bottleneck.

Yet we can expect that to change over the next decade as new computing architectures, such as quantum computers and neuromorphic chips, rise to the fore. As these more powerful technologies replace silicon chips computing in ones and zeroes, value will shift from bits to atoms and artificial intelligence will be applied to the physical world.

“The digital technology revolutionized business processes, so it shouldn’t be a surprise that cognitive technologies are starting from the same place, but that’s not where they will end up. The real potential is driving processes that we can’t manage well today, such as in synthetic biology, materials science and other things in the physical world,” Agorai’s Sutton told me.

In 1987, when Solow made his famous quip, there was no consumer Internet, no World Wide Web and no social media. Artificial intelligence was largely science fiction. We’re at a similar point today, at the beginning of a new era. There’s still so much we don’t yet see, for the simple reason that so much has yet to happen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

The Hard Problem of Consciousness is Not That Hard


GUEST POST from Geoffrey A. Moore

We human beings like to believe we are special—and we are, but not as special as we might like to think. One manifestation of our need to be exceptional is the way we privilege our experience of consciousness. This has led to a raft of philosophizing which can be organized around David Chalmers’ formulation of “the hard problem.”

In case this is a new phrase for you, here is some context from our friends at Wikipedia:

“… even when we have explained the performance of all the cognitive and behavioral functions in the vicinity of experience—perceptual discrimination, categorization, internal access, verbal report—there may still remain a further unanswered question: Why is the performance of these functions accompanied by experience?”

— David Chalmers, Facing up to the problem of consciousness

The problem of consciousness, Chalmers argues, is two problems: the easy problems and the hard problem. The easy problems may include how sensory systems work, how such data is processed in the brain, how that data influences behavior or verbal reports, the neural basis of thought and emotion, and so on. The hard problem is the problem of why and how those processes are accompanied by experience. It may further include the question of why these processes are accompanied by that particular experience rather than another experience.

The key word here is experience. It emerges out of cognitive processes, but it is not completely reducible to them. For anyone who has read much in the field of complexity, this should not come as a surprise. All complex systems share the phenomenon of higher orders of organization emerging out of lower orders, as seen in the frequently used example of how cells, tissues, organs, and organisms all interrelate. Experience is just the next level.

The notion that explaining experience is a hard problem comes from locating it at the wrong level of emergence. Materialists place it too low—they argue it is reducible to physical phenomena, which is simply another way of denying that emergence is a meaningful construct. Shakespeare is reducible to quantum effects? Good luck with that.

Most people’s problem with explaining experience, on the other hand, is that they place it too high. They want to use their own personal experience as a grounding point. The problem is that our personal experience of consciousness is deeply inflected by our immersion in language, but it is clear that experience precedes language acquisition, as we see in our infants as well as our pets. Philosophers call such experiences qualia, and they attribute all sorts of ineluctable and mysterious qualities to them. But there is a much better way to understand what qualia really are—namely, the pre-linguistic mind’s predecessors to ideas. That is, they are representations of reality that confer strategic advantage to the organism that can host and act upon them.

Experience in this context is the ability to detect, attend to, learn from, and respond to signals from our environment, whether they be externally or internally generated. Experiences are what we remember. That is why they are so important to us.

Now, as language-enabled humans, we verbalize these experiences constantly, which is what leads us to locate them higher up in the order of emergence, after language itself has emerged. Of course, we do have experiences with language directly—lots of them. But we need to acknowledge that our identity as experiencers is not dependent upon, indeed precedes our acquisition of, language capability.

With this framework in mind, let’s revisit some of the formulations of the hard problem to see if we can’t nip them in the bud.

  • The hard problem of consciousness is the problem of explaining why and how we have qualia or phenomenal experiences. Our explanation is that qualia are mental abstractions of phenomenal experiences that, when remembered and acted upon, confer strategic advantage to organisms under conditions of natural and sexual selection. Prior to the emergence of brains, “remembering and acting upon” is a function of chemical signals activating organisms to alter their behavior and, over time, to privilege tendencies that reinforce survival. Once the brain emerges, chemical signaling is supplemented by electrical signaling to the same ends. There is no magic here, only a change of medium.
  • Annaka Harris poses the hard problem as the question of “how experience arise[s] out of non-sentient matter.” The answer to this question is, “level by level.” First sentience has to emerge from non-sentience. That happens with the emergence of life at the cellular level. Then sentience has to spread beyond the cell. That happens when chemical signaling enables cellular communication. Then sentience has to speed up to enable mobile life. That happens when electrical signaling enabled by nerves supplements chemical signaling enabled by circulatory systems. Then signaling has to complexify into meta-signaling, the aggregation of signals into qualia, remembered as experiences. Again, no miracles required.
  • Others, such as Daniel Dennett and Patricia Churchland believe that the hard problem is really more of a collection of easy problems, and will be solved through further analysis of the brain and behavior. If so, it will be through the lens of emergence, not through the mechanics of reductive materialism.
  • Consciousness is an ambiguous term. It can be used to mean self-consciousness, awareness, the state of being awake, and so on. Chalmers uses Thomas Nagel’s definition of consciousness: the feeling of what it is like to be something. Consciousness, in this sense, is synonymous with experience. Now we are in the language-inflected zone where we are going to get consciousness wrong because we are entangling it in levels of emergence that come later. Specifically, to experience anything as like anything else is not possible without the intervention of language. That is, likeness is not a quale; it is a language-enabled idea. Thus, when Thomas Nagel famously asked, “What is it like to be a bat?” he is posing a question that has meaning only for humans, never for bats.

Going back to the first sentence above, self-consciousness is another concept that has been language-inflected in that only human beings have selves. Selves, in other words, are creations of language. More specifically, our selves are characters embedded in narratives, and we use both the narratives and the character profiles to organize our lives. This is a completely language-dependent undertaking and thus not available to pets or infants. Our infants are self-sentient, but it is not until the little darlings learn language, hear stories, then hear stories about themselves, that they become conscious of their own selves as separate and distinct from other selves.

On the other hand, if we use the definitions of consciousness as synonymous with awareness or being awake, then we are exactly at the right level because both those capabilities are the symptoms of, and thus synonymous with, the emergence of consciousness.

  • Chalmers argues that experience is more than the sum of its parts. In other words, experience is irreducible. Yes, but let’s not be mysterious here. Experience emerges from the sum of its parts, just like any other layer of reality emerges from its component elements. To say something is irreducible does not mean that it is unexplainable.
  • Wolfgang Fasching argues that the hard problem is not about qualia, but about pure what-it-is-like-ness of experience in Nagel’s sense, about the very givenness of any phenomenal contents itself:

Today there is a strong tendency to simply equate consciousness with qualia. Yet there is clearly something not quite right about this. The “itchiness of itches” and the “hurtfulness of pain” are qualities we are conscious of. So, philosophy of mind tends to treat consciousness as if it consisted simply of the contents of consciousness (the phenomenal qualities), while it really is precisely consciousness of contents, the very givenness of whatever is subjectively given. And therefore, the problem of consciousness does not pertain so much to some alleged “mysterious, nonpublic objects”, i.e. objects that seem to be only “visible” to the respective subject, but rather to the nature of “seeing” itself (and in today’s philosophy of mind astonishingly little is said about the latter).

Once again, we are melding consciousness and language together when, to be accurate, we must continue to keep them separate. In this case, the dangerous phrase is “the nature of seeing.” There is nothing mysterious about seeing in the non-metaphorical sense, but that is not how the word is being used here. Instead, “seeing” is standing for “understanding” or “getting” or “grokking” (if you are nerdy enough to know Robert Heinlein’s Stranger in a Strange Land). Now, I think it is reasonable to assert that animals “grok” if by that we mean that they can reliably respond to environmental signals with strategic behaviors. But anything more than that requires the intervention of language, and that ends up locating consciousness per se at the wrong level of emergence.

OK, that’s enough from me. I don’t think I’ve exhausted the topic, so let me close by saying…

That’s what I think. What do you think?

Image Credit: Pixabay

Top 10 Human-Centered Change & Innovation Articles of August 2023

Drum roll please…

At the beginning of each month, we will profile the ten articles from the previous month that generated the most traffic to Human-Centered Change & Innovation. Did your favorite make the cut?

But enough delay, here are August’s ten most popular innovation posts:

  1. The Paradox of Innovation Leadership — by Janet Sernack
  2. Why Most Corporate Innovation Programs Fail — by Greg Satell
  3. A Top-Down Open Innovation Approach — by Geoffrey A. Moore
  4. Innovation Management ISO 56000 Series Explained — by Diana Porumboiu
  5. Scale Your Innovation by Mapping Your Value Network — by John Bessant
  6. The Impact of Artificial Intelligence on Future Employment — by Chateau G Pato
  7. Leaders Avoid Doing This One Thing — by Robyn Bolton
  8. Navigating the Unpredictable Terrain of Modern Business — by Teresa Spangler
  9. Imagination versus Knowledge — by Janet Sernack
  10. Productive Disagreement Requires Trust — by Mike Shipulski


If you’re not familiar with Human-Centered Change & Innovation, we publish 4-7 new articles every week built around innovation and transformation insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook, Twitter or LinkedIn feeds too!

Have something to contribute?

Human-Centered Change & Innovation is open to contributions from any and all innovation and transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have valuable human-centered change and innovation insights to share with everyone for the greater good. If you’d like to contribute, please contact me.


The Robots Aren’t Really Going to Take Over


GUEST POST from Greg Satell

In 2013, a study at Oxford University found that 47% of jobs in the United States are likely to be replaced by robots over the next two decades. As if that doesn’t seem bad enough, Yuval Noah Harari, in his bestselling book Homo Deus, writes that “humans might become militarily and economically useless.” Yeesh! That doesn’t sound good.

Yet today, ten years after the Oxford study, we are experiencing a serious labor shortage. Even more puzzling is that the shortage is especially acute in manufacturing, where automation is most pervasive. If robots are truly taking over, then why are we having trouble finding enough humans to do the work that needs to be done?

The truth is that automation doesn’t replace jobs, it replaces tasks, and when tasks become automated, they largely become commoditized. So while there are significant causes for concern about automation, such as increasing returns to capital amid decreasing returns to labor, the real danger isn’t with automation itself, but with what we choose to do with it.

Organisms Are Not Algorithms

Harari’s rationale for humans becoming useless is his assertion that “organisms are algorithms.” Much like a vending machine is programmed to respond to buttons, humans and other animals are programmed by genetics and evolution to respond to “sensations, emotions and thoughts.” When those particular buttons are pushed, we respond much like a vending machine does.

He gives various data points for this point of view. For example, he describes psychological experiments in which, by monitoring brainwaves, researchers are able to predict actions, such as whether a person will flip a switch, even before he or she is aware of it. He also points out that certain chemicals, such as Ritalin and Prozac, can modify behavior.

Therefore, he continues, free will is an illusion because we don’t choose our urges. Nobody makes a conscious choice to crave chocolate cake or cigarettes any more than we choose whether to be attracted to someone other than our spouse. Those things are a product of our biological programming.

Yet none of this is at all dispositive. While it is true that we don’t choose our urges, we do choose our actions. We can be aware of our urges and still resist them. In fact, we consider developing the ability to resist urges as an integral part of growing up. Mature adults are supposed to resist things like gluttony, adultery and greed.

Revealing And Building

If you believe that organisms are algorithms, it’s easy to see how humans become subservient to machines. As machine learning techniques combine with massive computing power, machines will be able to predict, with great accuracy, which buttons will lead to what actions. Here again, an incomplete picture leads to a spurious conclusion.

In his 1954 essay, The Question Concerning Technology, the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil are also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

In another essay, Building Dwelling Thinking, Heidegger explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. Once we understand that Mark Zuckerberg, or anyone else for that matter, is working to manipulate us, we can work to prevent it. In fact, knowing that someone or something seeks to control us gives us an urge to resist. If we’re all algorithms, that’s part of the code.

Social Skills Will Trump Cognitive Skills

All of this is, of course, somewhat speculative. What is striking, however, is the extent to which the opposite of what Harari and other “experts” predict is happening. Not only have greater automation and more powerful machine learning algorithms not led to mass unemployment, they have, as noted above, led to a labor shortage. What gives?

To understand what’s going on, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. By now, you can probably predict the punchline. The number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

The true shift in the legal industry will be from cognitive to social skills. When much of the cognitive heavy lifting can be done by machines, attorneys who can show empathy and build trust will have an advantage over those who depend on their ability to retain large amounts of information and read through lots of documents.

Value Never Disappears, It Just Shifts To Another Place

In 1900, 30 million people in the United States worked as farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agricultural workers lost their jobs, mostly due to automation. Yet somehow, the twentieth century was seen as an era of unprecedented prosperity.

You can imagine anyone working in agriculture a hundred years ago would be horrified to find that their jobs would vanish over the next century. If you told them that everything would be okay because they could find work as computer scientists, geneticists or digital marketers, they would probably have thought that you were some kind of a nut.

But consider if you told them that instead of working in the fields all day, they could spend that time in a nice office that was cool and dry because of something called “air conditioning,” and that they would have machines that cook meals without needing wood to be chopped and hauled. To sweeten the pot, you could tell them that “work” would consist largely of talking to other people. They might have imagined it as a paradise.

The truth is that value never disappears, it just shifts to another place. That’s why today we have fewer farmers, but more food and, for better or worse, more lawyers. It is also why it’s highly unlikely that the robots will take over, because we are not algorithms. We have the power to choose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
