Tag Archives: Artificial Intelligence

Will ChatGPT Make Us More or Less Innovative?

GUEST POST from Pete Foley

The rapid emergence of increasingly sophisticated ‘AI’ programs such as ChatGPT will profoundly impact our world in many ways. That will inevitably include innovation, especially the front end. But will it ultimately help or hurt us? Better access to information should be a huge benefit, and my intuition was to dive in and take full advantage. I still think it has enormous upside, but I also think it needs to be treated with care. At this point at least, it’s still a tool, not an oracle. It’s an excellent source for tapping existing information, but it’s not (yet) a source of new ideas. As with any tool, those who deeply understand how it works, including its benefits and its limitations, will get the most from it. And those who use it wrongly could end up doing more harm than good. So below I’ve mapped out a few pros and cons that I see. It’s new, and like everybody else I’m on a learning curve, so I would welcome any and all thoughts on these pros and cons.

What is Innovation?

First, a bit of a sidebar. To understand how to use a tool, I need at least a reasonably clear idea of what goals I want it to help me achieve. Obviously ‘what is innovation’ is a somewhat debatable topic, but my working model is that the front end of innovation typically involves taking existing knowledge or technology and combining it in new, useful ways, or in new contexts, to create something that is new, useful and ideally understandable and accessible. This requires deep knowledge, curiosity and the ability to reframe problems to find new uses for existing assets. A recent illustrative example is Oculus Rift, an innovation that helped make virtual reality accessible by combining fairly mundane components, including a mobile phone screen, a tracking sensor and ski goggles, into something new. But innovation comes in many forms, and can also involve serendipity and keen observation, as in Alexander Fleming’s original discovery of penicillin. Even this, though, required deep domain knowledge to spot the opportunity and to reframe an undesirable mold into a (very) useful pharmaceutical. So my starting point is: which parts of this can ChatGPT help with?

Another sidebar is that innovation is of course far more than simply discovery or a Eureka moment. Turning an idea into a viable product or service usually requires considerable work, with the development of penicillin being a case in point. I’ve no doubt that ChatGPT and its inevitable ‘progeny’ will be of considerable help in that part of the process too. But for starters I’ve focused on what it brings to the discovery phase, and the generation of big, game-changing ideas.

First the Pros:

1. Staying Current: We all have to strike a balance between keeping up with developments in our own fields and trying to come up with new ideas. The sheer volume of new information, especially in developing fields, means that keeping pace with even our own area of expertise has become challenging. But spend too much time just keeping up, and we become followers, not innovators, so we have to carve out time to also stretch existing knowledge. If we don’t get the balance right, and fail to stay current, we risk getting leapfrogged by those who more diligently track the latest discoveries. Simultaneous invention has been pervasive at least since the development of calculus, as one discovery often signposts and lays the path for the next. So fail to stay on top of our field, and we potentially miss a relatively easy step to the next big idea. ChatGPT can become an extremely efficient tool for tracking advances without getting buried in them.

2. Pushing Outside of our Comfort Zone: Breakthrough innovation almost by definition requires us to step beyond the boundaries of our existing knowledge. Whether we are Dyson stealing filtration technology from a sawmill for his unique ‘filterless’ vacuum cleaner, physicians combining stem cell innovation with tech to create rejection-resistant artificial organs, or the Oculus tech mentioned above, innovation almost always requires tapping resources from outside of the established field. If we don’t do this, then we not only tend towards incremental ideas, but also tend to stay in lockstep with other experts in our field. This becomes increasingly the case as an area matures, low-hanging fruit is exhausted, and domain knowledge becomes somewhat commoditized. ChatGPT simply allows us to explore beyond our field far more efficiently than we’ve ever been able to before. And as it or related tech evolves, it will inevitably enable ever more sophisticated search. In my experience it already enables some degree of analogous search if you are thoughtful about how to frame questions, allowing us to more effectively expand searches for existing solutions to problems that lie beyond the obvious. That is potentially really exciting.

Some Possible Cons:

1. Going Down the Rabbit Hole: ChatGPT is crack cocaine for the curious. Mea culpa, this has probably been the most time-consuming blog I’ve ever written. Answers inevitably lead to more questions, and it’s almost impossible to resist playing well beyond the specific goals I started with. It’s fascinating, it’s fun, you learn a lot of stuff you didn’t know, but I at least struggle with discipline and focus when using it. Hopefully that will wear off, and I will find a balance that uses it efficiently.

2. The Illusion of Understanding: This is a bit more subtle, but the act of searching a topic inevitably enhances our understanding of it. Asking questions is as much a part of learning as reading answers, and often requires deep mechanistic understanding. ChatGPT helps us probe faster, and its explanations may help us to understand concepts more quickly. But it also risks creating the illusion of understanding. When the heavy lifting of searching is shifted away from us, we get quick answers, but may also miss out on the deeper mechanistic understanding we’d have gleaned if we’d been forced to work a bit harder. And that deeper understanding can be critical when we are trying to integrate superficially different domains as part of the innovation process. For example, knowing that we can use a patient’s stem cells to minimize rejection of an artificial organ is quite different from understanding how the immune system differentiates between its own and other stem cells. The risk is that sophisticated search engines will do more of the heavy lifting and allow us to move faster, but also leave us with a more superficial understanding, which reduces our ability to spot roadblocks early, or to solve problems as we move to the back end of innovation and reduce an idea to practice.

3. Eureka Moment: That’s the ‘conscious’ watch-out, but there is also an unconscious one. It’s no secret that quite often our biggest ideas come when we are not actually trying. Archimedes had his Eureka moment in the bath, and many of my better ideas come when I least expect them, perhaps in the shower, when I first wake up, or when I’m out having dinner. The neuroscience of creativity helps explain this: the restructuring of problems that leads to new insight, and the integration of ideas, works mostly unconsciously, when we are not consciously focused on a problem. It’s analogous to the ‘tip of the tongue’ effect, where the harder we try to remember something, the harder it gets, but then it comes to us later when we are not trying. The key for the Eureka moment, though, is that we need sufficiently deep knowledge for those integrations to occur. If ChatGPT increases the illusion of understanding, we could see fewer of those Eureka moments, and fewer of the ‘obvious in hindsight’ ideas they create.

Conclusion

I think that ultimately innovation will be accelerated by ChatGPT and what follows it, perhaps quite dramatically. But I also think that we as innovators need to try to peel back the layers and understand as much as we can about these tools, as there is potential for us to trip up. We need to constantly reinvent the way we interact with them, leveraging them as sophisticated innovation tools while not letting them become oracles. We also need to ensure that we, and future generations, use them to extend our thinking skill set, not as a proxy for it. The calculator has in some ways made us all mathematical geniuses, but in other ways has reduced large swathes of the population’s ability to do basic math. We need to be careful that ChatGPT doesn’t do the same to our need for cognition, and to deep mechanistic and/or critical thinking.

Image credit: Pixabay


Top 100 Innovation and Transformation Articles of 2022

2021 marked the re-birth of my original Blogging Innovation blog as a new blog called Human-Centered Change and Innovation.

Many of you may know that Blogging Innovation grew into the world’s most popular global innovation community before being re-branded as InnovationExcellence.com and being ultimately sold to DisruptorLeague.com.

Thanks to an outpouring of support I’ve ignited the fuse of this new multiple author blog around the topics of human-centered change, innovation, transformation and design.

I feel blessed that the global innovation and change professional communities have responded with a growing roster of contributing authors and more than 17,000 newsletter subscribers.

To celebrate we’ve pulled together the Top 100 Innovation and Transformation Articles of 2022 from our archive of over 1,000 articles on these topics.

We do some other rankings too.

We just published the Top 40 Innovation Bloggers of 2022, and as the volume of this blog has grown, we have brought back our monthly article ranking to complement this annual one.

But enough delay, here are the 100 most popular innovation and transformation posts of 2022.

Did your favorite make the cut?

1. A Guide to Organizing Innovation – by Jesse Nieminen

2. The Education Business Model Canvas – by Arlen Meyers, M.D.

3. 50 Cognitive Biases Reference – Free Download – by Braden Kelley

4. Why Innovation Heroes Indicate a Dysfunctional Organization – by Steve Blank

5. The One Movie All Electric Car Designers Should Watch – by Braden Kelley

6. Don’t Forget to Innovate the Customer Experience – by Braden Kelley

7. What Latest Research Reveals About Innovation Management Software – by Jesse Nieminen

8. Is Now the Time to Finally End Our Culture of Disposability? – by Braden Kelley

9. Free Innovation Maturity Assessment – by Braden Kelley

10. Cognitive Bandwidth – Staying Innovative in ‘Interesting’ Times – by Pete Foley

11. Is Digital Different? – by John Bessant

12. Top 40 Innovation Bloggers of 2021 – Curated by Braden Kelley

13. Can We Innovate Like Elon Musk? – by Pete Foley

14. Why Amazon Wants to Sell You Robots – by Shep Hyken

15. Free Human-Centered Change Tools – by Braden Kelley

16. What is Human-Centered Change? – by Braden Kelley

17. Not Invented Here – by John Bessant

18. Top Five Reasons Customers Don’t Return – by Shep Hyken

19. Visual Project Charter™ – 35″ x 56″ (Poster Size) and JPG for Online Whiteboarding – by Braden Kelley

20. Nine Innovation Roles – by Braden Kelley

21. How Consensus Kills Innovation – by Greg Satell

22. Why So Much Innoflation? – by Arlen Meyers, M.D.

23. ACMP Standard for Change Management® Visualization – 35″ x 56″ (Poster Size) – Association of Change Management Professionals – by Braden Kelley

24. 12 Reasons to Write Your Own Letter of Recommendation – by Arlen Meyers, M.D.

25. The Five Keys to Successful Change – by Braden Kelley

26. Innovation Theater – How to Fake It ‘Till You Make It – by Arlen Meyers, M.D.

27. Five Immutable Laws of Change – by Greg Satell

28. How to Free Ourselves of Conspiracy Theories – by Greg Satell

29. An Innovation Action Plan for the New CTO – by Steve Blank

30. How to Write a Failure Resume – by Arlen Meyers, M.D.


31. Entrepreneurs Must Think Like a Change Leader – by Braden Kelley

32. No Regret Decisions: The First Steps of Leading through Hyper-Change – by Phil Buckley

33. Parallels Between the 1920’s and Today Are Frightening – by Greg Satell

34. Technology Not Always the Key to Innovation – by Braden Kelley

35. The Era of Moving Fast and Breaking Things is Over – by Greg Satell

36. A Startup’s Guide to Marketing Communications – by Steve Blank

37. You Must Be Comfortable with Being Uncomfortable – by Janet Sernack

38. Four Key Attributes of Transformational Leaders – by Greg Satell

39. We Were Wrong About What Drove the 21st Century – by Greg Satell

40. Stoking Your Innovation Bonfire – by Braden Kelley

41. Now is the Time to Design Cost Out of Our Products – by Mike Shipulski

42. Why Good Ideas Fail – by Greg Satell

43. Five Myths That Kill Change and Transformation – by Greg Satell

44. 600 Free Innovation, Transformation and Design Quote Slides – Curated by Braden Kelley

45. FutureHacking – by Braden Kelley

46. Innovation Requires Constraints – by Greg Satell

47. The Experiment Canvas™ – 35″ x 56″ (Poster Size) – by Braden Kelley

48. The Pyramid of Results, Motivation and Ability – by Braden Kelley

49. Four Paradigm Shifts Defining Our Next Decade – by Greg Satell

50. Why Most Corporate Mindset Programs Are a Waste of Time – by Alain Thys


51. Impact of Cultural Differences on Innovation – by Jesse Nieminen

52. 600+ Downloadable Quote Posters – Curated by Braden Kelley

53. The Four Secrets of Innovation Implementation – by Shilpi Kumar

54. What Entrepreneurship Education Really Teaches Us – by Arlen Meyers, M.D.

55. Reset and Reconnect in a Chaotic World – by Janet Sernack

56. You Can’t Innovate Without This One Thing – by Robyn Bolton

57. Why Change Must Be Built on Common Ground – by Greg Satell

58. Four Innovation Ecosystem Building Blocks – by Greg Satell

59. Problem Seeking 101 – by Arlen Meyers, M.D.

60. Taking Personal Responsibility – Back to Leadership Basics – by Janet Sernack

61. The Lost Tribe of Medicine – by Arlen Meyers, M.D.

62. Invest Yourself in All That You Do – by Douglas Ferguson

63. Bureaucracy and Politics versus Innovation – by Braden Kelley

64. Dare to Think Differently – by Janet Sernack

65. Bridging the Gap Between Strategy and Reality – by Braden Kelley

66. Innovation vs. Invention vs. Creativity – by Braden Kelley

67. Building a Learn It All Culture – by Braden Kelley

68. Real Change Requires a Majority – by Greg Satell

69. Human-Centered Innovation Toolkit – by Braden Kelley

70. Silicon Valley Has Become a Doomsday Machine – by Greg Satell

71. Three Steps to Digital and AI Transformation – by Arlen Meyers, M.D.

72. We need MD/MBEs not MD/MBAs – by Arlen Meyers, M.D.

73. What You Must Know Before Leading a Design Thinking Workshop – by Douglas Ferguson

74. New Skills Needed for a New Era of Innovation – by Greg Satell

75. The Leader’s Guide to Making Innovation Happen – by Jesse Nieminen

76. Marriott’s Approach to Customer Service – by Shep Hyken

77. Flaws in the Crawl Walk Run Methodology – by Braden Kelley

78. Disrupt Yourself, Your Team and Your Organization – by Janet Sernack

79. Why Stupid Questions Are Important to Innovation – by Greg Satell

80. Breaking the Iceberg of Company Culture – by Douglas Ferguson


81. A Brave Post-Coronavirus New World – by Greg Satell

82. What Can Leaders Do to Have More Innovative Teams? – by Diana Porumboiu

83. Mentors Advise and Sponsors Invest – by Arlen Meyers, M.D.

84. Increasing Organizational Agility – by Braden Kelley

85. Should You Have a Department of Artificial Intelligence? – by Arlen Meyers, M.D.

86. This 9-Box Grid Can Help Grow Your Best Future Talent – by Soren Kaplan

87. Creating Employee Connection Innovations in the HR, People & Culture Space – by Chris Rollins

88. Developing 21st-Century Leader and Team Superpowers – by Janet Sernack

89. Accelerate Your Mission – by Brian Miller

90. How the Customer in 9C Saved Continental Airlines from Bankruptcy – by Howard Tiersky

91. How to Effectively Manage Remotely – by Douglas Ferguson

92. Leading a Culture of Innovation from Any Seat – by Patricia Salamone

93. Bring Newness to Corporate Learning with Gamification – by Janet Sernack

94. Selling to Generation Z – by Shep Hyken

95. Importance of Measuring Your Organization’s Innovation Maturity – by Braden Kelley

96. Innovation Champions and Pilot Partners from Outside In – by Arlen Meyers, M.D.

97. Transformation Insights – by Bruce Fairley

98. Teaching Old Fish New Tricks – by Braden Kelley

99. Innovating Through Adversity and Constraints – by Janet Sernack

100. It is Easier to Change People than to Change People – by Annette Franz

Curious which article just missed the cut? Well, here it is just for fun:

101. Chance to Help Make Futurism and Foresight Accessible – by Braden Kelley

These are the Top 100 innovation and transformation articles of 2022 based on the number of page views. If your favorite Human-Centered Change & Innovation article didn’t make the cut, then send a tweet to @innovate and maybe we’ll consider doing a People’s Choice List for 2022.

If you’re not familiar with Human-Centered Change & Innovation, we publish 1-6 new articles every week focused on human-centered change, innovation, transformation and design insights from our roster of contributing authors and ad hoc submissions from community members. Get the articles right in your Facebook feed or on Twitter or LinkedIn too!

Editor’s Note: Human-Centered Change & Innovation is open to contributions from any and all the innovation & transformation professionals out there (practitioners, professors, researchers, consultants, authors, etc.) who have a valuable insight to share with everyone for the greater good. If you’d like to contribute, contact us.


Unlocking the Power of Cause and Effect

GUEST POST from Greg Satell

In 2011, IBM’s Watson system beat the best human players in the game show, Jeopardy! Since then, machines have shown that they can outperform skilled professionals in everything from basic legal work to diagnosing breast cancer. It seems that machines just get smarter and smarter all the time.

Yet that is largely an illusion. While even a very young human child understands the basic concept of cause and effect, computers rely on correlations. In effect, while a computer can associate the sun rising with the day breaking, it doesn’t understand that one causes the other, which limits how helpful computers can be.

That’s beginning to change. A group of researchers, led by artificial intelligence pioneer Judea Pearl, are working to help computers understand cause and effect based on a new causal calculus. The effort is still in its nascent stages, but if they’re successful we could be entering a new era in which machines not only answer questions, but help us pose new ones.

Observation and Association

Most of what we know comes from inductive reasoning. We make some observations and associate those observations with specific outcomes. For example, if we see animals going to drink at a watering hole every morning, we would expect to see them at the same watering hole in the future. Many animals share this type of low-level reasoning and use it for hunting.

Over time, humans learned how to store these observations as data and that’s helped us make associations on a much larger scale. In the early years of data mining, data was used to make very basic types of predictions, such as the likelihood that somebody buying beer at a grocery store will also want to buy something else, like potato chips or diapers.

The achievement of the last decade or so is that advancements in algorithms, such as neural networks, have allowed us to make much more complex associations. To take one example, systems that have observed thousands of mammograms have learned to identify the ones that show a tumor with a very high degree of accuracy.

However, and this is a crucial point, the system that detects cancer doesn’t “know” it’s cancer. It doesn’t associate the mammogram with an underlying cause, such as a gene mutation or lifestyle choice, nor can it suggest a specific intervention, such as chemotherapy. Perhaps most importantly, it can’t imagine other possibilities and suggest alternative tests.

Confounding Intervention

The reason that correlation is often very different from causality is the presence of something called a confounding factor. For example, we might find a correlation between high readings on a thermometer and ice cream sales and conclude that if we put the thermometer next to a heater, we can raise sales of ice cream.

I know that seems silly, but problems with confounding factors arise in the real world all the time. Data bias is especially problematic. If we find a correlation between certain teachers and low test scores, we might assume that those teachers are causing the low test scores when, in actuality, they may be great teachers who work with problematic students.

Another example is the high degree of correlation between criminal activity and certain geographical areas, where poverty is a confounding factor. If we use zip codes to predict recidivism rates, we are likely to give longer sentences and deny parole to people because they are poor, while those with more privileged backgrounds get off easy.

These are not at all theoretical examples. In fact, they happen all the time, which is why caring, competent teachers can, and do, get fired for those particular qualities and people from disadvantaged backgrounds get mistreated by the justice system. Even worse, as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it’s so important that we design our systems to be auditable, explainable and transparent.
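
To make the confounding problem concrete, here is a minimal simulation sketch of the teacher example (the variable names and effect sizes are my own hypothetical illustration, not data from any study). A hidden disadvantage factor drives both which teacher a student is assigned and how the student scores, so a naive comparison blames the teacher for an effect the teacher does not cause:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden confounder: 1 = student faces serious disadvantages, 0 = not.
z = rng.binomial(1, 0.5, n)

# "Treatment": which teacher a student gets. Disadvantaged students are
# far more likely to be assigned to teacher B (t = 1).
t = rng.binomial(1, np.where(z == 1, 0.8, 0.2))

# Outcome: test scores depend on disadvantage, NOT on the teacher.
y = 70 - 15 * z + rng.normal(0, 5, n)

# Naive comparison: teacher B looks roughly 9 points worse...
naive = y[t == 1].mean() - y[t == 0].mean()

# ...but comparing within each stratum of the confounder shows the
# teacher effect is approximately zero.
adjusted = np.mean([y[(t == 1) & (z == v)].mean() - y[(t == 0) & (z == v)].mean()
                    for v in (0, 1)])

print(f"naive teacher 'effect':     {naive:+.2f} points")    # about -9
print(f"confounder-adjusted effect: {adjusted:+.2f} points")  # about 0
```

Stratifying on the confounder recovers the truth; the hard part in practice, as the examples above suggest, is knowing that the confounder exists and measuring it.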

Imagining A Counterfactual

Another confusing thing about causation is that not all causes are the same. Some causes are sufficient in themselves to produce an effect, while others are necessary, but not sufficient. Obviously, if we intend to make some progress we need to figure out what type of cause we’re dealing with. The way to do that is by imagining a different set of facts.

Let’s return to the example of teachers and test scores. Once we have controlled for problematic students, we can begin to ask if lousy teachers are enough to produce poor test scores or if there are other necessary causes, such as poor materials, decrepit facilities, incompetent administrators and so on. We do this by imagining counterfactuals, such as “What if there were better materials, facilities and administrators?”

Humans naturally imagine counterfactuals all the time. We wonder what would be different if we took another job, moved to a better neighborhood or ordered something else for lunch. Machines, however, have great difficulty with things like counterfactuals, confounders and other elements of causality because there’s been no standard way to express them mathematically.

That, in a nutshell, is what Judea Pearl and his colleagues have been working on for the past 25 years, and many believe that the project is finally ready to bear fruit. Combining humans’ innate ability to imagine counterfactuals with machines’ ability to crunch almost limitless amounts of data could really be a game changer.
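
For the mathematically curious, the heart of Pearl’s approach is the do-operator, which distinguishes passively observing X from intervening to set it. When a set of confounders Z satisfies his “backdoor” criterion, an interventional question can be answered from purely observational data:

$$P(Y \mid \mathrm{do}(X=x)) = \sum_{z} P(Y \mid X=x, Z=z)\,P(Z=z)$$

In the teacher example, Z is the students’ background: compare outcomes within each stratum of Z, then average those comparisons weighted by how common each stratum is in the population.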

Moving Towards Smarter Machines

Make no mistake, AI systems’ ability to detect patterns has proven to be amazingly useful. In fields ranging from genomics to materials science, researchers can scour massive databases and identify associations that a human would be unlikely to detect manually. Those associations can then be studied further to validate whether they are useful or not.

Still, the fact that our machines don’t understand basic concepts, such as the fact that thermometers don’t increase ice cream sales, limits their effectiveness. As we learn how to design our systems to detect confounders and imagine counterfactuals, we’ll be able to evaluate not only the effectiveness of interventions that have been tried, but also those that haven’t, which will help us come up with better solutions to important problems.

For example, in a 2019 study the Congressional Budget Office estimated that raising the national minimum wage to $15 per hour would result in a decrease in employment from zero to four million workers, based on a number of observational studies. That’s an enormous range. However, if we were able to identify and mitigate confounders, we could narrow down the possibilities and make better decisions.

While still nascent, the causal revolution in AI is already underway. McKinsey recently announced the launch of CausalNex, an open source library designed to identify cause and effect relationships in organizations, such as what makes salespeople more productive. Causal approaches to AI are also being deployed in healthcare to understand the causes of complex diseases such as cancer and evaluate which interventions may be the most effective.

Some look at the growing excitement around causal AI and scoff that it is just common sense. But that is exactly the point. Our historic inability to encode a basic understanding of cause and effect relationships into our algorithms has been a serious impediment to making machines truly smart. Clearly, we need to do better than merely fitting curves to data.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Challenges of Artificial Intelligence Adoption, Dissemination and Implementation

GUEST POST from Arlen Meyers, M.D.

Dissemination and Implementation Science (DIS) is a growing research field that seeks to inform how evidence-based interventions can be successfully adopted, implemented, and maintained in health care delivery and community settings.

Here is what you should know about dissemination and implementation.

Sickcare artificial intelligence products and services have a unique set of barriers to dissemination and implementation.

Every sickcare AI entrepreneur will eventually be faced with the task of finding customers willing and able to buy the product and integrate it into their facility. But not every potential customer or segment is the same.

There are differences in:

  1. The governance structure
  2. The process for vetting and choosing a particular vendor or solution
  3. The makeup of the buying group and decision makers
  4. The process customers use to disseminate and implement the solution
  5. Whether or not they are willing to work with vendors on pilots
  6. The terms and conditions of contracts
  7. The business model of the organization when it comes to working with early-stage companies
  8. How stakeholders are educated and trained
  9. When, how, and which end users and stakeholders have input in the decision
  10. The length of the sales cycle
  11. The complexity of the decision-making process
  12. Whether the product is a point solution or platform
  13. Whether the product can be used throughout all parts, or just a few, of the sickcare delivery network
  14. A transactional approach vs. a partnership and future-development one
  15. The service after the sale arrangement

Here is what Sales Navigator won’t tell you.

Here is why ColdLinking does not work.

When it comes to AI product marketing and sales, when you have seen one successful integration, you have seen one process for making it happen, and the success of the dissemination and implementation that creates the promised results will vary from one place to the next.

Do your homework. One size does not fit all.

Image credit: Pixabay


We Must Rethink the Future of Technology

GUEST POST from Greg Satell

The industrial revolution of the 18th century was a major turning point. Steam power, along with other advances in areas like machine tools and chemistry, transformed industry from the work of craftsmen and physical labor to that of managing machines. For the first time in world history, living standards grew consistently.

Yet during the 20th century, all of that technology needed to be rethought. Steam engines gave way to electric motors and internal combustion engines. The green revolution and antibiotics transformed agriculture and medicine. In the latter part of the century digital technology created a new economy based on information.

Today, we are on the brink of a new era of innovation in which we will need to rethink technology once again. Much like a century ago, we are developing new, far more powerful technologies that will change how we organize work, identify problems and collaborate to solve them. We will have to change how we compete and even redefine prosperity itself.

The End of the Digital Revolution

Over the past few decades, digital technology has become almost synonymous with innovation. Every few years, a new generation of chips would come out that was better, faster and cheaper than the previous one. This opened up new possibilities that engineers and entrepreneurs could exploit to create new products that would disrupt entire industries.

Yet there are only so many transistors you can cram onto a silicon wafer and digital computing is nearing its theoretical limits. We have just a few generations of advancements left before the digital revolution grinds to a halt. There will be some clever workarounds to stretch the technology a bit further, but we’re basically at the end of the digital era.

That’s not necessarily a bad thing. In many ways, the digital revolution has been a huge disappointment. Except for a relatively brief period in the late nineties and early aughts, the rise of digital technology has been marked by diminished productivity growth and rising inequality. Studies have also shown that some technologies, such as social media, worsen mental health.

Perhaps even more importantly, the end of the digital era will usher in a new age of heterogeneous computing in which we apply different computing architectures to specific tasks. Some of these architectures will be digital, but others, such as quantum and neuromorphic computing, will not be.

The New Convergence

In the 90s, media convergence seemed like a futuristic concept. We consumed information through separate and distinct channels, such as print, radio and TV. The idea that all media would merge into one digital channel just felt unnatural. Many informed analysts at the time doubted that it would ever actually happen.

Yet today, we can use a single device to listen to music, watch videos, read articles and even publish our own documents. In fact, we do these things so naturally we rarely stop to think how strange the concept once seemed. The Millennial generation doesn’t even remember the earlier era of fragmented media.

Today, we’re entering a new age of convergence in which computation powers the physical, as well as the virtual world. We’re beginning to see massive revolutions in areas like materials science and synthetic biology that will reshape massive industries such as energy, healthcare and manufacturing.

The impact of this new convergence is likely to far surpass anything that happened during the digital revolution. The truth is that we still eat, wear and live in the physical world, so innovating with atoms is far more valuable than doing so with bits.

Rethinking Prosperity

It’s a strange anachronism that we still evaluate prosperity in terms of GDP. The measure, developed by Simon Kuznets in 1934, became widely adopted after the Bretton Woods Conference a decade later. It is basically a remnant of the industrial economy, but even back then Kuznets commented, “the welfare of a nation can scarcely be inferred from a measure of national income.”

To understand why GDP is problematic, think about a smartphone, which incorporates many technologies, such as a camera, a video player, a web browser, a GPS navigator and more. Peter Diamandis has estimated that a typical smartphone today incorporates applications that were worth $900,000 when they were first introduced.

So, you can see the potential for smartphones to massively deflate GDP. First, there is the price of the smartphone itself, which is just a small fraction of what the technology in it would once have cost. Then there is the fact that we save fuel by not getting lost, rarely pay to get pictures developed and often watch media for free. All of this reduces GDP, but makes us better off.

There are better ways to measure prosperity. The UN has proposed a measure that incorporates 9 indicators, the OECD has developed an alternative approach that aggregates 11 metrics, UK Prime Minister David Cameron has promoted a well-being index and even the small city of Somerville, MA has a happiness project.

Yet still, we seem to prefer GDP because it’s simple, not because it’s accurate. If we continue to increase GDP, but our air and water are more polluted, our children less educated and less healthy, and we face heightened levels of anxiety and depression, then what have we really gained?

Empowering Humans to Design Work for Machines

Today, we face enormous challenges. Climate change threatens to impose enormous costs on our children and grandchildren. Hyperpartisanship, in many ways driven by social media, has created social strife and legislative inertia and has helped fuel the rise of authoritarian populism. Income inequality, at its highest levels since the 1920s, threatens to rip shreds in the social fabric.

Research shows that there is an increasing divide between workers who perform routine tasks and those who perform non-routine tasks. Routine tasks are easily automated. Non-routine tasks are not, but can be greatly augmented by intelligent systems. It is through this augmentation that we can best create value in the new century.

The future will be built by humans collaborating with other humans to design work for machines. That is how we will create the advanced materials, the miracle cures and new sources of clean energy that will save the planet. Yet if we remain mired in an industrial mindset, we will find it difficult to harness the new technological convergence to solve the problems we need to.

To succeed in the 21st century, we need to rethink our economy and our technology and begin to ask better questions. How does a particular technology empower people to solve problems? How does it improve lives? In what ways does it need to be constrained to limit adverse effects through economic externalities?

As our technology becomes almost unimaginably powerful, these questions will only become more important. We have the power to shape the world we want to live in. Whether we have the will remains to be seen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Sickcare AI Field Notes

I recently participated in a conference on Artificial Intelligence (AI) in healthcare. It was the first onsite meeting after 900 days of the pandemic.

Here is a report from the front:

  1. AI has a way to go before it can substitute for physician judgment, intuition, creativity and empathy
  2. There seems to be an inherent conflict between using AI to standardize decisions compared to using it for mass customization. Efforts to develop customized care must be designed around a deep understanding of what happens at the ground level along the patient pathway and must incorporate patient engagement by focusing on such things as shared decision-making, definition of appointments, and self-management, all of which are elements of a “build-to-order” approach.
  3. When it comes to dissemination and implementation, culture eats strategy for lunch.
  4. The majority of the conversations had to do with the technical aspects and use cases for AI. A small amount was about how to get people in your organization to understand and use it.
  5. The goal is to empower clinical teams to collaborate with patient teams and that will take some work. Moving sick care to healthcare also requires changing a sprint mindset to a marathon relay race mindset with all the hazards and risks of dropped handoffs and referral and information management leaks.
  6. AI is a facilitating technology that cuts across many applications, use cases and intended uses in sick care. Someday we might be recruiting medical students, residents and other sick care workers using AI instead of those silly resumes.
  7. The value proposition of AI includes improving workflow and improving productivity
  8. AI requires large, clean data sets regardless of applications
  9. It will take a while to create trust in technology
  10. There needs to be transparency in data models
  11. There is a large repository of data from non-traditional sources that needs to be mined, e.g. social media sites, community-based sites providing tests (like health clubs and health fairs), as well as post-acute care facilities
  12. AI is enabling both the clinical and business models of value based care
  13. Cloud based AI is changing diagnostic imaging and pattern recognition which will change manpower dynamics
  14. There are potential opportunities in AI for quality outcome stratification, cost accounting and pricing of episodes of care, determining risk premiums and optimizing margins for a bundled priced procedure given geographic disparities in quality and cost.
  15. We are in the second era of AI, which is based on deep learning vs. rules-based algorithms
  16. Value based care requires care coordination, risk stratification, patient centricity and managing risk
  17. Machine learning is being used, like Moneyball, to pick startup winners and losers, with a dose of high touch.
  18. It is encouraging to see more and more doctors attending and speaking at these kinds of meetings and lending a much needed perspective and reality check to technologists and non-sick care entrepreneurs. There were few healthcare executives besides those who were invited to be on panels.
  19. Overcoming the barriers to AI in sick care has mostly to do with changing behavior and not dwelling on the technicalities, but, rather, focusing on the jobs that doctors need to get done.
  20. The costs of AI, particularly for small, independent practitioners, are often unaffordable, particularly when bundled with crippling EMR expenses. Moore’s law has not yet impacted medicine.
  21. The promise of using AI to get more done with less conflicts with the paradox of productivity.
  22. Top-of-mind problems to be solved were how to increase revenues, cut costs, fill the workforce pipelines and address burnout and behavioral health problems among employees and patients with scarce resources.
  23. Nurses, pharmacists, public health professionals and veterinarians were underrepresented
  24. Payers were scarce
  25. Patients were scarce
  26. Students, residents and clinicians were looking for ways to get side gigs, non-clinical careers and exit ramps if need be.
  27. 70% of AI applications are in radiology
  28. AI is migrating from shiny to standard, running in the background to power diverse remote care modalities
  29. Chronic disease management and behavioral health have replaced infectious disease as the global care management challenges
  30. AI education and training in sickcare professional schools is still woefully absent but international sickcare professional schools are filling the gaps
  31. Process and workflow improvements are a necessary part of digital and AI transformation

At its core, AI is part of a sick care eco-nervous system “brain” that is designed to change how doctors and patients think, feel and act as part of continuous behavioral improvement. Outcomes are irrelevant without impact.

AI is another facilitating technology that is part and parcel of almost every aspect of sick care. Like other shiny new objects, it remains to be seen how much value it actually delivers on its promise. I look forward to future conferences where we will be discussing how, not if, to use AI, and comparing best practices and results, not fairy tales, and comparing mine with yours.


Should You Have a Department of Artificial Intelligence?

GUEST POST from Arlen Meyers, M.D.

Several hospitals, academic medical centers and medical schools are creating artificial intelligence organizational centers, institutes and programs. Examples are Stanford, the University of Colorado, Children’s Hospital of Orange County and Duke.

If you are contemplating doing the same, think about what the best organizational structure is. There’s a lot of debate about where AI and analytics capabilities should reside within organizations. Often leaders simply ask, “What organizational model works best?” and then, after hearing what succeeded at other companies, do one of three things: consolidate the majority of AI and analytics capabilities within a central “hub”; decentralize them and embed them mostly in the business units (“the spokes”); or distribute them across both, using a hybrid (“hub-and-spoke”) model. We’ve found that none of these models is always better than the others at getting AI up to scale; the right choice depends on a firm’s individual situation.


The decision will depend on:

  1. What problems are you trying to solve? Form follows function.
  2. What resources do you have? People, money, processes, infrastructure, IP protection?
  3. What is your level of digital transformation?
  4. What is the level of your organizational innovation readiness?
  5. What are the underlying hypotheses of your intrapreneurial business model canvas and what evidence do you have that they are valid?
  6. How will you overcome the barriers to dissemination and implementation?
  7. What processes do you have in place to scale?
  8. Do you have the right people?
  9. Do you have a culture of innovation silos and, if so, how will you break them down?

10. How will you measure results? Dr. Anthony Chang, the co-founder of the American Board of Artificial Intelligence, suggests that the following are some helpful metrics to measure the artificial intelligence capabilities of the health system in the context of an individual AI project:

AI Project Score

The projects that involve machine learning and artificial intelligence, either clinical or administrative, can be followed in stages (with each stage scored 1 point, to a maximum of 5 points) and scored to keep track of progress as well as maintain momentum:

Stage 1: Ideation. The project is first discussed and brought to a regular meeting for input from all stakeholders. This is perhaps the most important part of an AI project, yet it is often not given enough discussion and consideration.

Stage 2: Preparation. After approval from the group, the data access and curation takes place in order to perform the ML/AI steps that ensue. The team should appreciate that this stage takes the most effort and will require sufficient resources.

Stage 3: Operation. After the data is curated and managed, this stage entails a collaborative effort during the feature engineering and selection process. Using the ML/AI tools, the team then creates the algorithms that will lead to the models that will be used later on in the project.

Stage 4: Presentation. Upon completion of the model with real-world data, the project is presented to the group and, depending on its nature, it is either presented internally only, presented at a regional or national meeting, or advanced to be published in a journal.

Stage 5: Implementation. Beyond the presentation and publication, it is essential for the AI project to be implemented in a real-world setting using real-world data. The project still requires continual surveillance and maintenance, as models and data often fatigue.
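
As a minimal sketch of how a team might track this rubric in practice (the names and structure here are my own illustration, not Dr. Chang’s specification):

```python
from enum import IntEnum

class Stage(IntEnum):
    """The five AI project stages; each completed stage earns 1 point."""
    IDEATION = 1
    PREPARATION = 2
    OPERATION = 3
    PRESENTATION = 4
    IMPLEMENTATION = 5

def project_score(completed: set) -> int:
    """AI Project Score: 1 point per completed stage, to a maximum of 5."""
    return len(completed)

# Example: a project that has been ideated, prepped and modeled scores 3/5.
score = project_score({Stage.IDEATION, Stage.PREPARATION, Stage.OPERATION})
print(f"AI Project Score: {score}/5")
```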

11. Are you connected to the other parts of the healthcare AI ecosystem?


12. Are you prepared to overcome the ethical, legal, social, economic and privacy issues?

Feeding the organizational beasts that are resistant to change is hard. They have an insatiable appetite. Be sure your pantry is well stocked.

Image credit: Pixabay


Three Steps to Digital and AI Transformation

GUEST POST from Arlen Meyers, M.D.

In his book, The Four Steps to the Epiphany, Steve Blank described what has become the gospel of lean startup methodologies: customer discovery, customer validation, customer creation and company building.

The path to sickcare digital transformation is a bit shorter, but certainly no less difficult and plagued by failure: Personal innovation readiness, organizational innovation readiness and digital/AI transformation.

PERSONAL INNOVATION READINESS

Are you prepared to innovate? Here’s what you should know about innovation.

Before you start, prepare yourself with these things:

MINDSET

Starting down the entrepreneurship path means that you will not only have to change your mind about things, more importantly, you will have to change your mindset. Don’t make these rookie mindset mistakes. Here’s what it means to have an entrepreneurial mindset. There is a difference between a clinical and an entrepreneurial mindset. Innovation starts with the right mindset.

Here is how to cope in a VUCA world.

MOTIVATION

Organizational behavior gurus have been studying how to motivate employees for a very long time. Most have failed.

Indeed, most of your ideas will fail. Consequently, you will need a source of intrinsic motivation to keep you going. Make it personal, but don’t take it personally. Find the right mentors and sponsors to keep you on track and support you when you are down. Create a personal advisory board. Develop these entrepreneurial habits. Practice the power of negative entrepreneurial thinking.

MEANING

Meaning should drive what you are about to do. Practice virtuous entrepreneurship and find your ikigai. Instead of starting with the end in mind, start with the why in mind. Prune. Let go of the banana.

MEANS

Once these attitudes are in place, then focus on building your entrepreneurial knowledge, skills, behaviors and competencies. Take a financial inventory. Start accumulating the physical, human and emotional resources you will need to begin and sustain your journey. In addition to knowledge, you will need resources, networks, mentors, peer support and non-clinical career guidance.

METRICS

What are some standards and metrics you can use to measure your innovation readiness, e.g. in the use of artificial intelligence in medicine?

The American National Standards Institute (ANSI) has released a new report that reflects stakeholder recommendations and opportunities for greater coordination of standardization for artificial intelligence (AI) in healthcare. The report, “Standardization Empowering AI-Enabled Systems in Healthcare,” reflects feedback from a 2020 ANSI leadership survey and national workshop, and pinpoints foundational principles and potential next steps for ANSI to work with standards developing organizations, the National Institute of Standards and Technology, other government agencies, industry, and other affected stakeholders.

The newly developed Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) was found to be a valid and reliable tool for evaluating and monitoring the perceived readiness of medical students for AI technologies and applications. Medical schools may add ‘a physician training perspective that is compatible with AI in medicine’ to their curricula by using MAIRS-MS. Medical and health science education institutions could benefit from the scale as a curriculum development tool, both for assessing learner needs and for measuring participants’ perceived readiness at the end of a course.

As an important step to ensure successful integration of AI and avoid unnecessary investments and costly failures, better consideration should be given to: (1) Needs and added-value assessment; (2) Workplace readiness: stakeholder acceptance and engagement; (3) Technology-organization alignment assessment and (4) Business plan: financing and investments. In summary, decision-makers and technology promoters should better address the complexity of AI and understand the systemic challenges raised by its implementation in healthcare organizations and systems.

ORGANIZATIONAL INNOVATION READINESS

Improvement readiness is not the same as innovation readiness.

Gifford Pinchot, who originated the term “intrapreneur”, has suggested that you rate your organization in several domains to see whether your innovation future looks bright or bleak:

  1. Transmission of vision and strategic intent
  2. Tolerance for risk, failure and mistakes
  3. Support for intrapreneurs
  4. Managers who support innovation
  5. Empowered cross functional teams
  6. Decision making by the doers
  7. Discretionary time to innovate
  8. Attention on the new, not the now
  9. Self-selection
  10. No early hand-offs to managers
  11. Internal boundary crossing
  12. Strong organizational culture of support
  13. Focus on customers
  14. Choice of internal suppliers
  15. Measurement of innovation
  16. Transparency and truth
  17. Good treatment of people
  18. Ethical and professional
  19. Swinging for singles, not home runs
  20. Robust external open networks

If you ask a sample of people to rate these in your company on a scale of 1-10, don’t be surprised if the average lands somewhere between 2 and 4. Few organizations, you see, are truly innovative or have a truly innovative culture. Most don’t even think about how to bridge the now with the new, let alone measure it.
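
As a quick sketch of that exercise (the ratings below are hypothetical, purely for illustration), you would collect each respondent’s 1-10 score for all 20 domains and look at the averages:

```python
import statistics

# Each respondent rates all 20 Pinchot domains from 1 (poor) to 10 (great).
ratings = {
    "respondent_1": [3, 2, 4, 1, 2, 3, 2, 1, 3, 2, 4, 3, 5, 2, 1, 3, 4, 5, 2, 3],
    "respondent_2": [2, 3, 2, 2, 1, 4, 3, 2, 2, 3, 3, 2, 4, 3, 2, 4, 5, 4, 1, 2],
}

per_person = {name: statistics.mean(scores) for name, scores in ratings.items()}
overall = statistics.mean(per_person.values())

print(per_person)                  # each respondent's average across 20 domains
print(f"overall: {overall:.1f}")   # Pinchot predicts most companies land at 2-4
```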

Do a cultural audit. Creating a culture of innovation must include SALT and PRICES:

  • Process
  • Recognition
  • Incentives
  • Champions
  • Encouragement
  • Structure

Here is a rubric that might help get you started.

Learn from companies in other industries who transformed. Here are some tips from Levi Strauss.

DIGITAL/AI TRANSFORMATION

Develop and deploy the 6Ps:

  1. Problem seeking
  2. Problem solving
  3. People
  4. Platform/infrastructure
  5. Process/Project management
  6. Performance indicators that meet clinical, operational and business objectives and achieve the quintuple aims.

Here are some sickcare digital transformation tips.

The path to the end of the rainbow is filled with good intentions and lots of shiny new objects. Stay focused, use your moral compass to guide you and follow the yellow brick road.

Image Credit: Pixabay


How to Close the Sickcare AI DI Divide

GUEST POST from Arlen Meyers, M.D.

The digital divide describes the gap between those who have access to broadband, hardware, software and technology support and those who don’t. It’s long been acknowledged that even as the digital industry exploded out of this country, America lived with a “digital divide.” While this is loosely understood as the gap between those who have access to reliable internet service and those who don’t, the true nature and extent of the divide is often under-appreciated. Internet infrastructure is, of course, an essential element of the divide, but infrastructure alone does not necessarily translate into adoption and beneficial use. Local and national institutions, affordability and access, and the digital proficiency of users all play significant roles — and there are wide variations across the United States along each of these.

There is also a sickcare artificial intelligence (AI) dissemination and implementation (DI) divide. Infrastructure is one of many barriers.

As with most things American, there are the haves and the have nots. Here’s how hospitals are categorized. Generally, the smaller ones lack the resources to implement sickcare AI, particularly rural hospitals which are, increasingly, under stress and closing.

So, how do we close the AI-DI divide? Multisystems solutions involve:

  1. Data interoperability
  2. Federated learning: instead of bringing Mohamed to the mountain, bring the mountain to Mohamed (see the sketch after this list)
  3. AI as a service
  4. Better data literacy
  5. IT infrastructure access improvement
  6. Making cheaper AI products
  7. Incorporating AI into a digital health whole product solution
  8. Close the doctor-data scientist divide
  9. Democratize data and AI
  10. Create business model competition for data by empowering patient data entrepreneurs
  11. Teach hospital and practice administrators how to make value based AI vendor purchasing decisions
  12. Encourage physician intrapreneurship and avoid the landmines
  13. Use no-code or low-code tools to innovate
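
To make the federated learning idea in item 2 concrete, here is a heavily simplified sketch of federated averaging (the hospitals, data and model below are hypothetical; a real system like FedAvg involves iterative training rounds, secure aggregation and much more). The point is that only model weights leave each hospital; the patient records never do:

```python
import numpy as np

def local_fit(X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """One hospital's local update: ordinary least squares on local data."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

def federated_average(local_weights: list, n_samples: list) -> np.ndarray:
    """Server step: average local models, weighted by local dataset size."""
    total = sum(n_samples)
    return sum(w * (n / total) for w, n in zip(local_weights, n_samples))

rng = np.random.default_rng(1)
true_w = np.array([2.0, -1.0])

# Three hospitals with different amounts of simulated local data.
weights, sizes = [], []
for n in (500, 200, 1000):
    X = rng.normal(size=(n, 2))
    y = X @ true_w + rng.normal(0, 0.1, n)
    weights.append(local_fit(X, y))  # trained locally; raw data stays put
    sizes.append(n)

global_w = federated_average(weights, sizes)  # only the weights are shared
print(global_w)  # close to [2.0, -1.0] without pooling any patient data
```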

We are still in the early stages of realizing the full potential of sickcare artificial intelligence. However, if we don’t close the AI-DI gaps, a large percentage of patients will never realize the benefits.

Image Credit: Pixabay


AI Has Already Taken Over the World

I don’t know about you, but it’s starting to feel as if machines and Artificial Intelligence (AI) have already taken over the world.

Remember in primary school when everyone tried really hard to impress, or even just to be recognized by, a handful of cool kids?

It’s feeling more and more each day as if the cool kids on the block that we’re most desperate to impress are algorithms and artificial intelligence.

We’re all desperate to get our web pages preferred over others by the algorithms of Google and Bing and are willing to spend real money on Search Engine Optimization (SEO) to increase our chances of ranking higher.

Everyone seems super keen to get their social media posts surfaced by Facebook, Twitter, Instagram, YouTube, TikTok, and even LinkedIn.

In today’s “everything is eCommerce” world, how your business ranks on Google and Bing increasingly can determine whether you’re in business or out of business.

Algorithms Have Become the New Cool Kids on the Block

According to the “Agencies SEO Services Global Market Report 2021: COVID-19 Impact and Recovery to 2030” report from The Business Research Company:

“The global agencies seo services market is expected to grow from $37.84 billion in 2020 to $40.92 billion in 2021 at a compound annual growth rate (CAGR) of 8.1%. The market is expected to reach $83.7 billion in 2025 at a CAGR of 19.6%.”

Think about that for a bit…
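
A quick sanity check of the report’s arithmetic (my own back-of-envelope calculation, rounded to the published figures):

$$\$37.84\text{B} \times 1.081 \approx \$40.9\text{B} \;(2021), \qquad \$40.92\text{B} \times 1.196^{4} \approx \$83.7\text{B} \;(2025)$$

One year of growth at 8.1%, then four years compounding at 19.6%, reproduces the headline numbers, so the forecast is at least internally consistent.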

Companies and individuals are forecast to spend $40 Billion trying to impress the algorithms and artificial intelligence applications of companies like Google and Microsoft in order to get their websites and web pages featured higher in the search engine rankings.

The same can be true for companies and individuals trying to make a living selling on Amazon, Walmart.com and eBay. The algorithms of these companies determine which sellers get preferred placement and as a result can determine which individuals and companies profit and which will march down a path toward bankruptcy.

And then there is another whole industry and gamesmanship surrounding the world of social media marketing.

According to BEROE, the size of the social media marketing market is in excess of $102 Billion.

These are huge numbers that, at least for me, demonstrate that the day that machines and AI take over the world is no longer out there in the future, but is already here.

Machines have become the gatekeepers between you and your customers.

Be afraid, be very afraid.

(insert maniacal laugh here)
