Tag Archives: Artificial Intelligence

Unlocking the Power of Cause and Effect

GUEST POST from Greg Satell

In 2011, IBM’s Watson system beat the best human players on the game show Jeopardy! Since then, machines have shown that they can outperform skilled professionals in everything from basic legal work to diagnosing breast cancer. It seems that machines just get smarter and smarter all the time.

Yet that is largely an illusion. While even a very young human child understands the basic concept of cause and effect, computers rely on correlations. In effect, while a computer can associate the sun rising with the day breaking, it doesn’t understand that one causes the other, which limits how helpful computers can be.

That’s beginning to change. A group of researchers, led by artificial intelligence pioneer Judea Pearl, is working to help computers understand cause and effect based on a new causal calculus. The effort is still in its nascent stages, but if it succeeds we could be entering a new era in which machines not only answer questions, but help us pose new ones.

Observation and Association

Most of what we know comes from inductive reasoning. We make some observations and associate those observations with specific outcomes. For example, if we see animals going to drink at a watering hole every morning, we would expect to find them at the same watering hole in the future. Many animals share this type of low-level reasoning and use it for hunting.

Over time, humans learned how to store these observations as data and that’s helped us make associations on a much larger scale. In the early years of data mining, data was used to make very basic types of predictions, such as the likelihood that somebody buying beer at a grocery store will also want to buy something else, like potato chips or diapers.

The achievement of the last decade or so is that advancements in algorithms, such as neural networks, have allowed us to make much more complex associations. To take one example, systems that have observed thousands of mammograms have learned to identify the ones that show a tumor with a very high degree of accuracy.

However, and this is a crucial point, the system that detects cancer doesn’t “know” it’s cancer. It doesn’t associate the mammogram with an underlying cause, such as a gene mutation or lifestyle choice, nor can it suggest a specific intervention, such as chemotherapy. Perhaps most importantly, it can’t imagine other possibilities and suggest alternative tests.

Confounding Intervention

The reason that correlation is often very different from causality is the presence of something called a confounding factor. For example, we might find a correlation between high readings on a thermometer and ice cream sales and conclude that if we put the thermometer next to a heater, we can raise sales of ice cream.

I know that seems silly, but problems with confounding factors arise in the real world all the time. Data bias is especially problematic. If we find a correlation between certain teachers and low test scores, we might assume that those teachers are causing the low test scores when, in actuality, they may be great teachers who work with problematic students.

Another example is the high degree of correlation between criminal activity and certain geographical areas, where poverty is a confounding factor. If we use zip codes to predict recidivism rates, we are likely to give longer sentences and deny parole to people because they are poor, while those with more privileged backgrounds get off easy.

These are not at all theoretical examples. In fact, they happen all the time, which is why caring, competent teachers can, and do, get fired for those particular qualities and people from disadvantaged backgrounds get mistreated by the justice system. Even worse, as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it’s so important that we design our systems to be auditable, explainable and transparent.
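The thermometer example above can be made concrete with a toy simulation (my own illustration, not from the article): when temperature drives both the thermometer reading and ice cream sales, the two correlate strongly, but intervening on the reading itself, as if we had put the thermometer next to a heater, has no effect on sales.

```python
import random
import statistics

random.seed(0)

def corr(xs, ys):
    # Pearson correlation coefficient
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Observational world: temperature (the confounder) drives both variables.
temps = [random.uniform(0, 35) for _ in range(10_000)]
readings = [t + random.gauss(0, 1) for t in temps]      # thermometer tracks temperature
sales = [20 * t + random.gauss(0, 50) for t in temps]   # sales also track temperature

# Interventional world: we set the reading ourselves (thermometer by a heater),
# but sales still follow the actual weather.
forced = [random.uniform(0, 35) for _ in range(10_000)]
sales2 = [20 * t + random.gauss(0, 50) for t in temps]

print(corr(readings, sales))   # strong: observation picks up the confounder
print(corr(forced, sales2))    # near zero: intervening on the reading does nothing
```

The gap between the two correlations is exactly the gap between association and causation that the article describes.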

Imagining A Counterfactual

Another confusing thing about causation is that not all causes are the same. Some causes are sufficient in themselves to produce an effect, while others are necessary, but not sufficient. Obviously, if we intend to make some progress we need to figure out what type of cause we’re dealing with. The way to do that is by imagining a different set of facts.

Let’s return to the example of teachers and test scores. Once we have controlled for problematic students, we can begin to ask if lousy teachers are enough to produce poor test scores or if there are other necessary causes, such as poor materials, decrepit facilities, incompetent administrators and so on. We do this by imagining counterfactuals, such as “What if there were better materials, facilities and administrators?”
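What “controlling for problematic students” looks like can be sketched in a toy simulation (the numbers and names are my own, not from the article): a high-need student body both lowers scores and makes a school more likely to be staffed by a particular teacher, so a naive comparison blames the teacher, while comparing within each stratum recovers the true effect.

```python
import random

random.seed(1)

# Toy model: "high need" (the confounder) lowers scores by 20 points and
# makes assignment to Teacher B more likely. Teacher B actually ADDS 2 points.
data = []
for _ in range(20_000):
    high_need = random.random() < 0.5
    teacher_b = random.random() < (0.8 if high_need else 0.2)
    score = 70 + (2 if teacher_b else 0) - (20 if high_need else 0) + random.gauss(0, 5)
    data.append((high_need, teacher_b, score))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: Teacher B looks harmful because of the confounder.
naive = mean([s for h, t, s in data if t]) - mean([s for h, t, s in data if not t])

# Adjustment: compare within each stratum, then average the strata.
adjusted = mean([
    mean([s for h, t, s in data if h == stratum and t])
    - mean([s for h, t, s in data if h == stratum and not t])
    for stratum in (False, True)
])

print(round(naive, 1))     # negative: B appears to hurt scores
print(round(adjusted, 1))  # close to +2: B's true causal effect
```

This stratify-and-average step is a simple instance of the “backdoor adjustment” idea from Pearl’s causal calculus.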

Humans naturally imagine counterfactuals all the time. We wonder what would be different if we took another job, moved to a better neighborhood or ordered something else for lunch. Machines, however, have great difficulty with things like counterfactuals, confounders and other elements of causality because there’s been no standard way to express them mathematically.

That, in a nutshell, is what Judea Pearl and his colleagues have been working on over the past 25 years, and many believe that the project is finally ready to bear fruit. Combining humans’ innate ability to imagine counterfactuals with machines’ ability to crunch almost limitless amounts of data could be a real game changer.

Moving Towards Smarter Machines

Make no mistake, AI systems’ ability to detect patterns has proven to be amazingly useful. In fields ranging from genomics to materials science, researchers can scour massive databases and identify associations that a human would be unlikely to detect manually. Those associations can then be studied further to validate whether they are useful or not.

Still, the fact that our machines don’t grasp simple causal relationships, such as the fact that thermometers don’t increase ice cream sales, limits their effectiveness. As we learn how to design our systems to detect confounders and imagine counterfactuals, we’ll be able to evaluate not only the effectiveness of interventions that have been tried, but also those that haven’t, which will help us come up with better solutions to important problems.

For example, in a 2019 study the Congressional Budget Office estimated that raising the national minimum wage to $15 per hour would reduce employment by anywhere from zero to four million workers, based on a number of observational studies. That’s an enormous range. However, if we were able to identify and mitigate confounders, we could narrow down the possibilities and make better decisions.

While still nascent, the causal revolution in AI is already underway. McKinsey recently announced the launch of CausalNex, an open source library designed to identify cause and effect relationships in organizations, such as what makes salespeople more productive. Causal approaches to AI are also being deployed in healthcare to understand the causes of complex diseases such as cancer and evaluate which interventions may be the most effective.

Some look at the growing excitement around causal AI and scoff that it is just common sense. But that is exactly the point. Our historic inability to encode a basic understanding of cause and effect relationships into our algorithms has been a serious impediment to making machines truly smart. Clearly, we need to do better than merely fitting curves to data.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get it delivered to your inbox every week.

Challenges of Artificial Intelligence Adoption, Dissemination and Implementation

GUEST POST from Arlen Meyers, M.D.

Dissemination and Implementation Science (DIS) is a growing research field that seeks to inform how evidence-based interventions can be successfully adopted, implemented, and maintained in health care delivery and community settings.

Here is what you should know about dissemination and implementation.

Sickcare artificial intelligence products and services have a unique set of barriers to dissemination and implementation.

Every sickcare AI entrepreneur will eventually be faced with the task of finding customers willing and able to buy and integrate the product into their facility. But, every potential customer or segment is not the same.

There are differences in:

  1. The governance structure
  2. The process for vetting and choosing a particular vendor or solution
  3. The makeup of the buying group and decision makers
  4. The process customers use to disseminate and implement the solution
  5. Whether or not they are willing to work with vendors on pilots
  6. The terms and conditions of contracts
  7. The business model of the organization when it comes to working with early-stage companies
  8. How stakeholders are educated and trained
  9. When and how end users and stakeholders have input into the decision
  10. The length of the sales cycle
  11. The complexity of the decision-making process
  12. Whether the product is a point solution or platform
  13. Whether the product can be used throughout all parts or just a few of the sickcare delivery network
  14. A transactional approach vs. a partnership and future development one
  15. The service after the sale arrangement

Here is what Sales Navigator won’t tell you.

Here is why ColdLinking does not work.

When it comes to AI product marketing and sales, when you have seen one successful integration, you have seen one process to make it happen, and the success of the dissemination and implementation that creates the promised results will vary from one place to the next.

Do your homework. One size does not fit all.

Image credit: Pixabay

We Must Rethink the Future of Technology

GUEST POST from Greg Satell

The industrial revolution of the 18th century was a major turning point. Steam power, along with other advances in areas like machine tools and chemistry transformed industry from the work of craftsmen and physical labor to that of managing machines. For the first time in world history, living standards grew consistently.

Yet during the 20th century, all of that technology needed to be rethought. Steam engines gave way to electric motors and internal combustion engines. The green revolution and antibiotics transformed agriculture and medicine. In the latter part of the century digital technology created a new economy based on information.

Today, we are on the brink of a new era of innovation in which we will need to rethink technology once again. Much like a century ago, we are developing new, far more powerful technologies that will change how we organize work, identify problems and collaborate to solve them. We will have to change how we compete and even redefine prosperity itself.

The End of the Digital Revolution

Over the past few decades, digital technology has become almost synonymous with innovation. Every few years, a new generation of chips would come out that was better, faster and cheaper than the previous one. This opened up new possibilities that engineers and entrepreneurs could exploit to create new products that would disrupt entire industries.

Yet there are only so many transistors you can cram onto a silicon wafer and digital computing is nearing its theoretical limits. We have just a few generations of advancements left before the digital revolution grinds to a halt. There will be some clever workarounds to stretch the technology a bit further, but we’re basically at the end of the digital era.

That’s not necessarily a bad thing. In many ways, the digital revolution has been a huge disappointment. Except for a relatively brief period in the late nineties and early aughts, the rise of digital technology has been marked by diminished productivity growth and rising inequality. Studies have also shown that some technologies, such as social media, worsen mental health.

Perhaps even more importantly, the end of the digital era will usher in a new age of heterogeneous computing in which we apply different computing architectures to specific tasks. Some of these architectures will be digital, but others, such as quantum and neuromorphic computing, will not be.

The New Convergence

In the 90s, media convergence seemed like a futuristic concept. We consumed information through separate and distinct channels, such as print, radio and TV. The idea that all media would merge into one digital channel just felt unnatural. Many informed analysts at the time doubted that it would ever actually happen.

Yet today, we can use a single device to listen to music, watch videos, read articles and even publish our own documents. In fact, we do these things so naturally we rarely stop to think how strange the concept once seemed. The Millennial generation doesn’t even remember the earlier era of fragmented media.

Today, we’re entering a new age of convergence in which computation powers the physical, as well as the virtual world. We’re beginning to see massive revolutions in areas like materials science and synthetic biology that will reshape massive industries such as energy, healthcare and manufacturing.

The impact of this new convergence is likely to far surpass anything that happened during the digital revolution. The truth is that we still eat, wear and live in the physical world, so innovating with atoms is far more valuable than doing so with bits.

Rethinking Prosperity

It’s a strange anachronism that we still evaluate prosperity in terms of GDP. The measure, developed by Simon Kuznets in 1934, became widely adopted after the Bretton Woods Conference a decade later. It is basically a remnant of the industrial economy, but even back then Kuznets commented, “the welfare of a nation can scarcely be inferred from a measure of national income.”

To understand why GDP is problematic, think about a smartphone, which incorporates many technologies, such as a camera, a video player, a web browser, a GPS navigator and more. Peter Diamandis has estimated that a typical smartphone today incorporates applications that were worth $900,000 when they were first introduced.

So, you can see the potential for smartphones to massively deflate GDP. First, there is the price of the smartphone itself, just a small fraction of what the technology in it would once have cost. Then there is the fact that we save fuel by not getting lost, rarely pay to get pictures developed and often watch media for free. All of this reduces GDP, but makes us better off.

There are better ways to measure prosperity. The UN has proposed a measure that incorporates 9 indicators, the OECD has developed an alternative approach that aggregates 11 metrics, UK Prime Minister David Cameron has promoted a well-being index and even the small city of Somerville, MA has a happiness project.

Yet still, we seem to prefer GDP because it’s simple, not because it’s accurate. If we continue to increase GDP, but our air and water are more polluted, our children less educated and less healthy and we face heightened levels of anxiety and depression, then what have we really gained?

Empowering Humans to Design Work for Machines

Today, we face enormous challenges. Climate change threatens to impose enormous costs on our children and grandchildren. Hyperpartisanship, in many ways driven by social media, has created social strife, legislative inertia and has helped fuel the rise of authoritarian populism. Income inequality, at its highest level since the 1920s, threatens to tear the social fabric to shreds.

Research shows that there is an increasing divide between workers who perform routine tasks and those who perform non-routine tasks. Routine tasks are easily automated. Non-routine tasks are not, but can be greatly augmented by intelligent systems. It is through this augmentation that we can best create value in the new century.

The future will be built by humans collaborating with other humans to design work for machines. That is how we will create the advanced materials, the miracle cures and new sources of clean energy that will save the planet. Yet if we remain mired in an industrial mindset, we will find it difficult to harness the new technological convergence to solve the problems we need to solve.

To succeed in the 21st century, we need to rethink our economy and our technology and begin to ask better questions. How does a particular technology empower people to solve problems? How does it improve lives? In what ways does it need to be constrained to limit adverse effects through economic externalities?

As our technology becomes almost unimaginably powerful, these questions will only become more important. We have the power to shape the world we want to live in. Whether we have the will remains to be seen.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Sickcare AI Field Notes

I recently participated in a conference on Artificial Intelligence (AI) in healthcare. It was the first onsite meeting after 900 days of the pandemic.

Here is a report from the front:

  1. AI has a way to go before it can substitute for physician judgment, intuition, creativity and empathy
  2. There seems to be an inherent conflict between using AI to standardize decisions compared to using it for mass customization. Efforts to develop customized care must be designed around a deep understanding of what happens at the ground level along the patient pathway and must incorporate patient engagement by focusing on such things as shared decision-making, definition of appointments, and self-management, all of which are elements of a “build-to-order” approach.
  3. When it comes to dissemination and implementation, culture eats strategy for lunch.
  4. The majority of the conversations had to do with the technical aspects and use cases for AI. A small amount was about how to get people in your organization to understand and use it.
  5. The goal is to empower clinical teams to collaborate with patient teams and that will take some work. Moving sick care to healthcare also requires changing a sprint mindset to a marathon relay race mindset with all the hazards and risks of dropped handoffs and referral and information management leaks.
  6. AI is a facilitating technology that cuts across many applications, use cases and intended uses in sick care. Some day we might be recruiting medical students, residents and other sick care workers using AI instead of those silly resumes.
  7. The value proposition of AI includes improving workflow and improving productivity
  8. AI requires large, clean data sets regardless of applications
  9. It will take a while to create trust in technology
  10. There needs to be transparency in data models
  11. There is a large repository of data from non-traditional sources that needs to be mined, e.g., social media sites, community-based sites providing tests (such as health clubs and health fairs), as well as post-acute care facilities
  12. AI is enabling both the clinical and business models of value based care
  13. Cloud-based AI is changing diagnostic imaging and pattern recognition, which will change manpower dynamics
  14. There are potential opportunities in AI for quality outcome stratification, cost accounting and pricing of episodes of care, determining risk premiums and optimizing margins for a bundled priced procedure given geographic disparities in quality and cost.
  15. We are in the second era of AI, one based on deep learning vs. rules-based algorithms
  16. Value based care requires care coordination, risk stratification, patient centricity and managing risk
  17. Machine learning is being used, like Moneyball, to pick startup winners and losers, with a dose of high touch.
  18. It is encouraging to see more and more doctors attending and speaking at these kinds of meetings and lending a much needed perspective and reality check to technologists and non-sick care entrepreneurs. There were few healthcare executives besides those who were invited to be on panels.
  19. Overcoming the barriers to AI in sick care has mostly to do with changing behavior: not dwelling on the technicalities but, rather, focusing on the jobs that doctors need to get done.
  20. The costs of AI are often unaffordable, particularly for small, independent practitioners, especially when bundled with crippling EMR expenses. Moore’s law has not yet impacted medicine
  21. The promise of using AI to get more done with less conflicts with the paradox of productivity
  22. Top-of-mind problems to be solved were how to increase revenues, cut costs, fill the workforce pipelines and address burnout and behavioral health problems among employees and patients with scarce resources.
  23. Nurses, pharmacists, public health professionals and veterinarians were underrepresented
  24. Payers were scarce
  25. Patients were scarce
  26. Students, residents and clinicians were looking for ways to get side gigs, non-clinical careers and exit ramps if need be.
  27. 70% of AI applications are in radiology
  28. AI is migrating from shiny to standard, running in the background to power diverse remote care modalities
  29. Chronic disease management and behavioral health have replaced infectious disease as the global care management challenges
  30. AI education and training in sickcare professional schools is still woefully absent but international sickcare professional schools are filling the gaps
  31. Process and workflow improvements are a necessary part of digital and AI transformation

At its core, AI is part of a sick care eco-nervous system “brain” that is designed to change how doctors and patients think, feel and act as part of continuous behavioral improvement. Outcomes are irrelevant without impact.

AI is another facilitating technology that is part and parcel of almost every aspect of sick care. Like other shiny new objects, it remains to be seen how much value it actually delivers on its promise. I look forward to future conferences where we will discuss how, not if, to use AI and compare best practices and results with each other, not fairy tales.

Should You Have a Department of Artificial Intelligence?

GUEST POST from Arlen Meyers, M.D.

Several hospitals, academic medical centers and medical schools are creating artificial intelligence organizational centers, institutes and programs. Examples are Stanford, the University of Colorado, Children’s Hospital of Orange County and Duke.

If you are contemplating doing the same, think about the best organizational structure. There’s a lot of debate about where AI and analytics capabilities should reside within organizations. Often leaders simply ask, “What organizational model works best?” and then, after hearing what succeeded at other companies, do one of three things: consolidate the majority of AI and analytics capabilities within a central “hub”; decentralize them and embed them mostly in the business units (“the spokes”); or distribute them across both, using a hybrid (“hub-and-spoke”) model. We’ve found that none of these models is always better than the others at getting AI up to scale; the right choice depends on a firm’s individual situation.

The decision will depend on:

  1. What problems are you trying to solve? Form follows function.
  2. What resources do you have? People, money, processes, infrastructure, IP protection?
  3. What is your level of digital transformation?
  4. What is the level of your organizational innovation readiness?
  5. What are the underlying hypotheses of your intrapreneurial business model canvas and what evidence do you have that they are valid?
  6. How will you overcome the barriers to dissemination and implementation?
  7. What processes do you have in place to scale?
  8. Do you have the right people?
  9. Do you have innovation silos and, if so, how will you break them down?

10. How will you measure results? Dr. Anthony Chang, the co-founder of the American Board of Artificial Intelligence, suggests that the following are some helpful metrics to measure the artificial intelligence capabilities of a health system in the context of an individual AI project:

AI Project Score

Projects that involve machine learning and artificial intelligence, whether clinical or administrative, can be followed in stages, with each stage scored 1 point to a maximum of 5 points, to keep track of progress as well as maintain momentum:

Stage 1: Ideation. The project is first discussed and brought to a regular meeting for input from all stakeholders. This is perhaps the most important part of an AI project that is often not regularly done with enough discussion and consideration.

Stage 2: Preparation. After approval from the group, the data access and curation takes place in order to perform the ML/AI steps that ensue. The team should appreciate that this stage takes the most effort and will require sufficient resources.

Stage 3: Operation. After the data is curated and managed, this stage entails a collaborative effort during the feature engineering and selection process. Using the ML/AI tools, the team then creates the algorithms that will lead to the models that will be used later on in the project.

Stage 4: Presentation. Upon completion of the model with real world data, the project is presented in front of the group and depending on the nature of the project, it is either presented only or is also presented at a regional or national meeting or advanced to be published in a journal.

Stage 5: Implementation. Beyond the presentation and publication, it is essential for the AI project to be implemented in a real-world setting using real-world data. The project still requires continual surveillance and maintenance, as models and data often fatigue.
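As a minimal sketch of the rubric above (the function and stage names are my own, not Dr. Chang’s), the five-stage score could be tallied like this:

```python
# One point per completed stage, to a maximum of 5, per the rubric above.
STAGES = ["ideation", "preparation", "operation", "presentation", "implementation"]

def project_score(completed):
    """Return 0-5: one point for each of the five stages completed."""
    unknown = set(completed) - set(STAGES)
    if unknown:
        raise ValueError(f"unknown stages: {sorted(unknown)}")
    return len(set(completed))

print(project_score({"ideation", "preparation", "operation"}))  # 3
```

Tracking the score per project over time gives a simple momentum metric across a portfolio of AI initiatives.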

11. Are you connected to the other parts of the healthcare AI ecosystem?

12. Are you prepared to overcome the ethical, legal, social, economic and privacy issues?

Feeding the organizational beasts that are resistant to change is hard. They have an insatiable appetite. Be sure your pantry is well stocked.

Image credit: Pixabay

Three Steps to Digital and AI Transformation

GUEST POST from Arlen Meyers, M.D.

In his book, The Four Steps to the Epiphany, Steve Blank described what has become the gospel of lean startup methodologies: customer discovery, customer validation, customer creation and company building.

The path to sickcare digital transformation is a bit shorter, but certainly no less difficult and plagued by failure: Personal innovation readiness, organizational innovation readiness and digital/AI transformation.


Are you prepared to innovate? Here’s what you should know about innovation.

Before you start, prepare yourself with these things:


Starting down the entrepreneurship path means that you will not only have to change your mind about things; more importantly, you will have to change your mindset. Don’t make these rookie mindset mistakes. Here’s what it means to have an entrepreneurial mindset. There is a difference between a clinical and an entrepreneurial mindset. Innovation starts with the right mindset.

Here is how to cope in a VUCA world.


Organizational behavior gurus have been studying how to motivate employees for a very long time. Most have failed.

Indeed, most of your ideas will fail. Consequently, you will need a source of intrinsic motivation to keep you going. Make it personal, but don’t take it personally. Find the right mentors and sponsors to keep you on track and support you when you are down. Create a personal advisory board. Develop these entrepreneurial habits. Practice the power of negative entrepreneurial thinking.


Meaning should drive what you are about to do. Practice virtuous entrepreneurship and find your ikigai. Instead of starting with the end in mind, start with the why in mind. Prune. Let go of the banana.


Once these attitudes are in place, then focus on building your entrepreneurial knowledge, skills, behaviors and competencies. Take a financial inventory. Start accumulating the physical, human and emotional resources you will need to begin and sustain your journey. In addition to knowledge, you will need resources, networks, mentors, peer support and non-clinical career guidance.


What are some standards and metrics you can use to measure your innovation readiness, e.g., in the use of artificial intelligence in medicine?

The American National Standards Institute (ANSI) has released a new report that reflects stakeholder recommendations and opportunities for greater coordination of standardization for artificial intelligence (AI) in healthcare. The report, “Standardization Empowering AI-Enabled Systems in Healthcare,” reflects feedback from a 2020 ANSI leadership survey and national workshop, and pinpoints foundational principles and potential next steps for ANSI to work with standards developing organizations, the National Institute of Standards and Technology, other government agencies, industry, and other affected stakeholders.

The newly developed Medical Artificial Intelligence Readiness Scale for Medical Students (MAIRS-MS) was found to be a valid and reliable tool for evaluating and monitoring the perceived readiness of medical students for AI technologies and applications. Medical schools may add ‘a physician training perspective that is compatible with AI in medicine’ to their curricula by using MAIRS-MS. Medical and health science education institutions could benefit from this scale as a curriculum development tool, both for learner needs assessment and for gauging participants’ perceived readiness at the end of a course.

As an important step to ensure successful integration of AI and avoid unnecessary investments and costly failures, better consideration should be given to: (1) Needs and added-value assessment; (2) Workplace readiness: stakeholder acceptance and engagement; (3) Technology-organization alignment assessment and (4) Business plan: financing and investments. In summary, decision-makers and technology promoters should better address the complexity of AI and understand the systemic challenges raised by its implementation in healthcare organizations and systems.


Improvement readiness is not the same as innovation readiness.

Gifford Pinchot, who originated the term “intrapreneur”, has suggested that you rate your organization in several domains to see whether your innovation future looks bright or bleak:

  1. Transmission of vision and strategic intent
  2. Tolerance for risk, failure and mistakes
  3. Support for intrapreneurs
  4. Managers who support innovation
  5. Empowered cross functional teams
  6. Decision making by the doers
  7. Discretionary time to innovate
  8. Attention on the new, not the now
  9. Self-selection
  10. No early handoffs to managers
  11. Internal boundary crossing
  12. Strong organizational culture of support
  13. Focus on customers
  14. Choice of internal suppliers
  15. Measurement of innovation
  16. Transparency and truth
  17. Good treatment of people
  18. Ethical and professional
  19. Swinging for singles, not home runs
  20. Robust external open networks

If you ask a sample of people to rate these domains for your company on a scale of 1 to 10, don't be surprised if the average falls somewhere between 2 and 4. Few organizations, you see, are truly innovative or have a truly innovative culture. Most don't even think about how to bridge the now with the new, let alone measure it.
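As a back-of-the-envelope illustration, this kind of audit reduces to simple averaging across raters and domains. The function and sample ratings below are hypothetical, not Pinchot's own instrument:

```python
# Hypothetical scoring sketch for a Pinchot-style readiness survey:
# each respondent rates the domains on a 1-10 scale, and the overall
# readiness score is the mean of all ratings given.

def readiness_score(ratings_by_person):
    """Average all 1-10 ratings across respondents and domains."""
    all_ratings = [r for person in ratings_by_person for r in person]
    return sum(all_ratings) / len(all_ratings)

# Invented responses from two raters on four of the domains
survey = [
    [3, 2, 4, 3],
    [2, 3, 2, 4],
]
score = readiness_score(survey)  # 2.875, inside the typical 2-4 band
```

A more careful audit would report per-domain averages too, since a strong culture of support can mask a total absence of, say, discretionary time to innovate.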

Do a cultural audit. Creating a culture of innovation must include SALT and PRICES:


  • Process
  • Recognition
  • Incentives
  • Champions
  • Encouragement
  • Structure

Here is a rubric that might help get you started.

Learn from companies in other industries that have transformed. Here are some tips from Levi Strauss.


Develop and deploy the 6Ps:

  1. Problem seeking
  2. Problem solving
  3. People
  4. Platform/infrastructure
  5. Process/Project management
  6. Performance indicators that meet clinical, operational, and business objectives and achieve the Quintuple Aim

Here are some sickcare digital transformation tips.

The path to the end of the rainbow is filled with good intentions and lots of shiny new objects. Stay focused, use your moral compass to guide you and follow the yellow brick road.

Image Credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly. Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

How to Close the Sickcare AI DI Divide


GUEST POST from Arlen Meyers

The digital divide describes those having or not having access to broadband, hardware, software and technology support. It’s long been acknowledged that even as the digital industry exploded out of this country, America lived with a “digital divide.” While this is loosely understood as the gap between those who have access to reliable internet service and those who don’t, the true nature and extent of the divide is often under-appreciated. Internet infrastructure is, of course, an essential element of the divide, but infrastructure alone does not necessarily translate into adoption and beneficial use. Local and national institutions, affordability and access, and the digital proficiency of users, all play significant roles — and there are wide variations across the United States along each of these.

There is also a sickcare artificial intelligence (AI) dissemination and implementation (DI) divide. Infrastructure is one of many barriers.

As with most things American, there are the haves and the have nots. Here’s how hospitals are categorized. Generally, the smaller ones lack the resources to implement sickcare AI, particularly rural hospitals which are, increasingly, under stress and closing.

So, how do we close the AI-DI divide? Multisystems solutions involve:

  1. Data interoperability
  2. Federated learning: instead of bringing Mohamed to the mountain, bring the mountain to Mohamed
  3. AI as a service
  4. Better data literacy
  5. IT infrastructure access improvement
  6. Making cheaper AI products
  7. Incorporating AI into a digital health whole product solution
  8. Closing the doctor-data scientist divide
  9. Democratizing data and AI
  10. Creating business model competition for data by empowering patient data entrepreneurs
  11. Teaching hospital and practice administrators how to make value-based AI vendor purchasing decisions
  12. Encouraging physician intrapreneurship and avoiding the landmines
  13. Using no-code or low-code tools to innovate
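The federated learning item in the list above deserves a word of illustration: instead of pooling patient records centrally, each hospital trains a model locally and only model parameters leave the building. Here is a minimal sketch of the parameter-averaging step; the hospital data, weights, and function names are invented for illustration, and a real deployment would use a framework such as TensorFlow Federated or Flower:

```python
# Minimal sketch of federated averaging (FedAvg-style): aggregate client
# model parameters, weighted by each client's local dataset size.
# No patient data is exchanged, only the trained weights.

def federated_average(client_weights, client_sizes):
    """Weighted average of client model parameters by local dataset size."""
    total = sum(client_sizes)
    n_params = len(client_weights[0])
    averaged = [0.0] * n_params
    for weights, size in zip(client_weights, client_sizes):
        for i, w in enumerate(weights):
            averaged[i] += w * (size / total)
    return averaged

# Three hypothetical hospitals with different data volumes
hospital_models = [[0.2, 1.0], [0.4, 0.8], [0.6, 0.6]]
hospital_sizes = [1000, 3000, 6000]
global_model = federated_average(hospital_models, hospital_sizes)  # [0.5, 0.7]
```

The appeal for small and rural hospitals is that they can contribute to, and benefit from, a shared model without needing the data infrastructure to centralize records.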

We are still in the early stages of realizing the full potential of sickcare artificial intelligence. However, if we don’t close the AI-DI gaps, a large percentage of patients will never realize the benefits.

Image Credit: Pixabay


AI Has Already Taken Over the World


I don’t know about you, but it’s starting to feel as if machines and Artificial Intelligence (AI) have already taken over the world.

Remember in primary school when everyone tried really hard to impress, or even just to be recognized by, a handful of cool kids?

It’s feeling more and more each day as if the cool kids on the block that we’re most desperate to impress are algorithms and artificial intelligence.

We’re all desperate to get our web pages preferred over others by the algorithms of Google and Bing and are willing to spend real money on Search Engine Optimization (SEO) to increase our chances of ranking higher.

Everyone seems super keen to get their social media posts surfaced by Facebook, Twitter, Instagram, YouTube, TikTok, and even LinkedIn.

In today’s “everything is eCommerce” world, how your business ranks on Google and Bing increasingly can determine whether you’re in business or out of business.

Algorithms Have Become the New Cool Kids on the Block

According to the “Agencies SEO Services Global Market Report 2021: COVID-19 Impact and Recovery to 2030” report from The Business Research Company:

“The global agencies seo services market is expected to grow from $37.84 billion in 2020 to $40.92 billion in 2021 at a compound annual growth rate (CAGR) of 8.1%. The market is expected to reach $83.7 billion in 2025 at a CAGR of 19.6%.”

Think about that for a bit…

Companies and individuals are forecast to spend $40 billion trying to impress the algorithms and artificial intelligence applications of companies like Google and Microsoft in order to get their websites and web pages featured higher in the search engine rankings.

The same can be true for companies and individuals trying to make a living selling on Amazon, Walmart.com and eBay. The algorithms of these companies determine which sellers get preferred placement and as a result can determine which individuals and companies profit and which will march down a path toward bankruptcy.

And then there is another whole industry and gamesmanship surrounding the world of social media marketing.

According to BEROE, the size of the social media marketing market is in excess of $102 billion.

These are huge numbers that, at least for me, demonstrate that the day that machines and AI take over the world is no longer out there in the future, but is already here.

Machines have become the gatekeepers between you and your customers.

Be afraid, be very afraid.

(insert maniacal laugh here)


Innovation or Not – Amazon Echo Frames

Amazon Echo Frames

Amazon announced yesterday that it is making its Echo Frames available to the general public. Amazon first announced Echo Frames over a year ago. But after extensive testing with a limited group of users over the past year, Amazon has decided that Echo Frames are ready for prime time and is making them available to anyone who wants a pair.

Amazon doesn’t green light every experiment that they invest in, as they simultaneously announced an unceremonious end to the Amazon Echo Loop Ring.

Amazon Echo Frames are very much what they sound like, a pair of $249.99 eyeglass frames that pair with your Android 9.0+ or iOS 13.6+ smartphone to allow you to give voice commands to that supercomputer you carry around in your pocket every day. Here is the demo video from last year:

You might be asking yourself – Why is Amazon making an iOS version?

It is kind of surprising given the rumors indicating that Apple will be launching its own Siri glasses at some point, but Amazon has decided to instead allow Echo Frames to tap into Google Assistant or Siri if people so choose.

It is important to note that Echo Frames are NOT smartglasses or even augmented reality glasses, but instead a Zero UI extension of your smartphone and an audio system for text messages and the occasional phone call, allowing you to cut down on your screen time and keep your smartphone tucked away more of the day.

It will be interesting to see whether these catch on or whether people opt for in-ear solutions like Google Pixel Buds or Apple's AirPods Pro. I guess only time will tell.

So, what do you think? Innovation or not?

Accelerate your change and transformation success


Just Walk Out Groceries — by Amazon


Amazon Go is going big – grocery store big. Today it was revealed that Amazon has opened up a new Amazon Go that is four times (4x) bigger than previous Amazon Go stores. What’s new?

Well, this new Amazon Go store has produce, packaged meats, an expanded frozen food section, sundries like paper towels, and more!

This is a big step forward for Amazon, and it will stretch the company's technology to the breaking point. Amazon is looking not only to explore what's possible, but to prove the technology to the point where it could become another revenue pillar, built by licensing it to other convenience store and grocery store chains.

The Amazon Go approach, should it expand, also puts even more of the 3 million grocery store jobs in the United States at risk. This 3 million jobs number is already declining because of self-checkout and Walmart's robotic inventory systems, among other pressures.

Is the Amazon Go approach a good thing?

Do we really all want to live in a world where packages show up at the door or food can be obtained in a grocery store without talking to anyone?

Americans are becoming increasingly lonely and isolated. I could include dozens of supporting links to back this up, but here is a good one:


The grocery store has become one of the last remaining places where someone will actually speak to you, but self-checkout and technologies like Amazon Go look to stamp out this human interaction too!

But even though there are still humans in the grocery store, the level of human interaction seems to be fading there too as younger, non-unionized workers replace older unionized workers in grocery stores. Has this been your experience?

What’s next the barbershop and the hairdresser?

And can our society survive any more isolation?

