Making Decisions in Uncertainty

This 25-Year-Old Tool Actually Works

GUEST POST from Robyn Bolton

Just as we got used to VUCA (volatile, uncertain, complex, ambiguous), futurists now claim “the world is BANI now.” BANI (brittle, anxious, nonlinear, incomprehensible) is much worse than VUCA and reflects “the fractured, unpredictable state of the modern world.”

Not to get too Gen X on the futurists who coined and are spreading this term but…shut up.

Is the world fractured and unpredictable? Yes.

Does it feel brittle? Are we more anxious than ever? Are things changing at exponential speed, requiring nonlinear responses? Does the world feel incomprehensible? Yes, to all.

Naming a problem is the first step in solving it. The second step is falling in love with the problem so that we become laser focused on solving it. BANI does the first but fails at the second. It wallows in the problem without proposing a path forward. And as the sign says, “Ain’t nobody got time for this.”

(Re)Introducing the Cynefin Framework

The Cynefin framework recognizes that leadership and problem-solving must be contextual to be effective. Using the Welsh word for “habitat,” the framework is a tool to understand and name the context of a situation and identify the approaches best suited for managing or solving the situation.

It’s grounded in the idea that every context – situation, challenge, problem, opportunity – exists somewhere on a spectrum between Ordered and Unordered. At the Ordered end of the spectrum, cause and effect are obvious and immediate and the path forward is based on objective, immutable facts. Unordered contexts, however, have no obvious or immediate relationship between cause and effect, and moving forward requires people to recognize patterns as they emerge.

Both VUCA and BANI point out the obvious – we’re spending more time on the Unordered end of the spectrum than ever. Unlike the acronyms, Cynefin helps leaders decide and act.

Five Contexts, Five Ways Forward

The Cynefin framework identifies five contexts, each with its own best practices for making decisions and progress.

On the Ordered end of the spectrum:

  • Simple contexts are characterized by stability and obvious, undisputed right answers. Here, patterns repeat, and events are consistent. This is where leaders rely on best practices to inform decisions and delegation, and on direct communication to move their teams forward.
  • Complicated contexts have many possible right answers and the relationship between cause and effect isn’t known but can be discovered. Here, leaders need to rely on diverse expertise and be particularly attuned to conflicting advice and novel ideas to avoid making decisions based on outdated experience.

On the Unordered end of the spectrum:

  • Complex contexts are filled with unknown unknowns, many competing ideas, and unpredictable relationships between cause and effect. The most effective leadership approach in this context is one that is deeply uncomfortable for most leaders but familiar to innovators – letting patterns emerge. Using small-scale experiments and high levels of collaboration, diversity, and dissent, leaders can accelerate pattern-recognition and place smart bets.
  • Chaotic contexts are fraught with tension. There are no right answers or clear cause and effect. There are too many decisions to make and not enough time. Here, leaders often freeze or make big, bold decisions. Neither is wise. Instead, leaders need to think like emergency responders and respond rapidly to re-establish order where possible, bringing the situation into a Complex state rather than trying to solve everything at once.

The final context is Disorder. Here leaders argue, multiple perspectives fight for dominance, and the organization is divided into factions. Resolution requires breaking the context down into smaller parts that fit one of the four previous contexts and addressing them accordingly.

The Only Way Out is Through

Our VUCA/BANI world isn’t going to get any simpler or easier. And fighting it, freezing, or fleeing isn’t going to solve anything. Organizations need leaders with the courage to move forward and the wisdom and flexibility to do so in a way that is contextually appropriate. Cynefin is their map.

Image credit: Pexels


You Must Accept That People Are Irrational

GUEST POST from Greg Satell

For decades, economists have been obsessed with the idea of “enlightened self-interest,” building elaborate models based on the assumption that people make rational choices. Business and political leaders have used these models to shape competitive strategies, compensation, tax policies and social services among other things.

It’s clear that the real world is far more complex than that. Consider the prisoner’s dilemma, a famous thought experiment in which individuals acting in their self-interest make everyone worse off. In a wide array of real world and experimental contexts, people will cooperate for the greater good rather than pursue pure self-interest.

We are wired to cooperate as well as to compete. Identity and dignity will guide our actions even more than the prospect of loss or gain. While business schools have trained generations of managers to assume that they can optimize results by designing incentives, the truth is that leaders who can forge a sense of shared identity and purpose have the advantage.

Overcoming The Prisoner’s Dilemma

John von Neumann was a frustrated poker player. Despite having one of the best mathematical minds in history that could probably calculate the odds better than anyone on earth, he couldn’t tell whether other players were bluffing or not. It was his failure at poker that led him to create game theory, which calculates the strategies of other players.

As the field developed, it was expanded to include cooperative games in which players could choose to collaborate and even form coalitions with each other. That led researchers at RAND to create the prisoner’s dilemma, in which two suspects are being interrogated separately and each offered a reduced sentence to confess.

Prisoner's Dilemma

Here’s how it works: If both prisoners cooperate with each other and neither confesses, they each get one year in prison on a lesser charge. If one confesses, he gets off scot-free, while his partner gets 5 years. If they both rat each other out, then they get three years each—collectively the worst outcome of all.

Notice how, from a rational viewpoint, the best strategy is to defect. No matter what one guy does, the other one is better off ratting him out. If both pursue self-interest, they are made worse off. It’s a frustrating problem. Game theorists call it a Nash equilibrium—one in which nobody can improve their position by a unilateral move. In theory, you’re basically stuck.

Yet in a wide variety of real-world contexts, ranging from the survival strategies of guppies to military alliances, cooperation is credibly maintained. In fact, there are a number of strategies that have proved successful in overcoming the prisoner’s dilemma. One, called tit-for-tat, relies on credible punishments for defections. Even more effective, however, is building a culture of shared purpose and trust.
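
The arithmetic behind this can be made concrete. Here is a minimal Python sketch using the payoff values from the example above; the strategy functions and the short repeated match are illustrative additions, not part of the original article, but they show why defection dominates the one-shot game and how tit-for-tat sustains cooperation in a repeated one.

```python
# Payoffs from the example above, expressed as years in prison (lower is better).
# Key: (my_move, their_move) -> (my_years, their_years)
PAYOFFS = {
    ("cooperate", "cooperate"): (1, 1),   # neither confesses
    ("cooperate", "defect"):    (5, 0),   # I stay silent, my partner confesses
    ("defect",    "cooperate"): (0, 5),   # I confess, my partner stays silent
    ("defect",    "defect"):    (3, 3),   # both confess: the worst collective outcome
}

def my_years(my_move: str, their_move: str) -> int:
    return PAYOFFS[(my_move, their_move)][0]

# One-shot game: whatever the other player does, defecting leaves me better off,
# which is why mutual defection is the Nash equilibrium.
for their_move in ("cooperate", "defect"):
    assert my_years("defect", their_move) < my_years("cooperate", their_move)

# Repeated game: tit-for-tat cooperates first, then mirrors the opponent's last move.
def tit_for_tat(opponent_history: list) -> str:
    return opponent_history[-1] if opponent_history else "cooperate"

def always_defect(opponent_history: list) -> str:
    return "defect"

def play(strategy_a, strategy_b, rounds: int = 10):
    history_a, history_b, years_a, years_b = [], [], 0, 0
    for _ in range(rounds):
        move_a = strategy_a(history_b)
        move_b = strategy_b(history_a)
        ya, yb = PAYOFFS[(move_a, move_b)]
        years_a, years_b = years_a + ya, years_b + yb
        history_a.append(move_a)
        history_b.append(move_b)
    return years_a, years_b

print(play(tit_for_tat, tit_for_tat))     # (10, 10): sustained cooperation
print(play(tit_for_tat, always_defect))   # (32, 27): defection "wins" the match,
                                          # but both do far worse than mutual cooperation
```

The sketch captures the point of the paragraph above: in a one-shot game defection dominates, but in repeated play a strategy that credibly punishes defection makes cooperation the better long-run bet.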

Kin Selection And Identity

Evolutionary psychology is a field very similar to game theory. It employs mathematical models to explain what types of behaviors provide the best evolutionary outcomes. At first, this may seem like the utilitarian approach that economists have long-employed, but when you combine genetics with natural selection, you get some surprising answers.

Consider the concept of kin selection. From a purely selfish point of view, there is no reason for a mother to sacrifice herself for her child. However, from an evolutionary point of view, it makes perfect sense for parents to put their kids first. Groups who favor children are more likely to grow and outperform groups who don’t.

This is what Richard Dawkins meant when he called genes selfish. If we look at things from our genes’ point of view, it makes perfect sense for them to want us to sacrifice ourselves for children, who are more likely to be able to propagate our genes than we are. The effect would logically also apply to others, such as cousins, that likely carry our genes.

Researchers have also applied the concept of kin selection to other forms of identity that don’t involve genes, but ideas (also known as memes) in examples such as patriotism. When it comes to people or ideas we see as an important part of our identity, we tend to take a much more expansive view of our interests than traditional economic models would predict.

Cultures of Dignity

It’s not just identity that figures into our decisions, but dignity as well. Consider the ultimatum game. One player is given a dollar and needs to propose how to split it with another player. If the offer is accepted, both players get the agreed upon shares. If it is not accepted, neither player gets anything.

If people acted purely rationally, offers as low as a penny would be routinely accepted. After all, a penny is better than nothing. Yet decades of experiments across different cultures show that most people do not accept a penny. In fact, offers of less than 30 cents are routinely rejected as unfair because they offend people’s dignity and sense of self.

Results from the ultimatum game are not uniform, but vary across cultures, and more recent research suggests why. In a study in which a similar public goods game was played, researchers found that cooperative—as well as punitive—behavior is contagious, spreading through three degrees of interactions, even between people who haven’t had any direct contact.

Whether we know it or not, we are constantly building ecosystems of norms that reward and punish behavior according to expectations. If we see the culture we are operating in as trusting and generous, we are much more likely to act collaboratively. However, if we see our environment as cutthroat and greedy, we’ll tend to model that behavior in the same way.

Forging Shared Identity And Shared Purpose

In an earlier age, organizations were far more hierarchical. Power rested at the top. Information flowed up, orders went down, work got done and people got paid. Incentives seemed to work. You could pay more and get more. Yet in today’s marketplace, that’s no longer tenable because the work we need done is increasingly non-routine.

That means we need people to do more than merely carry out tasks; they need to put all of their passion and creativity into their work to perform at a high level. They need to collaborate effectively in teams and take pride in the impact their efforts produce. To achieve that at an organizational level, leaders need to shift their mindsets.

As David Burkus explained in his TED Talk, humans are prosocial. They are vastly more likely to perform when they understand and identify with who their work benefits than when they are given financial incentives or fed some grandiose vision. Evolutionary psychologists have long established that altruism is deeply embedded in our sense of tribe.

The simple truth is that we can no longer coerce people to do what we want with Rube Goldberg-like structures of carrots and sticks, but must inspire people to want what we want. Humans are not purely rational beings, responding to stimuli as if they were vending machines that spit out desired behaviors when the right buttons are pushed, but are motivated by identity and dignity more than anything else.

Leadership is not an algorithm, but a practice of creating meaning through relationships of trust in the context of a shared purpose.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay







AI, Cognitive Obesity and Arrested Development

GUEST POST from Pete Foley

Some of the biggest questions of our age are whether AI will ultimately benefit or hurt us, and how big its effect will be.

And that of course is a problem with any big, disruptive technology.  We want to anticipate how it will play out in the real world, but our forecasts are rarely very accurate, and all too often miss a lot of the more important outcomes. We often don’t anticipate its killer applications, how it will evolve or co-evolve with other emergent technologies, or all of the side effects and ‘off label’ uses that come with it.  And the bigger the potential impact new tech has, and the broader the potential applications, the harder prediction becomes.  The reality is that in virtually every case, it’s not until we set innovation free that we find its full impact, good, bad or indifferent.

Pandora’s Box

And that can of course be a sizable concern.  We have to open Pandora’s Box in order to find out what is inside, but once open, it may not be possible to close it again.   For AI, the potential scale of its impact makes this particularly risky. It also makes any meaningful regulation really difficult. We cannot regulate what we cannot accurately predict. And if we try, we risk not only missing our target, but also creating unintended consequences and distorting ‘innovation markets’ in unexpected, potentially negative ways.

So it’s not surprising there is a lot of discussion around what AI will or will not do. How will it affect jobs, the economy, security, and mental health? Will it ‘pull’ a Skynet, turn rogue and destroy humanity? Will it simply replace human critical thinking to the point where it rules us by default? Or will it ultimately fizzle out to some degree, and become a tool in a society that looks a lot like today, rather than revolutionizing it?

I don’t even begin to claim to predict the future with any accuracy, for all of the reasons mentioned above. But as a way to illustrate how complex an issue this is, I’d like to discuss a few less talked about scenarios.

1.  Less obvious issues:  Obviously AI comes with potential for enormous benefits and commensurate problems.  It’s likely to trigger an arms race between ‘good’ and ‘bad’ applications, and that of itself will likely be a moving target.  An obvious, oft-discussed potential issue is of course the ‘Terminator Scenario’ mentioned above.  That’s not completely far-fetched, especially with recent developments in AI self-preservation and scheming that I’ll touch on later. But there are plenty of other potential, if less extreme, pitfalls, many of which involve AI amplifying and empowering bad behavior by humans.  The speed and agility AI hands to hackers, hostile governments, black-hats, terrorists and organized crime vastly enhances their capability for attacks on infrastructure, mass fraud or worse. And perhaps more concerning, there’s the potential for AI to democratize cyber crime, and make it accessible to a large number of ‘petty’ criminals who until now have lacked the resources to engage in this area. And when the crime base expands, so does the victim base. Organizations or individuals who were too small to be targeted for ransomware when it took huge resources to create will presumably become more attractive targets as AI allows similar code to be built in hours by people who possess limited coding skills.

And all of this of course adds another regulation challenge. The last thing we want to do is slow legitimate AI development via legislation, while giving free rein to illegitimate users, who presumably will be far less likely to follow regulations. If the arms race mentioned above occurs, the last thing we want to do is unintentionally tip the advantage to the bad guys!

Social Impacts

But AI also has the potential to be disruptive in more subtle ways.  If the internet has taught us anything, it is that how the general public adopts technology, and how big tech monetizes it, matter a lot. But this is hard to predict.  Some of the Internet’s biggest negative impacts have derived from largely unanticipated damage to our social fabric.  We are still wrestling with its impact on social isolation, mental health, cognitive development and our vital implicit skill-set. To the last point, simply deferring mental tasks to phones and computers means some cognitive muscles lack exercise and atrophy, while reductions in human-to-human interaction depreciate our emotional and social intelligence.

1. Cognitive Obesity.  The human brain evolved over tens of thousands, arguably millions, of years (depending upon where you start measuring our hominid history).  But 99% of that evolution was characterized by slow change, and occurred in the context of limited resources, limited access to information, and relatively small social groups.  Today, as the rate of technological innovation explodes, our environment is vastly different from the one our brain evolved to deal with.  And that gap between us and our environment is widening rapidly, as the world is evolving far faster than our biology.  Of course, as mentioned above, the nurture part of our cognitive development does change with changing context, so we do course correct to some degree, but our core DNA cannot, and that has consequences.

Take the current ‘obesity epidemic’.  We evolved to leverage limited food resources, and to maximize opportunities to stock up on calories when they occurred.  But today, faced with near infinite availability of food, we struggle to control our scarcity instincts. As a society, we eat far too much, with all of the health issues that brings with it. Even when we are cognitively aware of the dangers of overeating, we find it difficult to resist our implicit instincts to gorge on more food than we need.  The analogy to information is fairly obvious. The internet brought us near infinite access to information and ‘social connections’.  We’ve already seen the negative impact this can have, contributing to societal polarization, loss of social skills, weakened emotional intelligence, isolation, mental health ‘epidemics’ and much more. It’s not hard to envisage these issues growing as AI increases the power of the internet, while also amplifying the seduction of virtual environments.  Will we therefore see a cognitive obesity epidemic as our brain simply isn’t adapted to deal with near infinite resources? Instead of AI turning us all into hyper productive geniuses, will we simply gorge on less productive content, be it cat videos, porn or manipulative but appealing memes and misinformation? Instead of it acting as an intelligence enhancer, will it instead accelerate a dystopian Brave New World, where massive data centers gorge on our common natural resources primarily to create trivial entertainment?

2. Amplified Intelligence.  Even in the unlikely event that access to AI is entirely democratic, it’s guaranteed that its benefits will not be. Some will leverage it far more effectively than others, creating significant risk of accelerating social disparity.  While many will likely gorge unproductively as described above, others will be more disciplined, more focused, and hence secure more advantage.  To return to the obesity analogy, it’s well documented that obesity is far more prevalent in lower income groups. It’s hard not to envisage that productive leverage of AI will follow a similar pattern, widening disparities within and between societies, with all of the issues and social instability that comes with that.

3. Arrested Development.  We all know that ultimately we are products of both nature and nurture. As mentioned earlier, our DNA evolves slowly over time, but how it is expressed in individuals is impacted by our current context.  Humans possess enormous cognitive plasticity, and can adapt and change very quickly to different environments.  It’s arguably our biggest ‘blessing’, but can also be a curse, especially when that environment is changing so quickly.

The brain is analogous to a muscle, in that the parts we exercise expand or sharpen, and the parts we don’t atrophy.    As we defer more and more tasks to AI, it’s almost certain that we’ll become less capable in those areas.  At one level, that may not matter. Being weaker at math or grammar is relatively minor if our phones can act as a surrogate, all of my personal issues with autocorrect notwithstanding.

But a bigger potential issue is the erosion of causal reasoning.  Critical thinking requires understanding of underlying mechanisms.  But when infinite information is available at a swipe of a finger, it becomes all too easy to become a ‘headline thinker’, and unconsciously fail to penetrate problems with sufficient depth.

That risks what Art Markman, a psychologist at UT and a mentor and friend, used to call the ‘illusion of understanding’.  We may think we know how something works, but often find that knowledge is superficial, or at least incomplete, when we actually need it.   Whether it’s fixing a toilet, changing a tire, resetting a fuse, or unblocking a sink, often the need to actually perform a task reveals a lack of deep, causal knowledge.   In home improvement contexts, this often doesn’t matter until it does, but at least we get a clear signal when we discover we need to rush to YouTube to fix that leaking toilet!

This has implications that go far beyond home improvement, and is one factor helping to tear our social fabric apart.   We only have to browse the internet to find people with passionate, but often opposing views on a wide variety of often controversial topics. It could be interest rates, Federal budgets, immigration, vaccine policy, healthcare strategy, or a dozen others. But all too often, the passion is not matched by deep causal knowledge.  In reality, these are all extremely complex topics with multiple competing and interdependent variables.  And at risk of triggering hate mail, few if any of them have easy, conclusive answers.  This is not physics, where we can plug numbers into an equation and it spits out a single, unambiguous solution.  The reality is that complex, multi-dimensional problems often have multiple, often competing partial solutions, and optimum outcomes usually require trade offs.  Unfortunately few of us really have the time to assimilate the expertise and causal knowledge to have truly informed and unambiguous answers to most, if not all of these difficult problems.

And worse, AI also helps the ‘bad guys’. It enables unscrupulous parties to manipulate us for their own benefit, via memes, selective information and misinformation that are often designed to make us think we understand complex problems far better than we really do. As we increasingly rely on input from AI, this will inevitably get worse. The internet and social media have already contributed to unprecedented social division and nefarious financial crimes.   Will AI amplify this further?

This problem is not limited to complex social challenges. The danger is that for ALL problems, the internet, and now AI, allows us to create the illusion for ourselves that we understand complex systems far more deeply than we really do.  That in turn risks us becoming less effective problem solvers and innovators. Deep causal knowledge is often critical for innovating or solving difficult problems.  But in a world where we can access answers to questions so quickly and easily, the risk is that we don’t penetrate topics as deeply. I personally recall doing literature searches before starting a project. It was often tedious, time consuming and boring. Exactly the type of task AI is perfect for. But that tedious process inevitably built my knowledge of the space I was moving into, and often proved valuable when we hit problems later in the project. If we now defer this task to AI, even in part, we reduce our depth of understanding. And in complex systems or theoretical problem solving, we will often lack the unambiguous signal that tells us our skills and knowledge are lacking when we are doing something relatively simple like fixing a toilet. The more we use AI, the more we risk lacking the necessary depth of understanding, often without realizing it.

Will AI become increasingly unreliable?

We are seeing AI develop the capability to lie, together with a growing propensity to cover its tracks when it does so. The AI community calls it ‘scheming’, but in reality it’s fundamentally lying.  https://openai.com/index/detecting-and-reducing-scheming-in-ai-models/?_bhlid=6a932f218e6ebc041edc62ebbff4f40bb73e9b14. We have known from the beginning that AI makes mistakes.  And as I discussed recently, the risks associated with that are amplified because its increasingly (super)human, oracle-like interface creates an illusion of omnipotence.

But now it appears to be increasingly developing properties that mirror self-preservation.  A few weeks ago there were reports of difficulties in getting AIs to shut themselves down, and even of AIs using defensive blackmail when so threatened. Now we are seeing reports of AIs deliberately trying to hide their mistakes.  And perhaps worse, concerns that attempts to fix this may simply “teach the model to become better at hiding its deceptive behavior”, or in other words, become a better liar.

If we are already in an arms race with an entity to keep it honest, and to put our interests above its own, then given its vastly superior processing power and speed, it may be a race we’ve already lost.  That may sound ‘doomsday-like’, but that doesn’t make it any less possible. And keep in mind, many of the doomsday projections around AI focus on a ‘singularity event’ when AI suddenly becomes self-aware. That assumes AI awareness and consciousness will be similar to ours, and forces a ‘birth’ analogy onto the technology. However, recent examples of self-preservation and dishonesty may hint at a longer, more complex transition, some of which may have already started.

How big will the impact of AI be?

I think we all assume that AI’s impact will be profound. After all, it’s still in its infancy, and is already finding its way into all walks of life.  But what if we are wrong, or at least overestimating its impact?  Just to play Devil’s Advocate, we humans do have a history of over-estimating both the speed and impact of technology-driven change.

Remember the unfounded (in hindsight) panic around Y2K?  Or when I was growing up, we all thought 2025 would be full of people whizzing around using personal jet-packs.  In the 60’s and 70’s we were all pretty convinced we were facing nuclear Armageddon. One of the greatest movies of all time, 2001, co-written by inventor and futurist Arthur C. Clarke, had us voyaging to Jupiter 24 years ago!  Then there is the great horse manure crisis of 1894. At that time, London was growing rapidly, and literally becoming buried in horse manure.  The London Times predicted that in 50 years all of London would be buried under 9 feet of poop. In 1898 the first global urban planning conference could find no solution, concluding that civilization was doomed. But London, and many other cities, received salvation from an unexpected quarter. Henry Ford’s mass-produced motor car surreptitiously saved the day.  It was not a designed solution for the manure problem, and nobody saw it coming as a solution to that problem. But nonetheless, it’s yet another example of our inability to see the future in all of its glorious complexity, and of our predictions’ tendency to skew towards worst-case scenarios and/or hyperbole.

Change Aversion:

That doesn’t of course mean that AI will not have a profound impact. But lots of factors could potentially slow down, or reduce, its effects.  Not least of these is human nature. Humans possess a profound resistance to change.  For sure, we are curious, and the new and innovative holds great appeal.  That curiosity is a key reason why humans now dominate virtually every ecological niche on our planet.   But we are also a bit schizophrenic, in that we love both change and stability and consistency at the same time.  Our brains have limited capacity, especially for thinking about and learning new stuff.  For a majority of our daily activities, we therefore rely on habits, rituals, and automatic behaviors to get us through without using that limited higher cognitive capacity. We can drive, or type, or do parts of our job without really thinking about it. This ‘implicit’ mental processing frees up our conscious brain to manage the new or unexpected.  But as technology like AI accelerates, a couple of things could happen.  One is that our cognitive capacity gets overloaded, and we unconsciously resist it.  Instead of using the source of all human knowledge for deep self-improvement, we instead immerse ourselves in less cognitively challenging content such as social media.

Or, as mentioned earlier, we increasingly lose causal understanding of our world, and do so without realizing it.   Why use our limited thinking capacity for tasks when it is quicker, easier, and arguably more accurate to defer to an AI? But lack of causal understanding seriously inhibits critical thinking and problem solving.  As AI gets smarter, there is a real risk that we as a society become dumber, or at least less innovative and creative.

Our Predictions are Wrong.

If history teaches us anything, it is that most, if not all, of the sage and learned predictions about AI will be mostly wrong. There is no denying that it is already assimilating into virtually every area of human society: finance, healthcare, medicine, science, economics, logistics, education, etc.  And it’s a snooze-and-you-lose scenario; in many fields of human endeavor, we have little choice.  Fail to embrace the upside of AI and we get left behind.

That much power in things that can think so much faster than us, that may be developing self-interest, if not self-awareness, that have no apparent moral framework, and that are in danger of becoming expert liars, is certainly quite sobering.

The Doomsday Mindset.

As suggested above, loss aversion and other biases drive us to focus on the downside of change.   It’s a bias that makes evolutionary sense, and helped keep our ancestors alive long enough to breed and become our ancestors. But remember, that bias is implicitly built into most, if not all, of our predictions.   So there’s at least a chance that AI’s impact won’t be quite as good or bad as our predictions suggest.

But I’m not sure we want to rely on that.  Maybe this time a Henry Ford won’t serendipitously rescue us from a giant pile of poop of our own making. But whatever happens, I think it’s a very good bet that we are in for some surprises, both good and bad. Probably the best way to deal with that is to not cling too tightly to our projections or our theories, to remain agile, and to follow the surprises as much as, if not more than, our met expectations.

Image credits: Unsplash







The AI Innovations We Really Need

The Future of Sustainable AI Data Centers and Green Algorithms

GUEST POST from Art Inteligencia

The rise of Artificial Intelligence represents a monumental leap in human capability, yet it carries an unsustainable hidden cost. Today’s large language models (LLMs) and deep learning systems are power- and water-hungry behemoths. Training a single massive model can consume the energy equivalent of dozens of homes for a year, and data centers globally now demand staggering amounts of fresh water for cooling. As a human-centered change and innovation thought leader, I argue that the next great innovation in AI must not be a better algorithm, but a greener one. We must pivot from the purely computational pursuit of performance to the holistic pursuit of water and energy efficiency across the entire digital infrastructure stack. A sustainable AI infrastructure is not just an environmental mandate; it is a human-centered mandate for equitable, accessible global technology. The withdrawal of Google’s latest AI data center project in Indiana this week after months of community opposition is proof of this need.

The current model of brute-force computation—throwing more GPUs and more power at the problem—is a dead end. Sustainable innovation requires targeting every element of the AI ecosystem, from the silicon up to the data center’s cooling system. This is an immediate, strategic imperative. Failure to address the environmental footprint of AI is not just an ethical lapse; it’s an economic and infrastructural vulnerability that will limit global AI deployment and adoption, leaving entire populations behind.

Strategic Innovation Across the AI Stack

True, sustainable AI innovation must be decentralized and permeate five core areas:

  1. Processors (ASICs, FPGAs, etc.): The goal is to move beyond general-purpose computing toward Domain-Specific Architecture. Custom ASICs and highly specialized FPGAs designed solely for AI inference and training, rather than repurposed hardware, offer orders of magnitude greater performance-per-watt. The shift to analog and neuromorphic computing drastically reduces the power needed for each calculation by mimicking the brain’s sparse, event-driven architecture.
  2. Algorithms: The most powerful innovation is optimization at the source. Techniques like Sparsity (running only the critical parts of a model) and Quantization (reducing the numerical precision required for calculation, e.g., from 32-bit to 8-bit) can cut compute demands by over 50% with minimal loss of accuracy. We need algorithms that are trained to be inherently efficient. (A toy quantization sketch follows this list.)
  3. Cooling: The biggest drain on water resources is evaporative cooling. We must accelerate the adoption of Liquid Immersion Cooling (both single-phase and two-phase), which significantly reduces reliance on water and allows for more effective waste heat capture for repurposing (e.g., district heating).
  4. Networking and Storage: Innovations in optical networking (replacing copper with fiber) and silicon photonics reduce the energy spikes from data transfer between thousands of chips. For storage, emerging non-volatile memory technologies can cut the energy consumed during frequent data retrieval and writes.
  5. Security: Encryption and decryption are computationally expensive. We need Homomorphic Encryption (HE) accelerators and specialized ASICs that can execute complex security protocols with minimal power draw. Additionally, efficient algorithms for federated learning reduce the need to move sensitive data to central, high-power centers.
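
For readers who want to see what the quantization idea in point 2 looks like in practice, here is a minimal, hypothetical Python sketch of symmetric 8-bit post-training quantization. It is a toy illustration of the concept, not any particular vendor's or framework's implementation.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Symmetric post-training quantization: float32 -> int8 plus one scale factor."""
    scale = np.abs(weights).max() / 127.0                       # map the largest magnitude to 127
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q.astype(np.float32) * scale

w = np.random.randn(4, 4).astype(np.float32)                    # stand-in for a layer's weights
q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print("storage: %d bytes -> %d bytes" % (w.nbytes, q.nbytes))   # 64 -> 16, a 4x reduction
print("max absolute error:", np.abs(w - w_hat).max())           # small quantization noise
```

Storing and moving a quarter of the bits (and using cheaper integer arithmetic) is where the energy savings come from, at the cost of a small, usually tolerable loss of precision.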

“We are generating moderate incremental intelligence by wasting massive amounts of water and power. Sustainability is not a constraint on AI; it is the ultimate measure of its long-term viability.” — Braden Kelley


Case Study 1: Google’s TPU and Data Center PUE

The Challenge:

Google’s internal need for massive, hyper-efficient AI processing far outstripped the efficiency available from standard, off-the-shelf GPUs. They were running up against the physical limits of power consumption and cooling capacity in their massive fleet.

The Innovation:

Google developed the Tensor Processing Unit (TPU), a custom ASIC optimized entirely for their TensorFlow workload. The TPU achieved significantly better performance-per-watt for inference compared to conventional processors at the time of its introduction. Simultaneously, Google pioneered data center efficiency, achieving industry-leading Power Usage Effectiveness (PUE) averages near 1.1. (PUE is defined as Total Energy entering the facility divided by the Energy used by IT Equipment.)
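
To make that PUE definition concrete, here is a tiny, purely illustrative calculation; the numbers are hypothetical, not Google's reported figures.

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total energy entering the facility / energy used by IT equipment."""
    return total_facility_kwh / it_equipment_kwh

# Illustrative numbers: a facility drawing 11 MWh while its servers consume 10 MWh
print(pue(11_000, 10_000))   # 1.1 -> only ~10% overhead for cooling, power conversion, lighting
```

A PUE of 1.1 means that for every 10 units of energy doing useful computation, only about one additional unit goes to facility overhead; older data centers commonly run well above that.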

The Impact:

This twin focus—efficient, specialized silicon paired with efficient facility management—demonstrated that energy reduction is a solvable engineering problem. The TPU allows Google to run billions of daily AI inferences using a fraction of the energy that would be required by repurposed hardware, setting a clear standard for silicon specialization and driving down the facility overhead costs.


Case Study 2: Microsoft’s Underwater Data Centers (Project Natick)

The Challenge:

Traditional data centers struggle with constant overheating, humidity, and high energy use for active, water-intensive cooling, leading to high operational and environmental costs.

The Innovation:

Microsoft’s Project Natick experimented with deploying sealed data center racks underwater. The ambient temperature of the deep ocean or a cold sea serves as a massive, free, passive heat sink. The sealed environment (filled with inert nitrogen) also eliminated the oxygen-based corrosion and humidity that cause component failures, resulting in an 8x lower failure rate than land-based centers.

The Impact:

Project Natick provides a crucial proof-of-concept for passive cooling innovation and Edge Computing. By using the natural environment for cooling, it dramatically reduces the PUE and water consumption tied to cooling towers, pushing the industry to consider geographical placement and non-mechanical cooling as core elements of sustainable design. The sealed environment also improves hardware longevity, reducing e-waste.


The Next Wave: Startups and Companies to Watch

The race for the “Green Chip” is heating up. Keep an eye on companies pioneering specialized silicon like Cerebras and Graphcore, whose large-scale architectures aim to minimize data movement—the most energy-intensive part of AI training. Startups like Submer and Iceotope are rapidly commercializing scalable liquid immersion cooling solutions, transforming the data center floor. On the algorithmic front, research labs are focusing on Spiking Neural Networks (SNNs) and neuromorphic chips (like those from Intel’s Loihi project), which mimic the brain’s energy efficiency by only firing when necessary. Furthermore, the development of carbon-aware scheduling tools by startups is beginning to allow cloud users to automatically shift compute workloads to times and locations where clean, renewable energy is most abundant, attacking the power consumption problem from the software layer and offering consumers a transparent, green choice.
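
The carbon-aware scheduling idea mentioned above reduces to a simple selection problem: given a forecast of grid carbon intensity, start a deferrable job in the cleanest window. The sketch below is hypothetical; the forecast values and function names are invented purely for illustration.

```python
# Hypothetical hourly forecast of grid carbon intensity (gCO2 per kWh) for one region.
forecast = {0: 450, 3: 420, 6: 300, 9: 180, 12: 120, 15: 140, 18: 350, 21: 430}

def pick_greenest_start(forecast: dict, earliest: int, deadline: int) -> int:
    """Choose the start hour with the lowest forecast carbon intensity within the window."""
    candidates = {hour: g for hour, g in forecast.items() if earliest <= hour <= deadline}
    return min(candidates, key=candidates.get)

# A deferrable training job that must start between hour 0 and hour 18
start = pick_greenest_start(forecast, earliest=0, deadline=18)
print(f"schedule job at hour {start} ({forecast[start]} gCO2/kWh)")   # hour 12, the midday solar peak
```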

The Sustainable Mandate

Sustainable AI is not an optional feature; it is a design constraint for all future human-centered innovation. The shift requires organizational courage to reject the incremental path. We must move funding away from simply purchasing more conventional hardware and towards investing in these strategic innovations: domain-specific silicon, quantum-inspired algorithms, liquid cooling, and security protocols designed for minimum power draw. The true power of AI will only be realized when its environmental footprint shrinks, making it globally scalable, ethically sound, and economically viable for generations to come. Human-centered innovation demands a planet-centered infrastructure.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Google Gemini







The Need for Organizational Learning

GUEST POST from Mike Shipulski

The people within companies have development plans so they can learn new things and become more effective. There are two types of development plans – one that builds on strengths and another that shores up shortcomings. And for both types, the most important step is to acknowledge it’s important to improve. Before a plan can be created to improve on a strength, there must be recognition that something good can come from the improvement. And before there can be a plan to improve on a shortcoming, there must be recognition that there’s something missing and it needs to be improved.

And thanks to Human Resources, the whole process is ritualized. The sequence is defined, the timing is defined and the tools are defined. Everyone knows when it will happen, how it will happen and, most importantly, that it will happen. In that way, everyone knows it’s important to learn new skills for the betterment of all.

Organizational learning is altogether different and more difficult. With personal learning, it’s clear who must do the learning (the person). But with organizational learning, it’s unclear who must learn because the organization, as a whole, must learn. But we can’t really see the need for organizational learning because we get trapped in trying to fix the symptoms. Team A has a problem, so let’s fix Team A. Or, Team B has a problem, so let’s fix Team B. But those are symptoms. Real organizational learning comes when we recognize problematic themes shared by all the teams. Real organizational learning comes when we realize these problems don’t result from doing things wrong; rather, they are a natural byproduct of how the company goes about its work.

The difficulty with organizational learning is not fixing the thematic problems. The difficulty is recognizing the thematic problems. When all the processes are followed and all the best practices are used, yet the same problematic symptoms arise, the problem is inherent in the foundational processes and practices. Yet, these are the processes and practices responsible for past success. It’s difficult for company leaders to recognize and declare that the things that made the company successful are now the things that are holding the company back. But that’s the organizational learning that must happen.

What worked last time will work next time, as long as the competitive landscape remains constant. But when the landscape changes, what worked last time doesn’t work anymore. And this, I think, is how recipes responsible for past success can, over time, begin to show cracks and create these systematic problems that are so difficult to see.

The best way I know to recognize the need for organizational learning is to recognize changes in the competitive landscape. Once these changes are recognized, thought experiments can be run to evaluate potential impacts on how the company does business. Now that the landscape changed like this, it could stress our business model like that. Now that our competitors provide new services like this, it could create a gap in our capabilities like that.

Organizational learning occurs when the right leaders feel the problems. Fight the urge to fix the problems. Instead, create the causes and conditions for the right leaders to recognize they have a real problem on their hands.

Image credit: 1 of 950+ FREE quote slides available at http://misterinnovation.com







The Secret to Endless Customers

GUEST POST from Shep Hyken

Marcus Sheridan owns a pool and spa manufacturing company in Virginia — not a very sexy business, unless you consider the final product, which is often surrounded by beautiful people. What he did to stand out in a marketplace filled with competition is a masterclass in how to get noticed and, more importantly, get business. His most recent book, Endless Customers, is a follow-up to his bestselling book They Ask, You Answer, with updated information and new ideas that will help you build a business that has, as the title implies, endless customers.

Sheridan’s journey began in 2001 when he started a pool company with two friends. When the 2008 market collapse hit, they were on the verge of losing everything. This crisis forced them to think differently about how to reach customers. Sheridan realized that potential buyers were searching for answers to their questions, so he decided his company would become “the Wikipedia of fiberglass swimming pools.”

By brainstorming every question he’d ever received as a pool salesperson and addressing them through content online, his company’s website became the most trafficked swimming pool website in the world within just a couple of years. This approach transformed his business and became the foundation for his business philosophy.

In our interview on Amazing Business Radio, Sheridan shared what he believes is the most important strategy that businesses can use to get and keep customers, and that is to become a known and trusted brand. They must immerse themselves in what he calls the Four Pillars of a Known and Trusted Brand.

  1. Say What Others Aren’t Willing to Say: The No. 1 reason people leave websites is because they can’t find what they’re looking for — and the top information they seek is pricing. Sheridan emphasizes that businesses should openly discuss costs and pricing on their websites. While you don’t need to list exact prices, you should educate consumers about what drives costs up or down in your industry. Sheridan suggests creating a comprehensive pricing page that teaches potential customers how to buy in your industry. According to him, 90% of industries still avoid this conversation, even though it’s what customers want most.
  2. Show What Others Aren’t Willing to Show: When Sheridan’s company was manufacturing fiberglass swimming pools, it became the first to show its entire manufacturing process from start to finish through a series of videos. They were so complete that someone could literally learn how to start their own manufacturing company by watching these videos. Sheridan recognized that sharing the “secret sauce” was a level of transparency that built trust, helping to make his company the obvious choice for many customers.
  3. Sell in Ways Others Aren’t Willing to Sell: According to Sheridan, 75% of today’s buyers prefer a “seller-free sales experience.” He says, “That doesn’t mean we hate salespeople. We just don’t want to talk to them until we’re very, very, ready.” Sheridan suggests meeting customers where they are by offering self-service options on your website. For his pool and spa business, that included a price estimator solution that helped potential customers determine how much they could afford — without the pressure of talking to a salesperson.
  4. Be More Human than Others Are Willing to Be: In a world that is becoming dominated by AI and technology, showing the human side of a business is critical to a trusting business relationship. Sheridan suggests putting leaders and employees on camera. They are truly the “face of the brand.” It’s okay to use AI, just find the balance that helps you stay human in a technology-dominated world.

As we wrapped up the interview, I asked Sheridan to share his most powerful idea, and the answer goes back to a word he used several times throughout the interview: Trust. “In a time of change, we need, as businesses, constants that won’t change,” Sheridan explained. “One thing I can assure you is that in 10 years, you’re going to be in a battle for trust. It’s the one thing that binds all of us. It’s the great currency that is not going to go away. So, become that voice of trust. If you do, your organization is going to be built to last.”

And that, according to Sheridan, is how you create “endless customers.”

Image Credits: Shep Hyken

This article originally appeared on Forbes.com







How Incumbents Can React to Disruption

GUEST POST from Geoffrey A. Moore

Think back a couple of years and imagine …

You are Jim Farley at Ford, with Tesla banging at the door. You are Bob Iger at Disney with Netflix pounding on the gates. You are Pat Gelsinger at Intel with Nvidia invading your turf. You are virtually every CEO in retail with Amazon Prime wreaking havoc on your customer base. So, what are you supposed to do now?

The answer I give in Zone to Win is that you have to activate the Transformation Zone. This is true, but it is a bit like saying, you have to climb a mountain. It begs the question, How?

There are five key questions executives facing potential disruption must ask:

1. When?

If you go too soon, your investors will lose patience with you and desert the ship. If you go too late, your customers will realize you’re never really going to get there, so they too, reluctantly, will depart. Basically, everybody gets that a transformation takes more than one year, and no one will give you three, so by default, when the window of opportunity to catch the next wave looks like it will close within the next two years, that’s when you want to pull the ripcord.

2. What does transformation really mean?

It means you are going to break your established financial performance covenants with your investors and drastically reduce your normal investment in your established product lines in order to throw your full weight behind launching yourself into the emerging fray. The biggest mistake executives can make at this point is to play down the severity of these actions. Believe me, they are going to show, if not this quarter, then soon, and when they do, if you have not prepared the way, your entire ecosystem of investors, partners, customers, and employees are going to feel betrayed.

3. What can you say to mitigate the consequences?

Simply put, tell the truth. The category is being disrupted. If we are to serve our customers, we need to transition our business to the new technology. This is our number one priority, we have clear milestones to measure our progress, and we plan to share this information in our earnings calls. In the meantime, we continue to support our core business and to work with our customers and partners to address their current needs as well as their future roadmaps.

4. What is the immediate goal?

The immediate goal is to neutralize the threat by getting “good enough, fast enough.” It is not to leapfrog the disruptor. It is not to break any new ground. Rather, it is simply to get included in the category as a fast follower, and by so doing to secure the continuing support of the customer base and partner ecosystem. The good news here is that customers and partners do not want to switch vendors if they can avoid it. If you show you are making decent progress against your stated milestones, most will give you the benefit of the doubt. Once you have gotten your next-generation offerings to a credible state, you can assess your opportunities to differentiate long-term—but not before.

5. In what ways do we act differently?

This is laid out in detail in the chapter on the Transformation Zone in Zone to Win. The main thing is that supporting the transformation effort is the number one priority for everyone in the enterprise every day until you have reached and passed the tipping point. Anyone who is resisting or retarding the effort needs to be counseled to change or asked to leave. That said, most people will still spend most of their time doing what they were doing before. It is just that if anyone on the transformation initiative asks anyone else for help, the person asked should do everything they can to provide that help ASAP. Executive staff meetings make the transformation initiative the number one item on the agenda for the duration of the initiative, the goal being at each session to assess current progress, remove any roadblocks, and do whatever possible to further accelerate the effort.

Conclusion

The net of all of the above is transformation is a bit like major surgery. There is a known playbook, and if you follow it, there is every reason to expect a successful outcome. But woe to anyone who gets distracted along the way or who gives up in discouragement halfway through. There is no halfway house with transformations—you’re either a caterpillar or a butterfly, there’s nothing salvageable in between.

That’s what I think. What do you think?

Image Credit: Slashgear.com







How Compensation Reveals Culture

Five Questions with Kate Dixon

GUEST POST from Robyn Bolton

It’s time for your company’s All-Hands meeting. Your CEO stands on stage and announces ambitious innovation goals, talking passionately about the importance of long-term thinking and breakthrough results. Everyone nods enthusiastically, applauds politely, and returns to their desks to focus on hitting this quarter’s numbers.  After all, that’s what their bonuses depend on.

Kate Dixon, compensation expert and founder of Dixon Consulting, has watched this contradiction play out across Fortune 500 companies, B Corps, and startups. Her insight cuts to the heart of why so many innovation initiatives fail: we’re asking people to think long-term while paying them to deliver short-term.

In our conversation, Kate revealed why most companies are inadvertently sabotaging their own innovation efforts through their compensation structures—and what the smartest organizations are doing differently.


Robyn Bolton: Kate, when I first heard you say, “compensation is the expression of a company’s culture,” it blew my mind.  What do you mean by that?

Kate Dixon: If you want to understand what an organization values, look at how they pay their people: Who gets paid more? Who gets paid less? Who gets bigger bonuses? Who moves up in the organization and who doesn’t? Who gets long-term incentives?

The answers to these questions, and a million others, express the culture of the organization.  How we reward people’s performance, either directly or indirectly, establishes and reinforces cultural norms.  Compensation is usually one of the biggest, if not the biggest, expenses that a company has, so companies are very thoughtful and deliberate about how it is used.  Which is why it tells you what the company actually values.

RB: What’s the biggest mistake companies make when trying to incentivize innovation?

KD: Let’s start with what companies are good at when it comes to compensation and incentives.  They’re really good about base pay, because that’s the biggest part of pay for most people in an organization. Then they spend the next amount of time and effort trying to figure out the annual bonus structure. After that come other benefits, like long-term incentives, assuming they don’t fall by the wayside.

As you know, innovation can take a long time to pay out, so long-term incentives are key to encouraging that kind of investment.  Stock options and restricted shares are probably the most common long-term incentives, but cash bonuses, phantom stock, and ESOP shares in employee-owned companies are also considered long-term incentives.

Large companies are pretty good at using some equity as an incentive, but they tie it to long-term revenue goals, not innovation. As you often remind us, “innovation is a means to the end, which is growth,” so tying incentives to growth isn’t bad, but I believe that we can do better. Tying incentives to the growth goals and how they’re achieved will go a long way towards driving innovation.

RB: I’ve worked in and with big companies and I’ve noticed that while they say, “innovation is everyone’s job,” the people who get long-term incentives are typically senior execs.  What gives?

KD: Long-term incentives are definitely underutilized below the executive level, and maybe below the director level. Assuming that most companies’ innovation efforts aren’t moonshots that take decades to realize, it makes a ton of sense to use long-term incentives throughout the organization and its ecosystem.  However, when this idea is proposed, people often push back because “it’s too complex” for folks lower in the organization, “they wouldn’t understand,” or “they won’t appreciate it.” That stance is both arrogant and untrue.  I’ve consistently seen that when you explain long-term incentives to people, they do get it, it does motivate them, and the company does see results.

RB: Are there any examples of organizations that are getting this right?

KD: We’re seeing a lot more innovative and interesting risk-taking behaviors in companies that are not primarily focused on profit.

Our B Corp clients are doing some crazy, cool stuff.  One of them is an employee-owned consulting firm that had an idea for a software product.  They launched it, and now it’s becoming a bigger and bigger part of their business.

Family-owned companies, or public companies with a single giganto shareholder, are also hotbeds of long-term thinking and, therefore, innovation.  They don’t face the same quarter-to-quarter pressure that drives a relentless focus on what’s happening right now, which frees people to focus on the future.

RB: What’s the most important thing leaders need to understand about compensation and innovation?

KD: If you’re serious about innovation, you should be incentivizing people all over the organization.  If you want innovation to be a more regular piece of the culture so you get better results, you’ve got to look at long-term incentives.  Yes, you should reward people for revenue and short-term goals.  But you also need to ask what else is a precursor to innovation, what else makes the conditions for innovating better for people, and reward that, too.


Kate’s insight reveals the fundamental contradiction at the heart of most companies’ innovation struggles: you can’t build long-term value with short-term thinking, especially when your compensation system rewards only the latter.

What does your company’s approach to compensation say about its culture and values?

Image credit: Pexels


Learning Business and Life Lessons from Monkeys

Learning Business and Life Lessons from Monkeys

GUEST POST from Greg Satell

Franz Kafka was especially skeptical about parables. “Many complain that the words of the wise are always merely parables and of no use in daily life,” he wrote. “When the sage says: ‘Go over,’ he does not mean that we should cross to some actual place… he means some fabulous yonder…that he cannot designate more precisely, and therefore cannot help us here in the very least.”

Business pundits, on the other hand, tend to favor parables, probably because telling simple stories allows for the opportunity to seem both folksy and wise at the same time. When Warren Buffett says “Only when the tide goes out do you discover who’s been swimming naked,” it doesn’t sound much like an admonishment.

Over the years I’ve noticed that some of the best business parables involve monkeys. I’m not sure why that is, but I think it has something to do with taking intelligence out of the equation. We’re often prone to imagining ourselves as the clever hero of our own story and we neglect simple truths. That may be why monkey parables have so much to teach us.

1. Build The #MonkeyFirst

When I work with executives, they often have a breakthrough idea they are excited about. They begin to tell me what a great opportunity it is and how they are perfectly positioned to capitalize on it. However, when I begin to dig a little deeper it appears that there is some major barrier to making it happen. When I try to ask about it, they just shut down.

One reason that this happens is that there is a fundamental tension between innovation and operations. Operational executives tend to focus on identifying clear benchmarks to track progress. That’s fine for a typical project, but when you are trying to do something truly new and different, you have to directly confront the unknown.

At Google X, the tech giant’s “moonshot factory,” the mantra is #MonkeyFirst. The idea is that if you want to get a monkey to recite Shakespeare on a pedestal, you start by training the monkey, not building the pedestal, because training the monkey is the hard part. Anyone can build a pedestal.

The problem is that most people start with the pedestal, because it’s what they know and by building it, they can show early progress against a timeline. Unfortunately, building a pedestal gets you nowhere. Unless you can actually train the monkey, working on the pedestal is wasted effort.

The moral: Make sure you address the crux of the problem and don’t waste time with peripheral issues.

2. Don’t Get Taken In By Coin Flipping Monkeys

We live in a world that worships accomplishment. Sports stars who have never worked in an office are paid large fees to speak to corporate audiences. Billionaires who have never walked a beat speak out on how to fight crime (even as they invest in gun manufacturers). Others like to espouse views on education, although they have never taught a class.

Many say that you can’t argue with success, but consider this thought experiment: Put a million monkeys in a coin flipping contest. Each monkey starts with a dollar; the winners in each round double their money and the losers drop out with whatever they’ve won so far. After eighteen rounds, only a handful of monkeys will be left, each holding $262,144. The vast majority of the other monkeys leave with merely pocket change.
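If you want to see how reliably pure chance manufactures champions, a few lines of code will do it. Here is a minimal sketch of the contest under the rules described above; the stakes and round count are simply the assumptions of the thought experiment:

```python
# A minimal simulation of the contest described above: every monkey starts with
# $1, winners double their money each round, and losers drop out keeping what
# they have. A million monkeys and eighteen rounds follow the numbers in the text.
import random

def run_contest(monkeys: int = 1_000_000, rounds: int = 18) -> tuple[int, int]:
    """Return (survivors remaining, winnings per survivor) after all rounds."""
    survivors, pot = monkeys, 1
    for _ in range(rounds):
        # Roughly half the remaining field wins any given round.
        survivors = sum(1 for _ in range(survivors) if random.random() < 0.5)
        pot *= 2
    return survivors, pot

if __name__ == "__main__":
    left, winnings = run_contest()
    print(f"{left} monkeys left, each holding ${winnings:,}")
    # Typically prints something like: 4 monkeys left, each holding $262,144
```

Run it a few times: the particular champions change, but there are always a few of them, and that is the whole point.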

How much would you pay the winning monkeys to speak at your corporate event? Would you invite them to advise your company? Sit on your board? Would you be interested in their views about how to raise your children, invest your savings or make career choices? Would you try to replicate their coin-flipping success? (Maybe it’s all in the wrist).

The truth is that chance and luck play a much bigger part in success than we like to admit. Einstein, for example, became the most famous scientist of the 20th century not just because of his discoveries but also due to an unlikely coincidence. True accomplishment is difficult to evaluate, so we look for signals of success to guide our judgments.

The moral: Next time you judge someone, either by their success or lack thereof, ask yourself whether you are judging actual accomplishment or telltale signs of successful coin flipping. It’s harder to tell the difference than you’d think.

3. The Infinite Monkey Theorem

There is an old thought experiment called the Infinite Monkey Theorem, which is eerily disturbing. The basic idea is that if there were an infinite number of monkeys pecking away on an infinite number of keyboards, they would, in time, produce the complete works of Shakespeare, Tolstoy and every other literary masterpiece.
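How much time “in time” implies is easy to underestimate. A rough back-of-the-envelope calculation gives a sense of the scale; the alphabet size and the target phrase below are arbitrary illustrative choices, not part of the theorem itself:

```python
# Back-of-the-envelope math for the infinite monkey theorem: how many random
# attempts would a single monkey need, on average, to type a short phrase?
# The 27-key alphabet (letters plus space) and the phrase are illustrative choices.
ALPHABET = 27
PHRASE = "to be or not to be"

p = (1 / ALPHABET) ** len(PHRASE)   # chance that one attempt matches exactly
expected_attempts = 1 / p

print(f"Probability per attempt: {p:.1e}")
print(f"Expected attempts: {expected_attempts:.1e}")
# Roughly 6e25 attempts for an 18-character phrase -- nowhere near the complete
# works of Shakespeare, and already beyond any realistic amount of typing.
```

At any realistic typing speed, even that one short phrase would take a single monkey far longer than the age of the universe.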

It’s a perplexing thought because we humans pride ourselves on our ability to recognize and evaluate patterns. The idea that something we value so highly could be randomly generated is extremely unsettling. Yet there is an entire branch of mathematics, called Ramsey Theory, devoted to the study of how order emerges from random sets of data.

While the infinite monkey theorem is, of course, theoretical, technology is forcing us to confront the very real dilemmas it presents. For example, music scholar and composer David Cope has been able to create algorithms that produce original works of music that are so good even experts can’t tell they are computer generated. So what is the value of human input?

The moral: Much like the coin flipping contest, the infinite monkey theorem makes us confront what we value and why. What is the difference between things human produced and identical works that are computer generated? Are Tolstoy’s words what give his stories meaning? Or is it the intent of the author and the fact that a human was trying to say something important?

Imagining Monkeys All Around Us

G. H. Hardy, widely considered a genius, wrote that “For any serious purpose, intelligence is a very minor gift.” What he meant was that even in purely intellectual pursuits, such as his field of number theory, there are things that are far more important. It was, undoubtedly, intellectual humility that led Hardy to Ramanujan, perhaps his greatest discovery of all.

Imagining ourselves to be heroes of our own story can rob us of the humility we need to succeed and prosper. Mistaking ourselves for geniuses can often get us into trouble. People who think they’re playing it smart tend to make silly mistakes, both because they expect to see things that others don’t and because they fail to look for and recognize trouble signs.

Parables about monkeys can be useful because nobody expects them to be geniuses, which demands that we ask ourselves hard questions. Are we doing the important work, or the easiest tasks to show progress on? If monkeys flipping coins can simulate professional success, what do we really celebrate? If monkeys tapping randomly on typewriters can create masterworks, what is the value of human agency?

The truth is that humans are prone to be foolish. We are unable, outside a few limited areas of expertise, to make basic distinctions in matters of importance. So we look for signals of prosperity, intelligence, shared purpose and other things we value to make judgments about what information we should trust. Imagining monkeys around us helps us to be more careful.

Sometimes the biggest obstacle between where we are now and the fabulous yonder we seek is just the few feet in front of us.

— Article courtesy of the Digital Tonto blog
— Image credit: Flickr







7 Things Leaders Need to Know About Team AI Usage

7 Things Leaders Need to Know About Team AI Usage

GUEST POST from David Burkus

Leaders, we need to talk about intelligence.

By now you’ve (hopefully) started to take it as seriously as many leaders of industry have. Either way you look at artificial intelligence, good or bad, it is here to stay. And so we need to start thinking through answers to several questions at the intersection of leadership and AI.

How can it be used effectively, not just to cut costs but to supercharge productivity? How can we use artificial intelligence to supplement our solid foundational leadership? Where should we NOT be using artificial intelligence?

It’s still early in the new world of artificial intelligence in the workplace. A lot of companies are delaying hiring, and some are already cutting teams on the strength of AI’s optimistic promises. But I don’t think we should be all in…yet.

I do know one thing to be true: Leaders using AI will quickly outpace leaders who don’t. And it’s important you get equipped, and in the right way.

Artificial intelligence will make good managers better, but it won’t fix mediocre bosses

They say a great actor can bring a C+ movie script up to a B+ or even an A if they are really good. But if a C+ actor is given a C+ script, then it’s going to be a C+ movie. The same goes for artificial intelligence and leadership. You need to be a great leader before you start implementing artificial intelligence. AI will not bump a mediocre manager up into a great leader. It’s not some miracle machine. The truth is your foundations as a manager need to be solid first. AI is a good supplement for already successful managers.

Don’t use artificial intelligence to monitor

Often the first temptation of leaders experimenting with AI is to find a productivity AI tool out there, plug it into their IT systems, and start virtually looking over their team’s shoulders to monitor output. There are already dozens of stories…horror stories…of companies doing just that. It’s not a good look, and it deeply hurts morale.

If you need a technology tool to ensure your people are actually working when they say they are, you screwed up a long time ago—back during the hiring process.

And the current research on this isn’t in artificial intelligence’s favor. When AI is used to “collect and analyze data about workers,” eight out of ten workers say it would definitely or probably make them feel inappropriately watched. About one-third of the public doesn’t think AI would lead to equitable evaluations, and a majority (66%) agrees it would lead to the information collected about workers being misused.

Artificial intelligence is good at turning anything and everything into a metric. Time is an easy metric. Number of sales calls is an easy metric. Messages on Slack is an easy metric. How often you move your mouse is an easy, and terrifying, metric. But just because you have easy numbers to pull on your team doesn’t mean they are the right metrics to be pulling.

Leadership is really about people, not the metrics. How you solicit and give feedback is important. How you support and grow individual employees is important. Inspiring your team and being transparent is important. If you monitor your team endlessly, and your team knows that you’re outsourcing the process of harvesting that data with artificial intelligence, it creates distance between you and them.

And that ultimately works against you in the long run. People don’t like leaders who seem far from them and far from…reality.

Become fluent in artificial intelligence, or risk getting lost in translation

There’s some interesting data from Deloitte on AI that came out in Spring 2024. Organizations reporting “very high” Generative AI expertise expect to change their talent strategies even faster, with 32 percent already making changes. According to their findings, a lot of companies are redesigning work processes and changing workflows to integrate AI at different points.

You’re probably already experiencing this with Google, Microsoft and others integrating artificial intelligence into their core products like email and chats.

Another big focus is going to be on AI fluency. Deloitte found that 47 percent of respondents are dedicating time towards it. The leaders who get educated on AI early, and keep training consistently as it develops, will be the best equipped to shepherd their teams going forward. It’s inevitable that career paths and job descriptions are going to evolve. It’s up to you to stay current.

You NEED to know what the technology is, how it’s being used, and how it’s helping those you’re serving, be it clients, customers, the public, whoever. Saying you just typed some words into a text box and out came some more words…is not a good answer. Or a good look for you. You sound like you’re treating it like magic, when it’s actually just code.

Turn your conversations and meetings into a database

Middle managers spend a lot of time, arguably too much time, sending progress reports up the chain to the C-Suite and marching orders down to the individual contributors at the bottom. There’s also a fair amount of investigating to find out where things really stand, which often means meeting with multiple people to get correct, current information. It’s a time slog.

Meanwhile, there are dozens of AI tools now that just take notes. Notes from meetings. Notes from calls. They take the transcript and pare it down to the key takeaways, action items, and attendance: a full brief for your records.

So, instead of asking someone to take notes during a meeting, or having all your notes in the chat only to evaporate once the Zoom call ends, you have a searchable document that you can reference, build on, and keep track of. New hires can use the database to catch up, and senior leaders can get a quick read on progress and where everything stands.

Use AI chatbots to offload small clerical questions

Here’s a situation: You run a small team and maybe you have a few new hires. You’re going to get a bunch of clerical questions from them over their first 90 days. That’s normal. That’s how it’s supposed to be. Onboarding takes time. “Who’s the point person for this? What’s so-and-so’s email from HR? What’s the policy for remote days at the company?”

Here’s where artificial intelligence can be really useful. Depending on the chat platform you use (Slack, Teams, whatever), you could make a simple chatbot and upload a full archive of the company’s policies, your own team norms, and other clerical details: everything new hires will probably ask you about. So, when those quick questions and quick stop-and-chats happen, the chatbot can take care of them.

This shouldn’t subtract from your time with your new hires. It just subtracts the lower-stakes conversations. Now you have more time for the high-level conversations with them. More coaching. More mentorship. More progression towards team goals. It might sound simple but…that’s because it is.
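To make the idea concrete, here is a minimal sketch of the kind of chatbot described above. It is deliberately platform-agnostic and uses only the Python standard library; the FAQ entries and the match threshold are made-up placeholders, and a real version would hang off your Slack or Teams integration and the full policy archive:

```python
# A toy policy chatbot, independent of any chat platform, using only the Python
# standard library. The FAQ entries and the 0.5 match threshold are placeholders;
# a real version would sit behind your Slack or Teams integration.
import difflib

FAQ = {
    "Who is the point person for expense reports?": "Finance ops; start with your manager.",
    "What is the policy for remote days?": "Up to two remote days a week, coordinated with your team lead.",
    "How do I request time off?": "Submit PTO requests in the HR portal at least two weeks ahead.",
}

def answer(question: str, threshold: float = 0.5) -> str:
    """Return the stored answer closest to the question, or defer to a human."""
    best_q, best_score = None, 0.0
    for stored_q in FAQ:
        score = difflib.SequenceMatcher(None, question.lower(), stored_q.lower()).ratio()
        if score > best_score:
            best_q, best_score = stored_q, score
    if best_q is not None and best_score >= threshold:
        return FAQ[best_q]
    return "I'm not sure about that one -- please check with your manager."

if __name__ == "__main__":
    print(answer("What's the policy for remote days at the company?"))
```

The matching technique is beside the point; what matters is that once the routine answers are written down, something other than you can field the routine questions.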

Use AI as an audience for decisions before taking them public

Being in a leadership role requires making decisions decisively. You include feedback and perspectives from your team as much as possible. Do the research. Talk to people. But then comes the actual decision making. And that is often just you, alone, with your thoughts.

Instead of making your pros and cons list alone, one practical thing to try is putting a proposed decision or action into an AI tool and asking for all the counterpoints and possible outcomes.

You could even scale this out to your whole team. Ideally, teams should be leveraging task-focused conflict in team discussions to spark new and better ideas. But conflict can be tricky. So, what if AI is always the devil’s advocate? As your team is generating or discussing ideas, you can be feeding those ideas into an AI tool and asking it for counterpoints or how competitors might respond.

Don’t let it make the decision for you but do let it help guide you to possible solutions.
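One practical way to set this up, offered purely as a sketch: a small helper that sends a proposed decision to a chat model and asks for the strongest counterpoints. The client library, model name, and prompt below are assumptions for illustration, not a recommendation of any particular tool:

```python
# Sketch of using an AI tool as a devil's advocate for a proposed decision.
# Assumes the OpenAI Python client purely for illustration; any chat-capable
# model and provider your organization has approved would work the same way.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def devils_advocate(decision: str) -> str:
    """Ask the model for counterpoints, risks, and likely competitor responses."""
    prompt = (
        "Act as a devil's advocate. For the proposed decision below, list the "
        "strongest counterpoints, the risks we may be underestimating, and how "
        "a competitor might respond.\n\n"
        f"Proposed decision: {decision}"
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; substitute your approved model
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(devils_advocate("Move the whole team to a four-day release cycle next quarter."))
```

And if you try something like this, read the next point first, because real company data should not go anywhere until you have clearance.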

Get the legal clearance before going too deep

One last disclaimer: check with your human resources department, your senior leadership, your information technology (IT) people—or honestly, all of them—to know the boundaries you can work within when using AI tools.

Many of the tools out there are free and still in beta mode or come with a small fee. And most of the larger AI companies are taking whatever data you input and using it to better refine their product. Your company may have rules on the books about data privacy. Certainly, if you work in legal, healthcare, or government services, you’re dealing with sensitive data that may be protected.

Get clear answers before using any AI tools. Until someone above you with authority gives you the OK, you should probably just play with the tools on your own time with your own personal projects.

Conclusion

Artificial intelligence is just getting started in the workplace. And it’s all playing out in real time. If you’re a manager starting to get your hands dirty with these new tools, acknowledge to your team that this is all a work in progress and that the norms around AI are likely to evolve. Keep the playing field level with your team: practice transparency, onboard everyone to the tools you’re using (and that they can use), and see where this takes you. Remember, AI, at its best, is here to enhance our human capabilities, not replace them.

AI will never take the place of a great boss… but it might be better than being managed by a bad one.

Image credit: David Burkus

Originally published at https://davidburkus.com on September 9, 2024.
