How to Not Get Depleted

GUEST POST from Mike Shipulski

On every operating plan there are more projects than there are people to do them, and at every meeting there are more new deliverables than people to take them on. At every turn, our demand for increased profits pushes our people for more. And, to me, this is the reason every day feels fuller than the last.

This year do you have more things to accomplish or fewer? Do you have more meetings or fewer? Do you get more emails or fewer?

We add work to people’s day as if their capacity to do work is infinite. And we add metrics to measure them to make sure they get the work done. And that’s a recipe for depletion. At some point, even the best, most productive people reach their physical and emotional limits. And at some point, as the volume of work increases, we all become depleted. It’s not that we’re moving slowly, being wasteful or giving it less than our all. When the work exceeds our capacity to do it, we run out of gas.

Here are some thoughts that may help you over the next year.

The amount of work you will get done this year is the same as you got done last year. But don’t get sidetracked here. This has nothing to do with the amount of work you were asked to do last year. Because you didn’t complete everything you were asked to do last year, the same thing will happen this year unless the amount of work on this year’s plan is equal to the amount of work you actually accomplished last year. Every year, scrub a little work off your yearly commitments until the work content finally equals your capacity to get it done.

Once the work content of your yearly plan is in line, the mantra becomes – finish one before you start one. If you had three projects last year and you finished one, you can add one project this year. If you didn’t finish any projects last year you can’t start one this year, at least until you finish one this year. It’s a simple mantra, but a powerful one. It will help you stop starting and start finishing.

There’s a variant of the finish-before-you-start approach that doesn’t have to wait for the completion of a long project. Instead of finishing a project, unimportant projects are stopped before they’re finished. This is loosely known as – stop doing before you start doing. Stopping is even more powerful than finishing because low-value work is stopped and the freed-up resources are immediately applied to higher-value work. It takes judgment and courage to stop a dull project, but it’s well worth the discomfort.

If you want to get ahead of the game, create a stop-doing list. For each item on the list estimate how much time you will free up and sum the freed-up time for the lot. Be ruthless. Stop all but the most important work. And when your boss says you can’t stop something because it’s too important, propose that you stop for a week and see what happens. And when no one notices you stopped, propose to stop for a month and see what happens. Rinse and repeat.

When the amount of work you have to get done fits with your capacity to do it, your physical and mental health will improve. You’ll regain that spring in your step and you’ll be happier. And the quality of your work will improve. But more importantly, your family life and personal relationships will improve. You’ll be able to let go of work and be fully present with your friends and family.

Regardless of the company’s growth objectives, one person can only do the work of one person. And it’s better for everyone (and the company) if we respect this natural constraint.

Image credit: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Customers Love These Five Words

GUEST POST from Shep Hyken

“So you don’t have to …” These five words are powerful, and whether or not customers realize it, they love them.

Think about what makes certain companies stand out from their competitors. Is it their product? Is it price? These matter, but as I’ve been preaching for decades, the differentiator is the customer experience. And specifically, the experience I want to focus on in this article is convenience.

These five words, “So you don’t have to,” form a statement that embodies the essence of creating a convenient customer experience. When companies take on certain responsibilities, eliminate friction points and other tasks to make the buying process easier for a customer, they are sending a message to their customers that says, “We’ll handle this so you don’t have to.”

  • Amazon delivers packages to your doorstep … so you don’t have to drive to the store.
  • Online grocery delivery services shop for your food and deliver it … so you don’t have to spend time in the store, pushing the cart, waiting in line to check out, and like Amazon, you don’t even have to drive to the store.
  • Auto-renewal subscriptions charge you automatically … so you don’t have to remember to re-subscribe.

Shep Hyken Five Words Cartoon

I can go on with numerous examples. The So You Don’t Have To experience is about making it easy for your customers and saving them time, energy and effort. My annual customer service and experience research consistently shows that convenience is a major driver of customer loyalty. In fact, 66% of customers say convenience is more important than friendly service, and 58% of customers are willing to pay more for it.

So, how can you deliver the So You Don’t Have To experience to your customers? Here are four ideas to get you started:

  1. Identify Your Customers’ Friction Points – Identify any areas of stress or effort in your process that can be changed or eliminated to make it easier for your customers.
  2. Practice Proactive Service – Train your team to solve customers’ problems proactively before they contact you – ideally before they even know there is a problem. Examine the reasons for these problems and find ways to eliminate them altogether.
  3. Become Your Customer – Look at your processes as if you are the customer. Mystery shop your own business and experience what your customers experience.
  4. Don’t Be Shy – If you’re going to make it easy for your customers, let them know. Explain why doing business with you is different.

Every time you remove a step, eliminate a form, reduce waiting time or simplify a process, you’re telling the customer you value their time. Whether the words are explicitly stated or implied through your actions, you’re saying, “We’ll handle this … so you don’t have to.”

Image Credits: Unsplash, Shep Hyken

Have We Made AI Interfaces Too Human?

Could a Little Uncanny Valley Help Add Some Much Needed Skepticism to How We Treat AI Output?

GUEST POST from Pete Foley

A cool element of AI is how ‘human’ it appears to be. This is of course a part of its ‘wow’ factor, and has helped to drive rapid and widespread adoption. It’s also, of course, a clever illusion, as AIs don’t really ‘think’ like real humans. But the illusion is pretty convincing. And most of us, me included, who have interacted with AI at any length have probably at times all but forgotten we are having a conversation with code, albeit sophisticated code.

Benefits of a Human-Like Interface: This humanizing of the user interface brings multiple benefits. It is part of the ‘wow’ factor that has helped drive rapid and widespread adoption of the technology. The intuitive, conversational interface also makes it far easier for everyday users to access information without training in search techniques. While AIs don’t fundamentally have access to better information than an old-fashioned Google search, they are much easier to use. And the humanesque output not only provides ‘ready to use’, pre-synthesized information, but also increases the believability of the output. Furthermore, by creating an illusion of human-like intelligence, it implicitly implies emotions, compassion and critical thinking behind the output, even if they’re not really there.

Democratizing Knowledge: And in many ways, this is a really good thing. Knowledge is power. Democratizing access to it has many benefits, and in so doing adds checks and balances to our society we’ve never before enjoyed. And it’s part of a long-term positive trend. Our societies have evolved from shamans and priests jealously guarding knowledge for their own benefit, through the broader dissemination enabled by the Gutenberg press, books and libraries. That in turn gave way to mass media, the internet, and now the next step, AI. Of course, it’s not quite that simple, as it’s also a bit of an arms race. With this increased access to information has come ever more sophisticated ways in which today’s ‘shamans’ or leaders try to protect their advantage. They may no longer use solar eclipses to frighten an astronomically ignorant populace into submission and obedience. But spinning, framing, controlled narratives, selective dissemination of information, fake news, media control, marketing, behavioral manipulation and ‘nudging’ are just a few ways in which the flow of information is controlled or manipulated today. We have moved in the right direction, but still have a way to go, and freedom of information and its control are always in some kind of arms race.

Two-Edged Sword: But this humanization of AI can also be a two-edged sword, and comes with downsides in addition to the benefits described above. It certainly improves access and believability, and makes output easier to disseminate, but it also hides AI’s true nature. AI operates in a quite different way from a human mind. It lacks intrinsic ethics, emotional connections, genuine empathy, and ‘gut feelings’. To my inexpert mind, it in some uncomfortable ways resembles a psychopath. It’s not evil in a human sense by any means, but it also doesn’t care, and it lacks a moral or ethical framework.

A brutal example is the recent case of Adam Raine, where ChatGPT advised him on ways to commit suicide, and helped him write a suicide note. A sane human would never do this, but the humanesque nature of the interface appeared to create an illusion for that unfortunate individual that he was dealing with a human, and the empathy, emotional intelligence and compassion that comes with that.

That may be an extreme example. But the illusion of humanity and the ability to access unfiltered information can also bring more subtle issues. For example, the ability to interrogate AI about our symptoms before visiting a physician certainly empowers us to take a more proactive role in our healthcare. But it can also be counterproductive. A patient who has convinced themselves of an incorrect diagnosis can actually harm themselves, or make a physician’s job much harder. And AI lacks the compassion to break bad news gently, or to add context in the way a human can.

The Uncanny Valley: That brings me to the uncanny valley, the term for what happens when technology approaches but doesn’t quite achieve perfection in human mimicry. In the past we could often detect synthetic content on a subtle and implicit level, even if we were not conscious of it. For example, a computerized voice that missed subtle tonal inflections, or a photoshopped image or manipulated video that missed subtle facial micro-expressions, might not be obviously wrong, but often still ‘felt’ wrong. And early drum machines were so perfect that they lacked the natural ‘swing’ of even the most precise human drummer, and so had to be modified to include randomness that was below the threshold of conscious awareness but made them ‘feel’ real.

This difference between conscious and unconscious evaluation creates cognitive dissonance that can make content feel odd, or even ‘creepy’. And often, the closer we get to eliminating that dissonance, the creepier the content feels. When I’ve dealt with the uncanny valley in the past, it’s generally been something we needed to ‘fix’. For example, over-photoshopping in a print ad, or poor CGI. But be careful what you wish for. AI appears to have marched through the uncanny valley to the point where its output feels human. But despite feeling right, that output may still lack the ethical, moral or emotional framework of the human responses it mimics.

This raises a question: do we need some implicit as well as explicit cues that remind us we are not dealing with a real human? Could a slight feeling of ‘creepiness’ maybe help to avoid another Adam Raine? Should we add back some ‘uncanny valley’, and turn what we used to think of as an ‘enemy’ to good use? The latter is one of my favorite innovation strategies. Whether it’s vaccination, or exposure to risks during childhood, or not over-sanitizing, sometimes a little of what does us harm can do us good. Maybe the uncanny valley we’ve typically tried to overcome could now actually help us?

Would just a little implicit doubt also encourage us to think a bit more deeply about the output, rather than simply cut and paste it into a report? By making AI output sound so human, we potentially remove the need for cognitive effort to process it. The thinking that used to play a key role in translating search results into output can now be skipped. Synthesizing and processing output from an ‘old-fashioned’ Google search requires effort and comprehension. With AI, it is all too easy to regurgitate the output, skip meaningful critical thinking, and share what we really don’t understand. Or perhaps worse, we can create an illusion of understanding, where we don’t think deeply or causally enough even to realize that we don’t understand what we are sharing. It’s in some ways analogous to proofreading, in that it’s all too easy to skip over content we think we already know, even if we really don’t. And the more we skip over content, the more difficult it is to be discerning, or to question the output. When a searcher receives answers in prose that can be cut and pasted into a report or essay, less effort and critical thinking go into comprehension, and the risk of sharing inaccurate information, or even nonsense, increases.

And that brings up another side effect of low engagement with output: confirmation bias. If the output is already in usable form, doesn’t require synthesizing or comprehension, and agrees with our beliefs or motivations, it’s a perfect storm. There is little reason to question it, or even truly understand it. We are generally pretty good at challenging something that surprises us, or that we disagree with. But it takes a lot of will, and a deep adherence to the scientific method, to challenge output that supports our beliefs or theories.

Question everything, and you do nothing! The corollary to this is surely ‘isn’t that the point of AI?’ It’s meant to give us well-structured, correct answers, and in so doing free up our time for more important things, or to act on ideas rather than just think about them. If we challenge and analyze every output, why use AI in the first place? That’s certainly fair, but taking AI output without any question is not smart either. Remember that it isn’t human, and it is still capable of making really stupid mistakes. Okay, so are humans, but AI is far earlier in its evolutionary journey, and prone to unanticipated errors. I suspect the answer lies in how important the output is, and where it will be used. If it’s important, treat AI output as a hypothesis. Don’t believe everything you read, and before simply sharing or accepting, ask ourselves, and AI itself, questions about what went into the conclusions, where the data came from, and what the critical thinking path was. Basically, apply the scientific method to AI output much as we would, or should, our own ideas.

Cat Videos and AI Action Figures: Another related risk with AI is that we let it become an oracle. We not only treat its output as human, but as superhuman. With access to all knowledge, vastly superior processing power compared to us mere mortals, and apparent human reasoning, why bother to think for ourselves? A lot of people worry about AI becoming sentient, more powerful than humans, and the resultant doomsday scenarios involving Terminators and Skynet. While it would be foolish to ignore such possibilities, perhaps there is a more clear and present danger, where instead of AI conquering humanity, we simply cede our position to it. Just as basic mathematical literacy has plummeted since the introduction of calculators, and spell-check has eroded our basic spelling ability, what if AI erodes our critical thinking and problem solving? I’m not the first to notice that with the internet we have access to all human knowledge, but all too often use it for cat videos and porn. With AI, we have an extraordinary creativity-enhancing tool, but use masses of energy and water for data centers to produce dubious action figures in our own image. Maybe we need a little help doing better with AI. A little ‘uncanny valley’ would not begin to deal with all of the potential issues, but simply not fully trusting AI output on an implicit level might just help a little bit.

Image credits: Unsplash

Back to Basics for Leaders and Managers

GUEST POST from Robyn Bolton

Imagine that you are the CEO working with your CHRO on a succession plan.  Both the CFO and COO are natural candidates, and both are, on paper, equally qualified and effective.

The CFO distinguishes herself by consistently working with colleagues to find creative solutions to business issues, even if it isn’t the optimal solution financially, and inspiring them with her vision of the future. She attracts top talent and builds strong relationships with investors who trust her strategic judgment. However, she sometimes struggles with day-to-day details and can be inconsistent in her communication with direct reports.

The COO inspires deep loyalty from his team through consistent execution and reliability. People turn down better offers to stay because they trust his systematic approach, flawless delivery, and deep commitment to developing people. However, his vision rarely extends beyond “do things better,” rigidly adhering to established processes and shutting down difficult conversations with peers when change is needed.

Who do you choose?

The COO feels like the safer bet, especially in uncertain times, given his track record of proven execution, loyal teams, and predictable results. The CFO feels riskier: brilliant but inconsistent, visionary but scattered.

It’s not an easy question to answer.

Most people default to “It depends.”

It doesn’t depend.

It doesn’t “depend,” because being CEO is a leadership role and only the CFO demonstrates leadership behaviors. The COO, on the other hand, is a fantastic manager, exactly the kind of person you want and need in the COO role. But he’s not the leader a company needs, no matter how stable or uncertain the environment.

Yet we all struggle with this choice because we’ve made “leadership” and “management” synonyms. Companies no longer have “senior management teams,” they have “senior/executive leadership teams.”  People moving from independent contributor roles to oversee teams are trained in “people leadership,” not “team management” (even though the curriculum is still largely the same).

But leadership and management are two fundamentally different things.

Leader OR Manager?

There are lots of definitions of both leaders and managers, so let’s go back to the “original” distinction as defined by Warren Bennis in his 1989 classic On Becoming a Leader:

Leaders:

  • Do the right things
  • Challenge the status quo
  • Innovate
  • Develop
  • Focus on people
  • Rely on trust
  • Have a long-range perspective
  • Ask what and why
  • Have an eye on the horizon

Managers:

  • Do things right
  • Accept the status quo
  • Administer
  • Maintain
  • Focus on systems and structures
  • Rely on control
  • Have a short-range view
  • Ask how and when
  • Have an eye on the bottom line

In a nutshell: leaders inspire people to create change and pursue a vision while managers control systems to maintain operations and deliver results.

Leaders AND Managers!

Although the roles of leaders and managers are different, it doesn’t mean that the person who fills those roles is capable of only one or the other. I’ve worked with dozens of people who are phenomenal managers AND leaders and they are as inspiring as they are effective.

But not everyone can play both roles and it can be painful, even toxic, when we ask managers to take on leadership roles and vice versa. This is the problem with labeling everything outside of individual contributor roles as “leadership.”

When we designate something as a “people leadership” role and someone does an outstanding job of managing his team, we believe he’s a leader and promote him to a true leadership role (which rarely ends well).  Conversely, when we see someone displaying leadership qualities and promote her into “people leadership,” we may be shocked and disappointed when she struggles to manage as effortlessly as she inspires.

The Bottom Line

Leadership and management aren’t the same thing, but they are both essential to an organization’s success. The key is putting the right people in the right roles and celebrating their unique capabilities and contributions.

Image credit: Unsplash

Sometimes Ancient Wisdom Needs to be Left Behind

GUEST POST from Greg Satell

I recently visited Panama and learned the incredible story of how the indigenous Emberá people there helped to teach jungle survival skills to Apollo mission astronauts. It is a fascinating combination and contrast of ancient wisdom and modern technology, equipping the first men to go to the moon with insights from both realms.

Humans tend to have a natural reverence for old wisdom that is probably woven into our DNA. It stands to reason that people more willing to stick with the tried and true might have a survival advantage over those who were more reckless. Ideas that stand the test of time are, by definition, the ones that worked well enough to be passed on.

Paradoxically, to move forward we need to abandon old ideas. It was only by discarding ancient wisdom that we were able to create the modern world. In much the same way, to move forward now we’ll need to debunk ideas that qualify as expertise today. As in most things, our past can serve as a guide. Here are three old ideas we managed to transcend.

1. Euclid’s Geometry

The basic geometry we learn in grade school, also known as Euclidean geometry, is rooted in axioms observed from the physical world, such as the principle that two parallel lines never intersect. For thousands of years mathematicians built proofs based on those axioms to create new knowledge, such as how to calculate the height of an object. Without these insights, our ability to shape the physical world would be negligible.

In the 19th century, however, men like Gauss, Lobachevsky, Bolyai and Riemann started to build new forms of non-Euclidean geometry based on curved spaces. These were, of course, completely theoretical and of no use in daily life. The universe, as we experience it, doesn’t curve in any appreciable way, which is why police ask us to walk a straight line if they think we’ve been drinking.

But when Einstein started to think about how gravity functioned, he began to suspect that the universe did, in fact, curve over large distances. To make his theory of general relativity work he had to discard the old geometrical thinking and embrace new mathematical concepts. Without those critical tools, he would have been hopelessly stuck.

Much like the astronauts in the Apollo program, we now live in a strange mix of old and new. To travel to Panama, for example, I personally moved through linear space and the old Euclidean axioms worked perfectly well. However, to navigate, I had to use GPS, which must take into account curved spaces for Einstein’s equations to correctly calculate distances between the GPS satellites and points on earth.

2. Aristotle’s Logic

In terms of longevity and impact, only Aristotle’s logic rivals Euclid’s geometry. At the core of Aristotle’s system is the syllogism, which is made up of propositions that consist of two terms (a subject and a predicate). If the propositions in the syllogism are true, then the argument has to be true. This basic notion that conclusions follow premises imbues logical statements with a mathematical rigor.

Yet much like with geometry, scholars began to suspect that there might be something amiss. At first, they noticed minor flaws having to do with a strange paradox in set theory, which arose with sets that are members of themselves. For example, if the barber shaves everyone in town who doesn’t shave himself, then who shaves the barber?

At first, these seemed like strange anomalies, minor exceptions to rules that could be easily explained away. Still, the more scholars tried to close the gaps, the more problems appeared, leading to a foundational crisis. It would only be resolved when a young logician named Kurt Gödel published his incompleteness theorems, which proved that logic, at least as we knew it, is irreparably incomplete: any consistent system rich enough to describe arithmetic contains truths it cannot prove.

In a strange twist, another young mathematician, Alan Turing, built on Gödel’s work to conceive an imaginary machine that would make digital computers possible. In other words, for Silicon Valley engineers to write code that creates logical worlds online, they need machines built on the proof that perfectly logical systems are inherently limited.

Of course, as I write this, I am straddling both universes, trying to build logical sentences on those very same machines.

3. The Miasma Theory of Disease

Before the germ theory of disease took hold in medicine, the miasma theory, the notion that bad air caused disease, was predominant. Again, from a practical perspective this made perfect sense. Harmful pathogens tend to thrive in environments with decaying organic matter that gives off bad smells. So avoiding those areas would promote better health.

Once again, this basic paradigm would begin to break down with a series of incidents. First, a young doctor named Ignaz Semmelweis showed that doctors could prevent infections by washing their hands, which suggested that something besides air carried disease. Later John Snow was able to trace the source of a cholera epidemic to a single water pump.

Perhaps not surprisingly, these findings were initially explained away. Semmelweis failed to present his data properly and was a less than effective advocate for his work. John Snow’s work was statistical, based on correlation rather than causality. A prominent statistician, William Farr, who supported the miasma theory, argued for an alternative explanation.

Still, as doubts grew, more scientists looked for answers. The work of Robert Koch, Joseph Lister and Louis Pasteur led to the germ theory. Later, Alexander Fleming, Howard Florey and Ernst Chain would pioneer the development of antibiotics in the 1940s. That would open the floodgates and money poured into research, creating modern medicine.

Today, we have gone far beyond the germ theory of disease and even lay people understand that disease has myriad causes, including bacteria, viruses and other pathogens, as well as genetic diseases and those caused by strange misfolded proteins known as prions.

To Create The Future, We Need To Break Free Of The Past

If you were a person of sophistication and education in the 19th century, your world view was based on certain axiomatic truths, such as parallel lines never cross, logical propositions are either true or false and “bad airs” made people sick. For the most part, these ideas would have served you well for the challenges you faced in daily life.

Even more importantly, your understanding of these concepts would signal your inclusion and acceptance into a particular tribe, which would confer prestige and status. If you were an architect or engineer, you needed to understand Euclid’s geometric axioms. Aristotle’s rules of logic were essential to every educated profession. Medical doctors were expected to master the nuances of the miasma theory.

To stray from established orthodoxies carries great risk, even now. It is no accident that those who were able to bring about new paradigms, such as Einstein, Turing and John Snow, came from outside the establishment. More recently, people like Benoit Mandelbrot, Jim Allison and Katalin Karikó had to overcome fierce resistance to bring new ways of thinking to finance, cancer immunotherapy and mRNA vaccines respectively.

Today, it’s becoming increasingly clear we need to break with the past. In just over a decade, we’ve been through a crippling financial crisis, a global pandemic, deadly terrorist attacks, and the biggest conflict in Europe since World War II. We need to confront climate change and a growing mental health crisis. Yet it is also clear that we can’t just raze the global order to the ground and start all over again.

So what do we leave in the past and what do we bring with us into the future? Which new lessons do we need to learn and which old ones do we need to unlearn? Perhaps most importantly, what do we need to create anew and what can we rediscover in the ancient?

Throughout history, we have learned that the answer lies not in merely speculating about ideas, but in finding real solutions to problems we face.

— Article courtesy of the Digital Tonto blog
— Image credit: 1 of 950+ FREE quote slides from http://misterinnovation.com

Metaphysics Philosophy

GUEST POST from Geoffrey A. Moore

Philosophy is arguably the most universal of all subjects. And yet, it is one of the least pursued in the liberal arts curriculum. The reason for this, I will claim, is that the entire field was kidnapped by some misguided academics around a century ago, and since then no one has paid the ransom to free it. That’s not OK, and with that in mind, here is a series of four posts that, taken together, constitute an Emancipation Proclamation.

There are four branches of philosophy, and in order of importance they are:

  1. metaphysics,
  2. ethics,
  3. epistemology, and
  4. logic.

This post will address the first of these four, with subsequent posts addressing the remaining three.

Metaphysics is best understood in terms of Merriam-Webster’s definition: “the philosophical study of the ultimate causes and underlying nature of things.” In everyday language, it answers the most fundamental kinds of philosophical questions:

  • What’s happening?
  • What is going on?
  • Where and how do we fit in?
  • In other words, what kind of a hand have we been dealt?

Metaphysics, however, is not normally conceived in everyday terms. Here is what the Oxford English Dictionary (OED) has to say about it in its lead definition:

That branch of speculative inquiry which treats of the first principles of things, including such concepts as being, substance, essence, time, space, cause, identity, etc.; theoretical philosophy as the ultimate science of Being and Knowing.

The problem is that concepts like substance and essence are not only intimidatingly abstract, they have no meaning in modern cosmology. That is, they are artifacts of an earlier era when things like the atomic nature of matter and the electromagnetic nature of form were simply not understood. Today, they are just verbiage.

But wait, things get worse. Here is the OED in its third sense of the word:

[Used by some followers of positivist, linguistic, or logical philosophy] Concepts of an abstract or speculative nature which are not verifiable by logical or linguistic methods.

The Oxford Companion to the Mind sheds further light on this:

The pejorative sense of ‘obscure’ and ‘over-speculative’ is recent, especially following attempts by A.J. Ayer and others to show that metaphysics is strictly nonsense.

Now, it’s not hard to understand what Ayer and others were trying to get at, but do we really want to say that the philosophical study of the ultimate causes and underlying nature of things is strictly nonsense? Instead, let’s just say that there is a bunch of unsubstantiated nonsense that calls itself metaphysics but that isn’t really metaphysics at all. We can park that stuff with magic crystals and angels on the head of a pin and get back to what real metaphysics needs to address—what exactly is the universe, what is life, what is consciousness, and how do they all work together?

The best platform for doing so, in my view, is the work done in recent decades on complexity and emergence, and that is what organizes the first two-thirds of The Infinite Staircase. Metaphysics, it turns out, needs to be understood in terms of strata, and then within those strata, levels or stair steps. The three strata that make the most sense of things are as follows:

  1. Material reality as described by the sciences of physics, chemistry, and biology, or what I called the metaphysics of entropy. This explains all emergence up to the entrance of consciousness.
  2. Psychological and social reality, as explained by the social sciences, or what I called the metaphysics of Darwinism, which builds the transition from a world of mindless matter up to one of matter-less mind, covering the intermediating emergence of desire, consciousness, values, and culture.
  3. Symbolic reality, as explained by the humanities, or what I called the metaphysics of memes, which begins with the introduction of language that in turn enables the emergence of humanity’s two most powerful problem-solving tools, narrative and analytics, culminating in the emergence of theory, ideally a theory of everything, which is, after all, what metaphysics promised to be in the first place.

The key point here is that every step in this metaphysical journey is grounded in verifiable scholarship ranging over multiple centuries and involving every department in a liberal arts faculty—except, ironically, the philosophy department which is holed up somewhere on campus, held hostage by forces to be discussed in later blogs.

That’s what I think. What do you think?

Image Credit: Unsplash







The Most Challenging Obstacles to Achieving Artificial General Intelligence

The Unclimbed Peaks

GUEST POST from Art Inteligencia

The pace of artificial intelligence (AI) development over the last decade has been nothing short of breathtaking. From generating photo-realistic images to holding surprisingly coherent conversations, the progress has led many to believe that the holy grail of artificial intelligence — Artificial General Intelligence (AGI) — is just around the corner. AGI is defined as a hypothetical AI that possesses the ability to understand, learn, and apply its intelligence to solve any problem, much like a human. As a human-centered change and innovation thought leader, I am here to argue that while we’ve made incredible strides, the path to AGI is not a straight line. It is a rugged, mountainous journey filled with profound, unclimbed peaks that require us to solve not just technological puzzles, but also fundamental questions about consciousness, creativity, and common sense.

We are currently operating in the realm of Narrow AI, where systems are exceptionally good at a single task, like playing chess or driving a car. The leap from Narrow AI to AGI is not just an incremental improvement; it’s a quantum leap. It’s the difference between a tool that can hammer a nail perfectly and a person who can understand why a house is being built, design its blueprints, and manage the entire process while also making a sandwich and comforting a child. The true obstacles to AGI are not merely computational; they are conceptual and philosophical. They require us to innovate in a way that goes beyond brute-force data processing and into the realm of true understanding.

The Three Grand Obstacles to AGI

While there are many technical hurdles, I believe the path to AGI is blocked by three foundational challenges:

  • 1. The Problem of Common Sense and Context: Narrow AI lacks common sense, a quality that is effortless for humans but incredibly difficult to code. For example, an AI can process billions of images of cars, but it doesn’t “know” that a car needs fuel or that a flat tire means it can’t drive. Common sense is a vast, interconnected web of implicit knowledge about how the world works, and it’s something we’ve yet to find a way to replicate.
  • 2. The Challenge of Causal Reasoning: Current AI models are masterful at recognizing patterns and correlations in data. They can tell you that when event A happens, event B is likely to follow. However, they struggle with causal reasoning — understanding why A causes B. True intelligence involves understanding cause-and-effect relationships, a critical component for true problem-solving, planning, and adapting to novel situations.
  • 3. The Final Frontier of Human-Like Creativity & Understanding: Can an AI truly create something new and original? Can it experience “aha!” moments of insight? Current models can generate incredibly creative outputs based on patterns they’ve seen, but do they understand the deeper meaning or emotional weight of what they create? Achieving AGI requires us to cross the final chasm: imbuing a machine with a form of human-like creativity, insight, and self-awareness.
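The gap between correlation and causal reasoning described in the second obstacle can be made concrete with a small simulation (a hypothetical illustration with made-up numbers, not drawn from any real AI system): a hidden confounder makes two variables correlate strongly, yet directly intervening on one leaves the other unchanged.

```python
import random

random.seed(0)

# A hidden confounder Z drives both A and B, so A and B correlate
# strongly even though neither causes the other.
def observe(n=10_000):
    data = []
    for _ in range(n):
        z = random.random()            # hidden common cause
        a = z + random.gauss(0, 0.1)   # A depends on Z
        b = z + random.gauss(0, 0.1)   # B depends on Z, not on A
        data.append((a, b))
    return data

def corr(pairs):
    n = len(pairs)
    ma = sum(a for a, _ in pairs) / n
    mb = sum(b for _, b in pairs) / n
    cov = sum((a - ma) * (b - mb) for a, b in pairs) / n
    va = sum((a - ma) ** 2 for a, _ in pairs) / n
    vb = sum((b - mb) ** 2 for _, b in pairs) / n
    return cov / (va * vb) ** 0.5

def intervene(n=10_000, forced_a=5.0):
    # do(A = forced_a): forcing A to any value leaves B's distribution
    # unchanged, because B depends only on the hidden Z.
    bs = []
    for _ in range(n):
        z = random.random()
        bs.append(z + random.gauss(0, 0.1))
    return sum(bs) / n

print(corr(observe()))   # strong positive correlation (~0.9)
print(intervene())       # mean of B (~0.5), unaffected by forcing A
```

A pattern-matching model sees only the observational correlation; a causally aware system must also predict what happens under intervention, which is exactly what the correlation cannot tell you.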

“We are excellent at building digital brains, but we are still far from replicating the human mind. The real work isn’t in building bigger models; it’s in cracking the code of common sense and consciousness.”


Case Study 1: The Fight for Causal AI (Causaly vs. Traditional Models)

The Challenge:

In scientific research, especially in fields like drug discovery, identifying causal relationships is everything. Traditional AI models can analyze a massive database of scientific papers and tell a researcher that “Drug X is often mentioned alongside Disease Y.” However, they cannot definitively state whether Drug X *causes* a certain effect on Disease Y, or if the relationship is just a correlation. This lack of causal understanding leads to a time-consuming and expensive process of manual verification and experimentation.

The Human-Centered Innovation:

Companies like Causaly are at the forefront of tackling this problem. Instead of relying solely on a brute-force approach to pattern recognition, Causaly’s platform is designed to identify and extract causal relationships from biomedical literature. It uses a different kind of model to recognize phrases and structures that denote cause and effect, such as “is associated with,” “induces,” or “results in.” This allows researchers to get a more nuanced, and scientifically useful, view of the data.
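The cue-phrase idea described above can be sketched in a few lines. To be clear, the phrases, sentences, and function below are illustrative assumptions, not Causaly’s actual system, which uses trained models far beyond simple pattern matching; the sketch only shows how causal cues can be separated from merely associative ones.

```python
import re

# Hypothetical cue phrases that denote cause-and-effect vs. association.
CAUSAL_CUES = ["induces", "results in", "leads to", "causes"]
ASSOC_CUES = ["is associated with", "correlates with"]

def extract_relations(sentence):
    """Return (subject, relation_type, object) triples found in a sentence."""
    relations = []
    cues = [(c, "causal") for c in CAUSAL_CUES] + \
           [(c, "associative") for c in ASSOC_CUES]
    for cue, rel in cues:
        m = re.search(rf"(.+?)\s+{re.escape(cue)}\s+(.+)", sentence)
        if m:
            relations.append((m.group(1).strip(), rel,
                              m.group(2).rstrip(". ").strip()))
    return relations

print(extract_relations("Drug X induces apoptosis"))
# → [('Drug X', 'causal', 'apoptosis')]
print(extract_relations("Drug X is associated with Disease Y."))
# → [('Drug X', 'associative', 'Disease Y')]
```

The payoff is in the relation label: an “associative” triple tells a researcher to keep investigating, while a “causal” triple is a candidate for hypothesis formation.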

The Result:

By focusing on the causal reasoning obstacle, Causaly has enabled researchers to accelerate the drug discovery process. It helps scientists filter through the noise of correlation to find genuine causal links, allowing them to formulate hypotheses and design experiments with a much higher probability of success. This is not about creating AGI, but about solving one of its core components, proving that a human-centered approach to a single, deep problem can unlock immense value. They are not just making research faster; they are making it smarter and more focused on finding the *why*.


Case Study 2: The Push for Common Sense (OpenAI’s Reinforcement Learning Efforts)

The Challenge:

As impressive as large language models (LLMs) are, they can still produce nonsensical or factually incorrect information, a phenomenon known as “hallucination.” This is a direct result of their lack of common sense. For instance, an LLM might confidently tell you that you can use a toaster to take a bath, because it has learned patterns of words in sentences, not the underlying physics and danger of the real world.

The Human-Centered Innovation:

OpenAI, a leader in AI research, has been actively tackling this through a method called Reinforcement Learning from Human Feedback (RLHF). This is a crucial, human-centered step. In RLHF, human trainers provide feedback to the AI model, essentially teaching it what is helpful, honest, and harmless. The model is rewarded for generating responses that align with human values and common sense, and penalized for those that do not. This process is an attempt to inject a form of implicit, human-like understanding into the model that it cannot learn from raw data alone.
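The preference-learning core of RLHF can be illustrated with a toy reward model. This is a deliberately simplified sketch under the standard Bradley-Terry assumption; OpenAI’s reward models are large neural networks, and the linear scorer and feature vectors below are hypothetical.

```python
import math
import random

random.seed(1)

def score(w, x):
    """Linear reward model: a stand-in for a neural reward network."""
    return sum(wi * xi for wi, xi in zip(w, x))

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(pairs, dim, lr=0.5, epochs=200):
    """Fit weights so preferred responses outscore rejected ones.

    pairs: list of (preferred_features, rejected_features), i.e. human
    feedback saying which of two responses was better. Bradley-Terry:
    P(a beats b) = sigmoid(score(a) - score(b)).
    """
    w = [0.0] * dim
    for _ in range(epochs):
        for xp, xr in pairs:
            # Gradient ascent on log P(preferred beats rejected).
            p = sigmoid(score(w, xp) - score(w, xr))
            for i in range(dim):
                w[i] += lr * (1.0 - p) * (xp[i] - xr[i])
    return w

# Hypothetical features: [helpfulness, verbosity].
# Trainers consistently prefer helpful, concise answers.
pairs = [([1.0, 0.2], [0.2, 0.9]),
         ([0.9, 0.1], [0.3, 0.8]),
         ([0.8, 0.3], [0.1, 0.7])]
w = train(pairs, dim=2)
print(score(w, [1.0, 0.2]) > score(w, [0.2, 0.9]))  # prints True
```

Once trained, the reward model stands in for the human trainers at scale: the language model is then optimized to produce responses the reward model scores highly, which is how a handful of human judgments gets injected into the model’s behavior.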

The Result:

RLHF has been a game-changer for improving the safety, coherence, and usefulness of models like ChatGPT. While it’s not a complete solution to the common sense problem, it represents a significant step forward. It demonstrates that the path to a more “intelligent” AI isn’t just about scaling up data and compute; it’s about systematically incorporating a human-centric layer of guidance and values. It’s a pragmatic recognition that humans must be deeply involved in shaping the AI’s understanding of the world, serving as the common sense compass for the machine.


Conclusion: AGI as a Human-Led Journey

The quest for AGI is perhaps the greatest scientific and engineering challenge of our time. While we’ve climbed the foothills of narrow intelligence, the true peaks of common sense, causal reasoning, and human-like creativity remain unscaled. These are not problems that can be solved with bigger servers or more data alone. They require fundamental, human-centered innovation.

The companies and researchers who will lead the way are not just those with the most computing power, but those who are the most creative, empathetic, and philosophically minded. They will be the ones who understand that AGI is not just about building a smart machine; it’s about building a machine that understands the world the way we do, with all its nuances, complexities, and unspoken rules. The path to AGI is a collaborative, human-led journey, and by solving its core challenges, we will not only create more intelligent machines but also gain a deeper understanding of our own intelligence in the process.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credit: Dall-E







Growth is Not the Answer

GUEST POST from Mike Shipulski

Most companies have growth objectives – make more, sell more and generate more profits. Increase profit margin, sell into new markets and twist our products into new revenue. Good news for the stock price, good news for annual raises and plenty of money to buy the things that will help us grow next year. But it’s not good for the people who do the work.

To increase sales the same sales folks will have to drive more, call more and do more demos. Ten percent more work for three percent more compensation. Who really benefits here? The worker who delivers ten percent more or the company that pays them only three percent more? Pretty clear to me it’s all about the company and not about the people.

Increasing the number of units made implies that there can be no increase in the number of people required to make them. To increase throughput without increasing headcount, the production floor will have less time for lunch, less time for improving their skills and less time to go to the bathroom. Sure, they can do Lean projects to eliminate waste, as long as they don’t miss their daily quota. And sure, they can help with Six Sigma projects to reduce variation, as long as they don’t miss TAKT time. Who benefits more – the people or the company?

Increased profit margin (or profit percentage) is the worst offender. There are only two ways to improve the metric – sell it for more or make it for less. And even better than that is to sell it for more AND make it for less. No one can escape this metric. The sales team must meet with more customers; the marketing team must work doubly hard to define and communicate the value proposition; the engineering staff must reduce the time to launch the product and make it perform better than their best work; and everyone else must do more with less or face the chopping block.

In truth, corporate growth is the fundamental driver behind global warming, reduced life expectancy in the US and the ridiculous increase in the cost of healthcare. Growth requires more products and more products require more material mined, pumped or clear-cut from the planet. Growth puts immense pressure on the people doing the work and increases their stress level. And when they can’t deliver, their deep sense of helplessness and inadequacy causes them to kill themselves. And healthcare costs increase because the companies within (and insuring) the system need to make more profit. Who benefits here? The people in our community? The people doing the work? The planet? Or the companies?

What if we decided that companies could not grow? What if instead companies paid dividends to the people who do the work based on the profit the company makes? With constant output, wouldn’t everyone benefit year-on-year?

What if we decided output couldn’t grow? What if instead, as productivity increased, companies required people to work fewer hours? What if everyone could make the same number of products in seven hours and went home an hour early, working seven and getting paid for eight? Would everyone be better off? Wouldn’t the planet be better off?

What if we decided the objective of companies was to employ more people and give them a sense of purpose and give meaning to their lives? What if we used the profit created by productivity improvements to employ more people? Wouldn’t our communities benefit when more people have good jobs? Wouldn’t people be happier because they can make a contribution to their community? Wouldn’t there be less stress and fewer suicides when parents have enough money to feed their kids and buy them clothes? Wouldn’t everyone benefit? Wouldn’t the planet benefit?

Year-on-year growth is a fallacy. Year-on-year growth stresses the planet and the people doing the work. Year-on-year growth is good for no one except the companies demanding year-on-year growth.

The planet’s resources are finite; people’s ability to do work is finite; and the stress level people can tolerate is finite. Why not recognize these realities?

And why not figure out how to structure companies in a way that benefits the owners of the company, the people doing the work, the community where the work is done and the planet?

Image credit: Dall-E







The Tricky Business of Tariffs

GUEST POST from Shep Hyken

Tariffs are creating havoc and panic for both customers and businesses. Depending on what you read, the cost-of-living increase for the average consumer can be thousands of dollars a year. And it’s the same for business, but often at a much higher cost. Anything a business purchases to run its day-to-day operations is potentially exposed to higher prices due to tariffs. Whatever businesses buy—supplies, inventory, equipment and more—when it costs them more, that cost is passed on to their customers.

This isn’t the first time there has been “tariff panic.” As recently as 2018, new tariffs prompted similar concerns. I wrote a Forbes article about an e-bike company that was forced to raise its prices due to a 25% import tariff. The company was open about the reasons for the price increase and embraced the problem rather than becoming a victim of it. Here are some ways to manage the impact of tariffs:

  • Be Transparent: Everyone may know about the tariffs, but explaining how they are impacting costs will help justify the price increase. In other words, don’t hide the fact that tariffs are impacting your costs.
  • Partner with Vendors: Ask vendors to work with you on a solution to lower costs that won’t hit their bottom lines. If you buy from a vendor every month, maybe it’s less expensive to buy the same amount but ship quarterly instead of monthly. Work with them to find creative ways to reduce costs. This can benefit everyone.
  • Improve Efficiency to Offset Costs: If you’ve thought about a way to improve a process or efficiency but haven’t acted on it, now may be the perfect time to do so. Sometimes being forced to do something can work in your favor. And be sure to share what you’re changing to help reduce costs. Customers may appreciate you even more.
  • Add Value Instead of Just Raising Prices: When price increases are unavoidable, find a way to justify the higher cost. It could include anything—enhanced customer service, a loyalty rewards program, a special promotion and more. Customers may accept paying more if they feel they are getting more value in return.

What NOT to do:

  • Don’t Take Advantage of Customer Panic: As I write this article, people are going to car dealerships to buy cars before the prices increase and finding that the dealers are selling above the retail sticker price because of the demand. Do you think a customer will forget they were “gouged” by a company taking advantage of them during tough times? (That’s a rhetorical question, but just in case you don’t know the answer … They won’t!)
  • Don’t Say, “It’s Not My Fault”: Even when price increases are beyond your control, don’t be defensive. Defensiveness can give the impression of a lack of confidence and control, which can erode the trust you have with your customers.
  • Don’t Say, “It’s the Same Everywhere You Go”: If the customer understands tariffs, they already know this. Stating you have no choice isn’t going to make the customer feel good. Go back to the list of what you can do and find a way to avoid this and the “it’s not my fault” response.

Customers want to hear what you’re doing to help them. They also like to be educated. Knowledge can give the customer a sense of control. Demonstrating genuine concern for the situation and sharing what you’re doing to minimize the impact of tariff-related price increases builds trust that will pay dividends long after the current economic challenges have passed.

Image Credits: Unsplash, Shep Hyken

This article originally appeared on Forbes.com







The Crisis Innovation Trap

Why Proactive Innovation Wins

LAST UPDATED: September 3, 2025 at 12:00PM

by Braden Kelley and Art Inteligencia

In the narrative of business, we often romanticize the idea of “crisis innovation.” The sudden, high-stakes moment when a company, backed against a wall, unleashes a burst of creativity to survive. The pandemic, for instance, forced countless businesses to pivot their models overnight. While this showcases incredible human resilience, it also reveals a dangerous and costly trap: the belief that innovation is something you turn on only when there’s an emergency. As a human-centered change and innovation thought leader, I’ve seen firsthand that relying on crisis as a catalyst is a recipe for short-term fixes and long-term decline. True, sustainable innovation is not a reaction; it’s a proactive, continuous discipline.

The problem with waiting for a crisis is that by the time it hits, you’re operating from a position of weakness. You’re making decisions under immense pressure, with limited resources, and with a narrow focus on survival. This reactive approach rarely leads to truly transformative breakthroughs. Instead, it produces incremental changes and tactical adaptations—often at a steep price in terms of burnout, strategic coherence, and missed opportunities. The most successful organizations don’t innovate to escape a crisis; they innovate continuously to prevent one from ever happening.

The Cost of Crisis-Driven Innovation

Relying on crisis as your innovation driver comes with significant hidden costs:

  • Reactive vs. Strategic: Crisis innovation is inherently reactive. You’re fixing a symptom, not addressing the root cause. This prevents you from engaging in the deep, strategic thinking necessary for true market disruption.
  • Loss of Foresight: When you’re in a crisis, all attention is on the immediate threat. This short-term focus blinds you to emerging trends, shifting customer needs, and new market opportunities that could have been identified and acted upon proactively.
  • Burnout and Exhaustion: Innovation requires creative energy. Forcing your teams into a constant state of emergency to innovate leads to rapid burnout, high turnover, and a culture of fear, not creativity.
  • Suboptimal Outcomes: The solutions developed in a crisis are often rushed, inadequately tested, and sub-optimized. They are designed to solve an immediate problem, not to create a lasting competitive advantage.

“Crisis innovation is a sprint for survival. Proactive innovation is a marathon for market leadership. You can’t win a marathon by only practicing sprints when the gun goes off.”

Building a Culture of Proactive, Human-Centered Innovation

The alternative to the crisis innovation trap is to embed innovation into your organization’s DNA. This means creating a culture where curiosity, experimentation, and a deep understanding of human needs are constant, not sporadic. It’s about empowering your people to solve problems and create value every single day.

  1. Embrace Psychological Safety: Create an environment where employees feel safe to share half-formed ideas, question assumptions, and even fail. This is the single most important ingredient for continuous innovation.
  2. Allocate Dedicated Resources: Don’t expect innovation to happen in people’s spare time. Set aside dedicated time, budget, and talent for exploratory projects and initiatives that don’t have an immediate ROI.
  3. Focus on Human-Centered Design: Continuously engage with your customers and employees to understand their frustrations and aspirations. True innovation comes from solving real human problems, not just from internal brainstorming.
  4. Reward Curiosity, Not Just Results: Celebrate learning, even from failures. Recognize teams for their efforts in exploring new ideas and for the insights they gain, not just for the products they successfully launch.

Case Study 1: Blockbuster vs. Netflix – The Foresight Gap

The Challenge:

In the late 1990s, Blockbuster was the undisputed king of home video rentals. It had a massive physical footprint, brand recognition, and a highly profitable business model based on late fees. The crisis of digital disruption and streaming was not a sudden event; it was a slow-moving signal on the horizon.

The Reactive Approach (Blockbuster):

Blockbuster’s management was aware of the shift to digital, but they largely viewed it as a distant threat. They were so profitable from their existing model that they had no incentive to proactively innovate. When Netflix began gaining traction with its subscription-based, DVD-by-mail service, Blockbuster’s response was a reactive, half-hearted attempt to mimic it. They launched an online service but failed to integrate it with their core business, and their culture remained focused on the physical store model. They only truly panicked and began a desperate, large-scale innovation effort when it was already too late and the market had irreversibly shifted to streaming.

The Result:

Blockbuster’s crisis-driven innovation was a spectacular failure. By the time they were forced to act, they lacked the necessary strategic coherence, internal alignment, and cultural agility to compete. They didn’t innovate to get ahead; they innovated to survive, and they failed. They went from market leader to bankruptcy, a powerful lesson in the dangers of waiting for a crisis to force your hand.


Case Study 2: Lego’s Near-Death and Subsequent Reinvention

The Challenge:

In the early 2000s, Lego was on the brink of bankruptcy. The brand, once a global icon, had become a sprawling, unfocused company that was losing relevance with children increasingly drawn to video games and digital entertainment. The company’s crisis was not a sudden external shock, but a slow, painful internal decline caused by a lack of proactive innovation and a departure from its core values. They had innovated, but in a scattered, unfocused way that diluted the brand.

The Proactive Turnaround (Lego):

Lego’s new leadership realized that a reactive, last-ditch effort wouldn’t save them. They saw the crisis as a wake-up call to fundamentally reinvent how they innovate. Their strategy was not just to survive but to thrive by returning to a proactive, human-centered approach. They went back to their core product, the simple plastic brick, and focused on deeply understanding what their customers—both children and adult fans—wanted. They launched several initiatives:

  • Re-focus on the Core: They trimmed down their product lines and doubled down on what made Lego special—creativity and building.
  • Embracing the Community: They proactively engaged with their most passionate fans, the “AFOLs” (Adult Fans of Lego), and co-created new products like the highly successful Lego Architecture and Ideas series. This wasn’t a reaction to a trend; it was a strategic partnership.
  • Thoughtful Digital Integration: Instead of panicking and launching a thousand digital products, they carefully integrated their physical and digital worlds with games like Lego Star Wars and movies like The Lego Movie. These weren’t rushed reactions; they were part of a long-term, strategic vision.

The Result:

Lego’s transformation from a company on the brink to a global powerhouse is a powerful example of the superiority of proactive innovation. By not just reacting to their crisis but using it as a catalyst to build a continuous, human-centered innovation engine, they not only survived but flourished. They turned a painful crisis into a foundation for a new era of growth, proving that the best time to innovate is always, not just when you have no other choice.


The Eight I’s of Infinite Innovation

Braden Kelley’s Eight I’s of Infinite Innovation provides a comprehensive framework for organizations seeking to embed continuous innovation into their DNA. The model starts with Ideation, the spark of new concepts, which must be followed by Inspiration—connecting those ideas to a compelling, human-centered vision. This vision is refined through Investigation, a process of deeply understanding customer needs and market dynamics, leading to the Iteration of prototypes and solutions based on real-world feedback. The framework then moves from development to delivery with Implementation, the critical step of bringing a viable product to market. This is not the end, however; it’s a feedback loop that requires Invention of new business models, a constant process of Improvement based on outcomes, and finally, the cultivation of an Innovation culture where the cycle can repeat infinitely. Each ‘I’ builds upon the last, creating a holistic and sustainable engine for growth.

Conclusion: The Time to Innovate is Now

The notion of “crisis innovation” is seductive because it offers a heroic narrative. But behind every such story is a cautionary tale of a company that let a problem fester for far too long. The most enduring, profitable, and relevant organizations don’t wait for a burning platform to jump; they are constantly building new platforms. They have embedded a culture of continuous, proactive innovation driven by a deep understanding of human needs. They innovate when times are good so they are prepared when times are tough.

The time to innovate is not when your stock price plummets or your competitor launches a new product. The time to innovate is now, and always. By making innovation a fundamental part of your business, you ensure your organization’s longevity and its ability to not just survive the future, but to shape it.

Image credit: Pixabay

Content Authenticity Statement: The topic area and the key elements to focus on were decisions made by Braden Kelley, with help from Google Gemini to shape the article and create the illustrative case studies.
