Category Archives: Technology

Technology Pushing Us into a New Ethical Universe


GUEST POST from Greg Satell

We take it for granted that we’re supposed to act ethically and, usually, that seems pretty simple. Don’t lie, cheat or steal, don’t hurt anybody on purpose and act with good intentions. In some professions, like law or medicine, the issues are somewhat more complex, and practitioners are trained to make good decisions.

Yet ethics in the more classical sense isn’t so much about doing what you know is right, but thinking seriously about what the right thing is. Unlike the classic “ten commandments” type of morality, there are many situations that arise in which determining the right action to take is far from obvious.

Today, as our technology becomes vastly more powerful and complex, ethical issues are increasingly rising to the fore. Over the next decade we will have to build some consensus on issues like what accountability a machine should have and to what extent we should alter the nature of life. The answers are far from clear-cut, but we desperately need to find them.

The Responsibility of Agency

For decades intellectuals have pondered an ethical dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do that, one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.
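
To make "explicitly encoded" concrete, here is a deliberately toy sketch of what a harm-minimizing decision rule could look like in code. Everything in it, the function name, the inputs, and the policy itself, is hypothetical; real autonomous-vehicle software is vastly more complex, and whether such a rule is even the right one to encode is precisely the ethical question at stake.

```python
# A toy harm-minimization rule, not a real autonomous-vehicle policy.
# The whole point of the trolley problem is that reasonable people
# disagree about whether this rule should be encoded at all.

def choose_action(harm_if_stay: int, harm_if_swerve: int) -> str:
    """Pick whichever course of action puts fewer people at risk."""
    return "stay" if harm_if_stay <= harm_if_swerve else "swerve"

# Five people on the current path, one person on the alternative:
print(choose_action(harm_if_stay=5, harm_if_swerve=1))  # prints "swerve"
```

The unsettling part is not the code, which is trivial, but that writing it forces someone to commit, in advance, to an answer philosophers have debated for decades.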

That’s just the start. It’s become increasingly clear that data bias can vastly distort decisions about everything from whether we are admitted to a school, get a job or even go to jail. Still, we’ve yet to achieve any real clarity about who should be held accountable for decisions an algorithm makes.
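
Part of what makes accountability difficult is that bias is often easy to measure but hard to attribute. As a purely illustrative sketch (the groups and decisions below are invented), a first step in auditing an algorithm is simply comparing its approval rates across groups:

```python
# A minimal audit for one symptom of data bias: an algorithm approving
# one group at a sharply different rate than another. The sample
# decisions below are invented purely for illustration.

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> rate per group."""
    totals, approved = {}, {}
    for group, ok in decisions:
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + int(ok)
    return {g: approved[g] / totals[g] for g in totals}

sample = [("A", True), ("A", True), ("A", False),
          ("B", True), ("B", False), ("B", False)]
rates = approval_rates(sample)
print({g: round(r, 2) for g, r in rates.items()})  # {'A': 0.67, 'B': 0.33}
```

A gap like this doesn't by itself prove the algorithm is unfair, but it is exactly the kind of measurable evidence a real accountability process would need to examine.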

As we move forward, we need to give serious thought to the responsibility of agency. Who’s responsible for the decisions a machine makes? What should guide those decisions? What recourse should those affected by a machine’s decision have? These are no longer theoretical debates, but practical problems that need to be solved.

Evaluating Tradeoffs

“Now I am become Death, the destroyer of worlds,” said J. Robert Oppenheimer, quoting the Bhagavad Gita, upon witnessing the world’s first nuclear explosion as it shook the plains of New Mexico. It was clear that we had crossed a Rubicon. There was no turning back, and Oppenheimer, as the leader of the project, felt an enormous sense of responsibility.

Yet the specter of nuclear Armageddon was only part of the story. In the decades that followed, nuclear medicine saved thousands, if not millions of lives. Mildly radioactive isotopes, which allow us to track molecules as they travel through a biological system, have also been a boon for medical research.

The truth is that every significant advancement has the potential for both harm and good. Consider CRISPR, the gene editing technology that vastly accelerates our ability to alter DNA. It has the potential to cure terrible diseases such as cancer and multiple sclerosis, but it also raises troubling issues such as biohacking and designer babies.

In the case of nuclear technology many scientists, including Oppenheimer, became activists. They actively engaged with the wider public, including politicians, intellectuals and the media to raise awareness about the very real dangers of nuclear technology and work towards practical solutions.

Today, we need similar engagement between people who create technology and the public square to explore the implications of technologies like AI and CRISPR, but it has scarcely begun. That’s a real problem.

Building A Consensus Based on Transparency

It’s easy to paint pictures of technology going haywire. However, when you take a closer look, the problem isn’t so much with technological advancement as with ourselves. For example, the recent scandals involving Facebook were not about issues inherent to social media websites, but had more to do with an appalling breach of trust and lack of transparency. The company has paid dearly for it and those costs will most likely continue to pile up.

It doesn’t have to be that way. Consider the case of Paul Berg, a pioneer in the creation of recombinant DNA, for which he won the Nobel Prize. Unlike Zuckerberg, he recognized the gravity of the Pandora’s box he had opened and convened the Asilomar Conference to discuss the dangers, which resulted in the Berg Letter that called for a moratorium on the riskiest experiments until the implications were better understood.

In her book, A Crack in Creation, Jennifer Doudna, who made the pivotal discovery for CRISPR gene editing, points out that a key aspect of the Asilomar conference was that it included not only scientists, but also lawyers, government officials and media. It was the dialogue between a diverse set of stakeholders, and the sense of transparency it produced, that helped the field advance.

The philosopher Martin Heidegger argued that technological advancement is a process of revealing and building. We can’t control what we reveal through exploration and discovery, but we can—and should—be wise about what we build. If you just “move fast and break things,” don’t be surprised if you break something important.

Meeting New Standards

In Homo Deus, Yuval Noah Harari writes that the best reason to learn history is “not in order to predict, but to free yourself of the past and imagine alternative destinies.” As we have already seen, when we rush into technologies like nuclear power, we create problems like Chernobyl and Fukushima and reduce technology’s potential.

The issues we will have to grasp over the next few decades will be far more complex and consequential than anything we have faced before. Nuclear technology, while horrifying in its potential for destruction, requires a tremendous amount of scientific expertise to produce. Even today, it remains confined to governments and large institutions.

New technologies, such as artificial intelligence and gene editing are far more accessible. Anybody with a modicum of expertise can go online and download powerful algorithms for free. High school kids can order CRISPR kits for a few hundred dollars and modify genes. We need to employ far better judgment than organizations like Facebook and Google have shown in the recent past.

Some seem to grasp this. Most of the major tech companies have joined with the ACLU, UNICEF and other stakeholders to form the Partnership on AI, a forum for developing sensible standards for artificial intelligence. Salesforce recently hired a Chief Ethical and Human Use Officer. Jennifer Doudna has begun a similar process for CRISPR at the Innovative Genomics Institute.

These are important developments, but they are little more than first steps. We need a more public dialogue about the technologies we are building to achieve some kind of consensus on what the risks are and what we as a society are willing to accept. If not, the consequences, financial and otherwise, may be catastrophic.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

4 Key Aspects of Robots Taking Our Jobs


GUEST POST from Greg Satell

A 2019 study by the Brookings Institution found that over 61% of jobs will be affected by automation. That comes on the heels of a 2017 report from the McKinsey Global Institute that found that 51% of total working hours and $2.7 trillion in wages are highly susceptible to automation, and a 2013 Oxford study that found 47% of jobs will be replaced.

The future looks pretty grim indeed until you start looking at jobs that have already been automated. Fly-by-wire was introduced in 1968, but today we’re facing a massive pilot shortage. The number of bank tellers has doubled since ATMs were introduced. Overall, the US is facing a massive labor shortage.

In fact, although the workforce has doubled since 1970, labor participation rates have risen by more than 10% since then. Everywhere you look, as automation increases, so does the demand for skilled humans. So the challenge ahead isn’t so much finding work for humans as preparing humans to do the types of work that will be in demand in the years to come.

1. Automation Doesn’t Replace Jobs, It Replaces Tasks

To understand the disconnect between all the studies that seem to be predicting the elimination of jobs and the increasingly dire labor shortage, it helps to look a little deeper at what those studies are actually measuring. The truth is that they don’t actually look at the rate of jobs being created or lost, but tasks that are being automated. That’s something very different.

To understand why, consider the legal industry, which is rapidly being automated. Basic activities like legal discovery are now largely done by algorithms. Services like LegalZoom automate basic filings. There are even artificial intelligence systems that can predict the outcome of a court case better than a human can.

So, it shouldn’t be surprising that many experts predict gloomy days ahead for lawyers. Yet the number of lawyers in the US has increased by 15% since 2008 and it’s not hard to see why. People don’t hire lawyers for their ability to hire cheap associates to do discovery, file basic documents or even, for the most part, to go to trial. In large part, they want someone they can trust to advise them.

In a similar way we don’t expect bank tellers to process transactions anymore, but to help us with things that we can’t do at an ATM. As the retail sector becomes more automated, demand for e-commerce workers is booming. Go to a highly automated Apple Store and you’ll find far more workers than at a traditional store, but we expect them to do more than just ring us up.

2. When Tasks Become Automated, They Become Commoditized

Let’s think back to what a traditional bank looked like before ATMs or the Internet. In a typical branch, you would see a long row of tellers there to process deposits and withdrawals. Often, especially on Fridays when workers typically got paid, you would expect to see long lines of people waiting to be served.

In those days, tellers needed to process transactions quickly or the people waiting in line would get annoyed. Good service was fast service. If a bank had slow tellers, people would leave and go to one where the lines moved faster. So training tellers to process transactions efficiently was a key competitive trait.

Today, however, nobody waits in line at the bank because processing transactions is highly automated. Our paychecks are usually sent electronically. We can pay bills online and get cash from an ATM. What’s more, these aren’t considered competitive traits, but commodity services. We expect them as a basic requisite of doing business.

In the same way, we don’t expect real estate agents to find us a house or travel agents to book us a flight or find us a hotel room. These are things that we used to happily pay for, but today we expect something more.

3. When Things Become Commodities, Value Shifts Elsewhere

In 1900, 30 million people in the United States were farmers, but by 1990 that number had fallen to under 3 million even as the population more than tripled. So, in a manner of speaking, 90% of American agriculture workers lost their jobs, mostly due to automation. Still, the twentieth century became an era of unprecedented prosperity.

We’re in the midst of a similar transformation today. Just as our ancestors toiled in the fields, many of us today spend much of our time doing rote, routine tasks. However, as two economists from MIT explain in a paper, the jobs of the future are not white collar or blue collar, but those focused on non-routine tasks, especially those that involve other humans.

Consider the case of bookstores. Clearly, by automating the book buying process, Amazon disrupted superstore book retailers like Barnes & Noble and Borders. Borders filed for bankruptcy in 2011 and was liquidated later that same year. Barnes & Noble managed to survive but has been declining for years.

Yet a study at Harvard Business School found that small independent bookstores are thriving by adding value elsewhere, such as providing community events, curating titles and offering personal recommendations to customers. These are things that are hard to do well at a big box retailer and virtually impossible to do online.

4. Value Is Shifting from Cognitive Skills to Social Skills

Twenty or thirty years ago, the world was very different. High value work generally involved retaining information and manipulating numbers. Perhaps not surprisingly, education and corporate training programs were focused on teaching those skills, and people would build their careers on performing well on knowledge and quantitative tasks.

Today, however, an average teenager has more access to information and computing power than a typical large enterprise had a generation ago, so knowledge retention and quantitative ability have largely been automated and devalued. High value work has shifted from cognitive skills to social skills.

Consider that the journal Nature has found that the average scientific paper today has four times as many authors as one did in 1950, and the work they are doing is far more interdisciplinary and done at greater distances than in the past. So even in highly technical areas, the ability to communicate and collaborate effectively is becoming an important skill.

There are some things that a machine will never do. Machines will never strike out at a Little League game, have their hearts broken or see their children born. That makes it difficult, if not impossible, for machines to relate to humans as well as a human can. The future of work is humans collaborating with other humans to design work for machines.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


3 Steps to Find the Horse’s A** In Your Company (and Create Space for Innovation)


GUEST POST from Robyn Bolton

Innovation thrives within constraints.

Constraints create the need for questions, creative thinking, and experiments.

But as real as constraints are and as helpful as they can be, don’t simply accept them. Instead, question them, push on them, and explore around them.

But first, find the horse’s a**

How Ancient Rome influenced the design of the Space Shuttle

In 1974, Thiokol, an aerospace and chemical manufacturing company, won the contract to build the solid rocket boosters (SRBs) for the Space Shuttle. The SRBs were to be built in a factory in Utah and transported to the launch site via train.

The train route ran through a mountain tunnel that was just barely wider than the tracks.

The standard width of railroad tracks (the distance between the rails, or the railroad gauge) in the US is 4 feet 8.5 inches, which means that Thiokol’s engineers needed to design SRBs that could fit through a tunnel that was slightly wider than 4 feet 8.5 inches.

4 feet 8.5 inches wide is a constraint. But where did such an oddly specific constraint come from?

The designers and builders of America’s first railroads were the same people and companies that built England’s tramways. Using the existing tramway tools and equipment to build railroads was more efficient and cost-effective, so railroads ended up with the same gauge as tramways – 4 feet 8.5 inches.

The designers and builders of England’s tramways were the same businesses that, for centuries, built wagons. Wanting to use their existing tools and equipment (it was more efficient and cost-effective, after all), the wagon builders built tramways with the same distance between the rails as wagons had between their wheels – 4 feet 8.5 inches.

Wagon wheels were 4 feet 8.5 inches apart to fit into the well-worn grooves in most old European roads. The Romans built those roads, Roman chariots made those grooves, horses pulled those chariots, and the width of those horses was, you guessed it, 4 feet 8.5 inches.

To recap – the width of a horse’s a** (approximately 4 feet 8.5 inches) determined the distance between the wheels on the Roman chariots that wore grooves into ancient roads. Those grooves ultimately dictated the width of wagon wheels, tramways, railroad tracks, a mountain tunnel, and the Space Shuttle’s SRBs.

How to find the horse’s a**

When you understand the origin of a constraint, aka find the horse’s a**, it’s easier to find ways around it or to accept and work with it. You can also suddenly understand and even anticipate people’s reactions when you challenge the constraints.

Here’s how you do it – when someone offers a constraint:

  1. Thank them for being honest with you and for helping you work more efficiently.
  2. Find the horse’s a** by asking questions to understand the constraint – why it exists, what it protects, the risk of ignoring it, who enforces it, and what happened to the last person who challenged it.
  3. Find your degrees of freedom by paying attention to their answers and how they give them. Do they roll their eyes in knowing exasperation? Shrug their shoulders in resignation? Become animated and dogmatic, agitated that someone would question something so obvious?

How to use the horse’s a** to innovate

You must do all three steps because stopping short of step 3 stops creativity in its tracks.

If you stop after Step 1 (which most people do), you only know the constraint, and you’ll probably be tempted to take it as fixed. But maybe it’s not. Perhaps it’s just a habit or heuristic waiting to be challenged.

If you do all three steps, however, you learn tons of information about the constraint, how people feel about it, and the data and evidence that could nudge or even eliminate it.

At the very least, you’ll understand the horse’s a** driving your company’s decisions.

Image credit: Pixabay


  1. To be very clear, the origin of the constraint is the horse’s a**. The person telling you about the constraint is NOT the horse’s a**.
  2. The truth is never as simple as the story, and railroads used to come in different gauges. For a deeper dive into this “more true than not” story (and an alternative theory that it was the North’s triumph in the Civil War that influenced the design of the SRBs), click here.


Top 5 Tech Trends Artificial Intelligence is Monitoring


GUEST POST from Art Inteligencia

Artificial Intelligence is constantly scanning the Internet to identify the technology trends that are the most interesting and potentially the most impactful. At present, according to artificial intelligence, the Top Five Technology Trends being tracked for futurology are:

1. Artificial Intelligence (AI): Artificial Intelligence is the development of computer systems that can perform tasks typically requiring human intelligence, such as visual perception, speech recognition, decision-making, and translation between languages. AI research is highly technical and specialized, and is deeply divided into subfields that often fail to communicate with each other.

2. Autonomous Vehicles: Autonomous vehicles are vehicles that can navigate without human input, relying instead on sensors, GPS, and computer technology to determine their location and trajectory. Autonomous vehicles are used in a variety of applications, from consumer transportation to military drones.

3. Virtual Reality (VR): Virtual reality is a computer-generated simulation of a three-dimensional environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment. VR uses technologies such as gesture control and stereoscopic displays to create immersive experiences for the user.

4. Augmented Reality (AR): Augmented reality is a technology that superimposes computer-generated content onto the real world to enhance or supplement a user’s physical experience. AR is used in a variety of contexts, from gaming to industrial design.

5. Internet of Things (IoT): The Internet of Things is the network of physical devices, vehicles, home appliances, and other items embedded with electronics, software, sensors, and connectivity that enable these objects to connect and exchange data. The IoT has the potential to revolutionize many aspects of our lives, from manufacturing and transportation to healthcare and energy management.

It’s amusing, of course, that artificial intelligence considers artificial intelligence to be the number one technology trend in its futurology work. I would also rank it number one, but I would rank autonomous vehicles and virtual reality lower, putting augmented reality and IoT at number two and number three respectively. But what do I know …

Image credit: Pixabay


What Artificial Intelligence Predicts for 2023


GUEST POST from Art Inteligencia

As we move into 2023 and beyond, the technology industry is making predictions about what the future of innovation holds for us. With the global pandemic accelerating the rate of digital transformation, it’s safe to say that the next few years will bring some major changes to the way we work and live. Here are some of the top innovation predictions generated by artificial intelligence for 2023:

1. Autonomous Delivery: Autonomous delivery systems are becoming more commonplace, and by 2023, we expect to see them become even more advanced. Autonomous delivery systems use advanced robotics and artificial intelligence to deliver packages to customers without the need for human involvement. This could significantly reduce costs and create greater efficiency in delivery services.

2. Augmented Reality: Augmented reality (AR) is rapidly growing in popularity and it’s expected to become even more pervasive by 2023. AR will be used in many industries, including education, healthcare and retail, to create interactive experiences. For example, in healthcare, AR can be used to provide surgeons with enhanced visuals during operations. In retail, AR can be used to give customers a more immersive shopping experience.

3. Quantum Computing: Quantum computing is a form of computing that uses quantum-mechanical phenomena, such as superposition and entanglement, to perform calculations. This form of computing has the potential to revolutionize the way we process and store data, and it’s expected to become more mainstream by 2023.

4. 5G Networks: The fifth generation of cellular networks, also known as 5G, is expected to become even more widespread by 2023. 5G networks have faster connection speeds, lower latency and greater reliability than their predecessors, which makes them ideal for a variety of applications, including autonomous vehicles, virtual reality and the Internet of Things.

5. Artificial Intelligence: Artificial intelligence (AI) is becoming increasingly prevalent in our lives. By 2023, we expect to see AI being used in a variety of applications, including automated customer service, natural language processing and personal assistants. AI has the potential to revolutionize the way we interact with technology and the world around us.

These are just a few of the many predictions for 2023 and beyond. As digital transformation continues to accelerate, we can expect to see even more innovation over the next few years. It’s an exciting time to be in the technology industry and we can’t wait to see what the future holds.

Image credit: Pixabay


Shark Tanks are the Pumpkin Spice of Innovation


GUEST POST from Robyn Bolton

On August 27, Pumpkin Spice season began. It was the earliest ever launch of Starbucks’ Pumpkin Spice Latte and it kicked off a season in which everything from Cheerios to protein powder to dog shampoo promises the nostalgia of Grandma’s pumpkin pie.

Since its introduction in 2003, the Pumpkin Spice Latte has attracted its share of lovers and haters but, because it’s a seasonal offering, the hype fades almost as soon as it appears.

Sadly, the same cannot be said for its counterpart in corporate innovation — The Shark Tank/Hackathon/Lab Week.

It may seem unfair to declare Shark Tanks the Pumpkin Spice of corporate innovation, but consider the following:

  • They are events. There’s nothing wrong with seasonal flavors and events. After all, they create a sense of scarcity that spurs people to action and drives companies’ revenues. However, there IS a great deal wrong with believing that innovation is an event. Real innovation is not an event. It is a way of thinking and problem-solving, a habit of asking questions and seeking to do things better, and of doing the hard and unglamorous work of creating, learning, iterating, and testing required to bring innovation — something different that creates value — to life.
  • They appeal to our sense of nostalgia and connection. The smell and taste of Pumpkin Spice bring us back to simpler times, holidays with family, pie fresh and hot from the oven. Shark Tanks do the same. They remind us of the days when we believed that we could change the world (or at least fix our employers) and when we collaborated instead of competed. We feel warm fuzzies as we consume (or participate in) them, but the feelings are fleeting, and we return quickly to the real world.
  • They pretend to be something they’re not. Starbucks’ original Pumpkin Spice Latte was flavored by cinnamon, nutmeg, and clove. There was no pumpkin in the Pumpkin Spice. Similarly, Shark Tanks are innovation theater — events that give people an outlet for their ideas and an opportunity to feel innovation-y for a period of time before returning to their day-to-day work. The value that is created is a temporary blip, not lasting change that delivers real business value.

But it doesn’t have to be this way.

If you’re serious about walking the innovation talk, Shark Tanks can be a great way to initiate and accelerate building a culture and practice of innovation. But they must be developed and deployed in a thoughtful way that is consistent with your organization’s strategy and priorities.

  • Make Shark Tanks the START of an innovation effort, not a standalone event. Clearly establish the problems or organizational priorities you want participants to solve and the ongoing investment (including dedicated time) that the company will make in the winners. Assign an Executive Sponsor who meets with the team monthly, and distribute quarterly updates to the company to share the winners’ progress and learnings.
  • Act with courage and commitment. Go beyond the innovation warm fuzzies and encourage people to push the boundaries of “what we usually do.” Reward and highlight participants that make courageous (i.e. risky) recommendations. Pursue ideas that feel a little uncomfortable because the best way to do something new that creates value (i.e. innovate) is to actually DO something NEW.
  • Develop a portfolio of innovation structures: Just as most companies use a portfolio of tools to grow their core businesses, they need a portfolio of tools to create new businesses. Use Shark Tanks to surface and develop core or adjacent innovations AND establish incubators and accelerators to create and test radical innovations and business models AND fund a corporate VC to scout for new technologies and start-ups that can provide instant access to new markets.


Whether you love or hate Pumpkin Spice Lattes, you can’t deny their impact. They are, after all, Starbucks’ highest-selling seasonal offering. But it’s hard to deny that they are increasingly the subject of mocking memes and eye rolls, a sign that their days, and value, may be limited.

(Most) innovation events, like Pumpkin Spice, have a temporary effect, but not on the bottom line. During these events, morale and team energy spike. But as the excitement fades and people realize that nothing happened once the event was over, innovation becomes a meaningless buzzword, evoking eye rolls and Dilbert cartoons.

Avoid this fate by making Shark Tanks a lasting part of your innovation menu — a portfolio of tools and structures that build and sustain a culture and practice of innovation, one that creates real financial and organizational value.

Image credit: Unsplash


Crabby Innovation Opportunity


There are many foods that we no longer eat, but that’s because we choose not to, not because they have disappeared from nature. In fact, here is a list of 21 Once-Popular Foods That We All Stopped Eating, including:

  • Kool-Aid
  • Margarine
  • Pudding Pops
  • Candy Cigarettes
  • etc.

But today, we’re going to talk about a food that I personally love, but that I’ve always viewed as a bit of a luxury – crab legs – that is in danger of disappearing off the face of the planet due to climate change and human effects. And we’re not just talking about King Crab; we’re talking about Snow Crab and Dungeness Crab too. And this is a catastrophe not just for diners, but for an entire industry and the livelihood of too many families to count:

That’s more than a BILLION CRABS whose deliciousness none of us will have the pleasure of enjoying.

And given the magnitude of the die-off, it is possible they might disappear completely, meaning we will no longer be able to salivate at the thought of this popular commercial from the ’80s:

Climate change and global warming are real. If you don’t believe humans are the cause and think it’s naturally occurring, fine; it’s still happening.

The only debate worth having is about the actions we take from this point forward.

And while the magnitude of the devastation humans have inflicted on other animal species is debatable, we are failing in our duties as caretakers of the earth.

This brings me back to the title of the post and the mission of this blog – to promote human-centered change and innovation.

Because we have killed off one of our very tastiest treats (King, Snow and Dungeness Crabs), at least in the short-term (and possibly forever), there is a huge opportunity to do better than krab sticks or the Krabby Patties of SpongeBob SquarePants fame.

If crab legs are going to disappear from the menus of seafood restaurants across the United States, and possibly the world, can someone invent a tasty treat that equals or exceeds the satisfaction of wielding a crab cracker and a crab fork and extracting the white gold within to dip into some sweet and slippery lemon butter?

Who is going to be first to crack this problem?

Or who will be the first to find a way to bring the crabs back from extinction?

We’re not just talking about a food to fill our bellies with, we’re talking about a pleasurable dining experience that is going away – that I know someone can save!

And no Air Protein marketing gimmicks please!

Image credit:

Subscribe to Human-Centered Change & Innovation Weekly: Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Kicking the Copier Won’t Fix Your Problem


GUEST POST from John Bessant

Have you ever felt the urge to kick the photocopier? Or worse? That time when you desperately needed to make sixty copies of a workshop handout five minutes before your session began. Or when you needed a single copy of your passport or driving license, the only way to prove your identity to the man behind the desk who was about to turn down your visa application. Or the awful day when you were struggling to print the boarding passes for your long-overdue holiday, and ended up paying way over the odds at the airport?

The copiers may change, the locations and contexts may differ, but underneath is one clear unifying thread: the machines are out to get you. Perhaps it’s just a random failure and you are simply the unlucky one who keeps getting caught. Or maybe it’s more serious; perhaps they’ve started fitting them with an urgency sensor which detects how critical your making a copy is and then adjusts the machine’s behavior to match, by refusing to perform.

Whatever the trigger, you can be sure that it won’t be a simple, easy-to-fix error like ‘out of paper’ which you just might be able to do something about. No, the kind of roadblock these fiendish devices are likely to hurl onto your path will be couched in arcane language displayed on the interface as ‘Error code 3b76 — please consult technician’.

Given the number of photocopiers in the world and the fact that we are still far from being a paperless society in spite of our digital aspirations, it’s a little surprising that the law books don’t actually contain a section on xeroxicide — the attempt or execution of terminal damage to the lives of these machines.

Help is at hand. Because whilst we may still have the odd close and not very enjoyable encounter with these devices, the reality is that they are getting better all the time. Not only are they gaining a bewildering range of functionality, so that you can do almost anything with them apart from cook your breakfast, but they are also becoming more reliable. And that is, in large measure, down to something called a community of practice: one of the most valuable resources we have in the innovation management toolkit.

The term was originally coined by Etienne Wenger and colleagues, who used it to describe “groups of people who share a concern or a passion for something they do and learn how to do it better as they interact regularly.” It’s a simple enough idea, based on the principle that we learn some things better when we act together.

Shared learning helps, not least in those situations where knowledge is not necessarily explicit and easily available for the finding. It’s a little like mining for precious metals; the really valuable stuff is often invisible inside clumps of otherwise useless rock. Tiny flecks on the surface might give us the clue to something valuable being contained therein but it’s going to take quite a lot of processing to extract it in shiny pure form.

Knowledge is the same; it’s often not available in easy reach or plain sight. Instead it’s what Michael Polanyi called tacit as opposed to explicit. We sometimes can’t even speak about it, we just know it because we do it.

Which brings us back to our photocopiers. And to the work of Julian Orr who worked in the 1990s as a field service engineer in a large corporation specializing in office equipment. He was an ethnographer, interested in understanding how communities of people interact, rather as an anthropologist might study lost tribes in the Amazon. Only his research was in California, down the road from Silicon Valley and he was carrying out research on how work was organized.

He worked with the customer service teams, the roving field service engineers who criss-cross the country trying to fix the broken machine which you’ve just encountered with its ‘Error code 3b76 — please consult technician’ message. Assuming you haven’t already disassembled the machine forcibly they are the ones who patiently diagnose and repair it so that it once again behaves in sweetly obedient and obliging fashion.

They do this by deploying their knowledge, some of which is contained in their manuals (or these days on the tablets they carry around). But that’s only the explicit knowledge: the accumulation of what’s known, the FAQs which represent the troubleshooting solutions the designers developed when creating the machines. Behind this is a much less well-defined set of knowledge which comes from encountering new problems in the field and working out solutions to them — innovating. Over time this tacit knowledge becomes explicit and shared, and eventually finds its way into an updated service manual or onto the new version of the training course.

Orr noticed that in the informal interactions of the team, the coming together and sharing of their experiences, a great deal of knowledge was being exchanged. And importantly, these conversations often led to new problems being surfaced and new solutions being shared. These were not formal meetings; they would often happen in informal settings, like a Monday morning meet-up for breakfast before the teams went their separate ways on their service calls.

You can imagine the conversations taking place across the coffee and doughnuts, ranging from catching up on the weekend experience, discussing the sports results, recounting stories about recalcitrant offspring and so on. But woven through would also be a series of exchanges about their work — complaining about a particular problem that had led to one of them getting toner splashed all over their overalls, describing proudly a work-around they had come up with, sharing hacks and improvised solutions.

There’d be a healthy skepticism about the company’s official repair manual and a pride in keeping the machines working in spite of their design. More important the knowledge each of them encountered through these interactions would be elaborated and amplified, shared across the community. And much of it would eventually find its way back to the designers and the engineers responsible for the official manual.

Orr’s work influenced many people including John Seely Brown (who went on to be Chief Scientist at Xerox) and Paul Duguid who explored further this social dimension to knowledge creation and capture. Alongside formal research and development tools the storytelling across communities of practice like these becomes a key input to innovation, particularly the long-haul incremental improvements which lie at the heart of effective performance.

This was an important theme, one which Japanese researchers Ikujiro Nonaka and Hirotaka Takeuchi recognized and formalized in their seminal book about ‘the knowledge-creating company’. They offered a simple model through which tacit knowledge is made explicit, shared and eventually embedded into practice, a process which helped explain the major advantages offered by engaging a workforce in high-involvement innovation. Systems which became the ‘lean thinking’ model in widespread use today have their roots in this process, with teams of workers acting as communities of practice.

Their model has four key stages in a recurring cycle:

  • Socialization — in which empathy and shared experiences create tacit knowledge (for example, the storytelling in our field service engineer teams)
  • Externalization — in which the tacit knowledge becomes explicit, converted into ideas and insights which others can work with
  • Combination — in which the externalized knowledge is organized and added to the stock of existing explicit knowledge — for example embedding it in a revised version of the manual
  • Internalization — in which the new knowledge becomes part of ‘the way we do things around here’ and the platform for further journeys around the cycle

CoPs are of enormous value in innovation, something which has been recognized for a long time. Think back to the medieval Guilds; their system was based on sharing practice and building a community around that knowledge exchange process. CoPs are essentially ‘learning networks’. They may take the form of an informal social group meeting up, where learning is a by-product of being together; that’s the model which best describes our photocopier engineers and many other social groups at work. Members of such groups don’t all have to be from the same company; much of the power of industrial clusters lies not only in the collective efficiency they achieve but also in the way they share and accumulate knowledge.

Small firms co-operate to create capabilities far beyond the sum of their parts — and communities of practice form an excellent alternative to having formal R&D labs. John Seely Brown’s later research looked at, for example, the motorcycle cluster around the city of Chongqing in China, whose products now dominate the world market. Success here is in no small measure due to the knowledge sharing which takes place within a geographically close community of practice.

CoPs can also be formally ‘engineered’, created for the primary purpose of sharing knowledge and improving practice. This can be done in a variety of ways, for example by organizing sector-level opportunities and programs to share experience and move up an innovation trajectory. This model was used very successfully in the North Sea oil industry, first to enable cost-reduction and efficiency improvements over a ten-year period in the CRINE (Cost Reduction Initiative for the New Era) program. It resulted in cumulative savings of over 30% on new project costs, and as a result a similar model was deployed to explore new opportunities for the sector’s services elsewhere in the world as the original North Sea work ran down.

It can work inside a supply network where the overall performance on key criteria like cost, quality and delivery time depends on fast diffusion of innovation amongst all its members. One of Toyota’s key success factors has been in the way in which it mobilizes learning networks across its supplier base and the model has been widely applied in other sectors, using communities of practice as a core tool.

CoPs have been used to help small firms share and learn around some of the challenges in growth through innovation — for example in the highly successful Profitnet program in the UK. It’s a model which underpins the start-up support culture where expert mentoring can be complemented by teams sharing experiences and trying to help each other in their learning journeys towards successful launch. And it’s being used extensively in the not-for-profit sector where working at the frontier of innovation to deal with some of the world’s biggest humanitarian and development challenges can be strengthened by sharing insights and experiences through formal communities of practice.

At heart the idea of a community of practice is simple though it deals with a complex problem. Innovation is all about knowledge creation and deployment and we’ve learned that this is primarily a social process. So, working with the grain of human interaction, bringing people together to share experiences and build up knowledge collectively, seems an eminently helpful approach.

Which suggests that next time you are thinking of taking a chainsaw to the photocopier you might like to pause — and maybe channel your energies into thinking of ways to innovate out of the situation. A useful first step might be to find others with similar frustrations and mobilize your own community of practice.

You can find a podcast version of this here

If you’d like more songs, stories and other resources on the innovation theme, check out my website here

And if you’d like to learn with me take a look at my online course here

Image credit: FreePik


Unlocking the Power of Cause and Effect


GUEST POST from Greg Satell

In 2011, IBM’s Watson system beat the best human players on the game show Jeopardy! Since then, machines have shown that they can outperform skilled professionals in everything from basic legal work to diagnosing breast cancer. It seems that machines just get smarter and smarter all the time.

Yet that is largely an illusion. While even a very young human child understands the basic concept of cause and effect, computers rely on correlations. In effect, while a computer can associate the sun rising with the day breaking, it doesn’t understand that one causes the other, which limits how helpful computers can be.

That’s beginning to change. A group of researchers, led by artificial intelligence pioneer Judea Pearl, are working to help computers understand cause and effect based on a new causal calculus. The effort is still in its nascent stages, but if they’re successful we could be entering a new era in which machines not only answer questions, but help us pose new ones.

Observation and Association

Most of what we know comes from inductive reasoning. We make some observations and associate those observations with specific outcomes. For example, if we see animals going to drink at a watering hole every morning, we would expect to see them at the same watering hole in the future. Many animals share this type of low-level reasoning and use it for hunting.

Over time, humans learned how to store these observations as data and that’s helped us make associations on a much larger scale. In the early years of data mining, data was used to make very basic types of predictions, such as the likelihood that somebody buying beer at a grocery store will also want to buy something else, like potato chips or diapers.

The achievement of the last decade or so is that advances in algorithms, such as neural networks, have allowed us to make much more complex associations. To take one example, systems that have observed thousands of mammograms have learned to identify the ones that show a tumor with a very high degree of accuracy.

However, and this is a crucial point, the system that detects cancer doesn’t “know” it’s cancer. It doesn’t associate the mammogram with an underlying cause, such as a gene mutation or lifestyle choice, nor can it suggest a specific intervention, such as chemotherapy. Perhaps most importantly, it can’t imagine other possibilities and suggest alternative tests.

Confounding Intervention

The reason that correlation is often very different from causality is the presence of something called a confounding factor. For example, we might find a correlation between high readings on a thermometer and ice cream sales and conclude that if we put the thermometer next to a heater, we can raise sales of ice cream.

I know that seems silly, but problems with confounding factors arise in the real world all the time. Data bias is especially problematic. If we find a correlation between certain teachers and low test scores, we might assume that those teachers are causing the low test scores when, in actuality, they may be great teachers who work with problematic students.

Another example is the high degree of correlation between criminal activity and certain geographical areas, where poverty is a confounding factor. If we use zip codes to predict recidivism rates, we are likely to give longer sentences and deny parole to people because they are poor, while those with more privileged backgrounds get off easy.

These are not at all theoretical examples. In fact, they happen all the time, which is why caring, competent teachers can, and do, get fired in spite of those qualities, and people from disadvantaged backgrounds get mistreated by the justice system. Even worse, as we automate our systems, these mistaken interventions become embedded in our algorithms, which is why it’s so important that we design our systems to be auditable, explainable and transparent.
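The thermometer example can be made concrete with a short simulation. This is a toy model and every variable and number in it is invented for illustration: temperature drives both the thermometer reading and ice cream sales, so the two correlate strongly, yet intervening on the reading does nothing to sales.

```python
import random

random.seed(0)

# Hidden common cause: daily temperature drives both the thermometer
# reading and ice cream sales (toy model, all numbers invented).
temps = [random.uniform(10, 35) for _ in range(1000)]
reading = [t + random.gauss(0, 1) for t in temps]      # reading tracks temperature
sales = [20 * t + random.gauss(0, 30) for t in temps]  # sales track temperature

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Observation: reading and sales are strongly correlated.
print(corr(reading, sales))

# Intervention do(reading = 50): put the thermometer next to a heater.
# Sales still depend only on temperature, so they do not move at all.
reading_forced = [50.0 for _ in temps]
print(sum(sales) / len(sales))  # average sales, unchanged by the intervention
```

The observed correlation comes entirely from the hidden common cause; forcing the reading to a new value breaks the association instead of raising sales.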

Imagining A Counterfactual

Another confusing thing about causation is that not all causes are the same. Some causes are sufficient in themselves to produce an effect, while others are necessary, but not sufficient. Obviously, if we intend to make some progress we need to figure out what type of cause we’re dealing with. The way to do that is by imagining a different set of facts.

Let’s return to the example of teachers and test scores. Once we have controlled for problematic students, we can begin to ask whether lousy teachers are enough to produce poor test scores or whether there are other necessary causes, such as poor materials, decrepit facilities, incompetent administrators and so on. We do this by imagining counterfactuals, such as “What if there were better materials, facilities and administrators?”
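The “control for problematic students” step can be sketched as a simple stratified (backdoor) adjustment over simulated data. Again, this is a toy model and every variable and number is invented: by construction the two teachers are equally good, but struggling students are assigned to teacher B more often, so a naive comparison makes B look worse.

```python
import random

random.seed(1)

# Toy model (all numbers invented): student background confounds the
# comparison, because it drives both teacher assignment and test scores.
rows = []
for _ in range(10000):
    struggling = random.random() < 0.5
    # Struggling students are far more likely to be assigned to teacher B.
    teacher = "B" if random.random() < (0.8 if struggling else 0.2) else "A"
    base = 50 if struggling else 80
    # In truth the teachers are equally good: zero teacher effect.
    score = base + random.gauss(0, 5)
    rows.append((struggling, teacher, score))

def mean(xs):
    return sum(xs) / len(xs)

# Naive comparison: teacher B appears much worse.
naive = (mean([s for _, t, s in rows if t == "A"])
         - mean([s for _, t, s in rows if t == "B"]))

# Backdoor adjustment: compare within each stratum of the confounder,
# then average the differences, weighted by stratum size.
adjusted = 0.0
for g in (True, False):
    stratum = [(t, s) for gg, t, s in rows if gg == g]
    a = mean([s for t, s in stratum if t == "A"])
    b = mean([s for t, s in stratum if t == "B"])
    adjusted += (len(stratum) / len(rows)) * (a - b)

print(round(naive, 1))     # large spurious gap
print(round(adjusted, 1))  # near zero
```

Because background drives both assignment and scores in this sketch, the naive gap is large even though the teachers are identical; averaging the within-stratum differences recovers the true (zero) effect.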

Humans naturally imagine counterfactuals all the time. We wonder what would be different if we took another job, moved to a better neighborhood or ordered something else for lunch. Machines, however, have great difficulty with things like counterfactuals, confounders and other elements of causality because there’s been no standard way to express them mathematically.

That, in a nutshell, is what Judea Pearl and his colleagues have been working on over the past 25 years, and many believe that the project is finally ready to bear fruit. Combining humans’ innate ability to imagine counterfactuals with machines’ ability to crunch almost limitless amounts of data could really be a game changer.

Moving Towards Smarter Machines

Make no mistake, AI systems’ ability to detect patterns has proven to be amazingly useful. In fields ranging from genomics to materials science, researchers can scour massive databases and identify associations that a human would be unlikely to detect manually. Those associations can then be studied further to validate whether they are useful or not.

Still, the fact that our machines don’t understand basic truths, such as that thermometers don’t increase ice cream sales, limits their effectiveness. As we learn how to design our systems to detect confounders and imagine counterfactuals, we’ll be able to evaluate not only the effectiveness of interventions that have been tried, but also those that haven’t, which will help us come up with better solutions to important problems.

For example, in a 2019 study the Congressional Budget Office estimated that raising the national minimum wage to $15 per hour would result in a decrease in employment from zero to four million workers, based on a number of observational studies. That’s an enormous range. However, if we were able to identify and mitigate confounders, we could narrow down the possibilities and make better decisions.

While still nascent, the causal revolution in AI is already underway. McKinsey recently announced the launch of CausalNex, an open source library designed to identify cause and effect relationships in organizations, such as what makes salespeople more productive. Causal approaches to AI are also being deployed in healthcare to understand the causes of complex diseases such as cancer and evaluate which interventions may be the most effective.

Some look at the growing excitement around causal AI and scoff that it is just common sense. But that is exactly the point. Our historic inability to encode a basic understanding of cause and effect relationships into our algorithms has been a serious impediment to making machines truly smart. Clearly, we need to do better than merely fitting curves to data.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay


Why Amazon Wants to Sell You Robots


GUEST POST from Shep Hyken

It was recently announced that Amazon would be acquiring iRobot, the maker of the Roomba vacuum cleaner. There are still some “hoops” to jump through, such as shareholder and regulatory approval, but the deal looks promising. So, why does Amazon want to get into the vacuum cleaner business?

It doesn’t!

At least not for the purpose of simply selling vacuum cleaners. What it wants to do is to get further entrenched into the daily lives of its customers, and Amazon has done an excellent job of just that. There are more than 200 million Amazon Prime members, and 157.4 million of them are in the United States. According to an article in USA Today, written by David Chang of the Motley Fool, Amazon Prime members spend an average of $1,400 per year. Non-Amazon Prime members spend about $600 per year.

Want more numbers? According to a 2022 Feedvisor survey of 2,000-plus U.S. consumers, 56% visit Amazon daily or at least a few times a week, which is up from 47% in 2019. But visiting isn’t enough. Forty-seven percent of consumers make a purchase on Amazon at least once a week. Eight percent make purchases almost every day.

Amazon has become a major part of our lives. So how does a vacuum cleaner company fit into that? It doesn’t really, unless it’s iRobot’s vacuum cleaner. A little history about iRobot might shed light on why Amazon is interested in this acquisition.

iRobot was founded in 1990 by three members of MIT’s Artificial Intelligence Lab. Originally their robots were used for space exploration and military defense. About ten years later, they moved into the consumer world with the Roomba vacuum cleaners. In 2016 they spun off the defense business and turned their focus to consumer products.

The iRobot Roomba is a smart vacuum cleaner that does the cleaning while the customer is away. The robotic vacuum cleaner moves around the home, working around obstacles such as couches, chairs, tables, etc. Over time the Roomba, which has a computer with memory fueled by AI (artificial intelligence), learns about your home. And that means Amazon has the capability of learning about your home.

This is not all that different from how Alexa, Amazon’s smart device, learns about customers’ wants and needs. Just as Alexa remembers birthdays, shopping habits, favorite toppings on pizza, when to take medicine, what time to wake up and much more, the “smart vacuum cleaner” learns about a customer’s home. This is a natural extension of the capabilities found in Alexa, thereby giving Amazon the ability to offer better and more relevant services to its customers.

To make this work, Amazon will gain access to customers’ homes. No doubt, some customers may be uncomfortable with Amazon having that type of information, but let’s look at this realistically. If you are (or have been) one of the hundreds of millions of Amazon customers, it already has plenty of information about you. And if privacy is an issue, there will assuredly be regulations for Amazon to comply with. They already understand their customers almost better than anyone. This is just a small addition to what they already know and provides greater capability to deliver a very personalized experience.

And that is exactly what Amazon plans to do. Just as it has incorporated Alexa, Ring and eero Wi-Fi routers, the Roomba will add to the suite of connected capabilities from Amazon that makes life easier and more convenient for its customers.

If you take a look at the way Amazon has moved from selling books to practically everything else in the retail world, and you recognize its strategy to become part of the fabric of its customers’ lives, you’ll understand why vacuum cleaners, specifically iRobot’s machines, make sense.

This article originally appeared on Forbes

Image Credit: Shep Hyken
