
How Engineered Living Therapeutics Are Redefining Healthcare

The Living Cure

LAST UPDATED: January 29, 2026 at 5:38 PM


GUEST POST from Art Inteligencia

For centuries, medicine has been about chemistry — pills and potions designed to intervene in biological processes. But what if the medicine itself could think? What if it could adapt? What if it were alive? This isn’t science fiction; it’s the audacious promise of Engineered Living Therapeutics (ELTs), and it represents a paradigm shift in human-centered healthcare that will redefine our relationship with illness.

As a thought leader in human-centered change and innovation, I’ve seen countless industries disrupted by radical new approaches. Biotechnology is no exception. ELTs are not merely advanced drugs; they are biological systems, often engineered microbes or cells, programmed to perform specific therapeutic functions within the body. This is innovation at its most profound: leveraging the inherent intelligence and adaptability of life itself to heal.

Beyond the Pill: The Intelligence of Living Medicine

Traditional pharmaceuticals often act as blunt instruments, targeting specific pathways with limited specificity and potential side effects. ELTs, by contrast, offer a level of precision and dynamic response previously unimaginable. Imagine a therapy that can detect disease markers, produce therapeutic compounds only when needed, or even self-regulate its activity based on the body’s changing state. This intelligent adaptability is what makes ELTs a truly human-centered approach to healing, tailoring treatment to the unique, fluctuating biology of each individual.

“The future of medicine isn’t just about what we put into the body; it’s about what we awaken within it. Engineered Living Therapeutics aren’t just treatments; they’re collaborations with our own biology.”

— Braden Kelley

Case Study I: Reprogramming the Gut for Metabolic Health

A burgeoning area for ELTs lies within the human microbiome. Consider the challenge of chronic metabolic diseases like Type 2 Diabetes. Current treatments often manage symptoms without addressing underlying dysregulation. One biotech startup engineered a strain of probiotic bacteria to reside in the gut. This engineered bacterium was programmed to sense elevated glucose levels and, in response, produce and deliver an insulin-sensitizing peptide directly within the intestinal lumen.

This targeted, localized intervention offered a novel way to manage blood sugar, reducing the systemic side effects associated with orally administered drugs. The innovation here wasn’t just a new molecule, but a living delivery system that dynamically responded to the body’s needs, representing a truly personalized and responsive therapy.
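To make this sense-and-respond pattern concrete, here is a minimal Python sketch of the control logic such a circuit implements: no payload below a glucose threshold, graded production above it. The threshold, units, and rates are illustrative assumptions, not published circuit parameters.

    # Conceptual model of a glucose-triggered therapeutic switch.
    GLUCOSE_THRESHOLD_MM = 10.0   # assumed trigger level, in millimolar

    def peptide_output(glucose_mm: float, max_rate: float = 1.0) -> float:
        """Idealized genetic switch: no expression at or below the threshold,
        expression rising with signal strength above it, capped at max_rate."""
        if glucose_mm <= GLUCOSE_THRESHOLD_MM:
            return 0.0
        return min(max_rate, (glucose_mm - GLUCOSE_THRESHOLD_MM) / GLUCOSE_THRESHOLD_MM)

    for reading in [5.0, 9.9, 12.0, 25.0]:
        print(f"glucose {reading:4.1f} mM -> relative peptide output {peptide_output(reading):.2f}")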

Case Study II: Targeted Oncology with “Smart” Cells

Cancer treatment remains one of medicine’s most formidable challenges. While CAR T-cell therapy has revolutionized certain hematological cancers, ELTs are pushing the boundaries further. Imagine immune cells engineered not only to identify cancer cells but also to produce potent anti-cancer molecules directly at the tumor site, or even to activate other immune cells to join the fight.

One research initiative is exploring tumor-infiltrating lymphocytes (TILs) engineered to express specific receptors that bind to unique tumor antigens and simultaneously secrete localized immunomodulators. This approach aims to overcome the immunosuppressive microenvironment of solid tumors, a significant hurdle for many current immunotherapies. This represents a leap towards truly precision oncology, where the body’s own defenders are given a sophisticated, living upgrade.

Leading the Charge: Companies and Startups in the ELT Space

The ELT landscape is rapidly evolving, attracting significant investment and groundbreaking research. Established pharmaceutical giants like Novartis and Gilead Sciences (through Kite Pharma) are already active in the approved CAR T-cell therapy space, which serves as a foundational ELT. However, a vibrant ecosystem of innovative startups is pushing the frontier. Companies like Seres Therapeutics are leading with microbiome-based ELTs for infectious diseases. Synlogic is developing engineered bacteria for metabolic disorders and cancer. Ginkgo Bioworks, while not a therapeutic company itself, is a critical enabler, providing the foundational synthetic biology platform for engineering organisms. Additionally, numerous academic spin-offs and smaller biotechs are emerging, focusing on niche applications, advanced gene editing techniques within living cells, and novel delivery mechanisms, signaling a diverse and competitive future for ELTs.

Designing Trust in Living Systems

ELTs raise questions about control, persistence, and governance. Human-centered change demands proactive transparency, ethical foresight, and adaptive regulation.

The future of ELTs will be shaped as much by trust as by technology.

The Human-Centered Future of Living Therapies

Healthcare innovation has long been constrained by an assumption that treatment must be static to be safe. Engineered Living Therapeutics (ELTs) challenge that assumption by embracing biology’s native strength: adaptability.

ELTs are living systems intentionally designed to operate inside the human body. They sense, decide, and respond. In doing so, they force leaders, regulators, and innovators to rethink what medicine is and how it should behave.

“True healthcare innovation begins when we stop trying to control biology and start designing with it.”

— Braden Kelley

The journey with ELTs is just beginning. As with any transformative technology, there are ethical considerations, regulatory hurdles, and manufacturing complexities to navigate. However, the potential for these living medicines to offer durable, highly targeted, and adaptive treatments for a vast array of diseases — from cancer and autoimmune disorders to infectious diseases and chronic conditions — is immense. By placing the human at the center of this innovation, ensuring patient safety, accessibility, and shared understanding, we can unlock a future where our biology becomes an ally in healing, not just a battlefield.


Frequently Asked Questions

What are Engineered Living Therapeutics (ELTs)?

ELTs are biological systems, typically engineered microbes (like bacteria) or human cells, programmed to perform specific therapeutic functions within the body to treat diseases.

How do ELTs differ from traditional drugs?

Unlike static chemical drugs, ELTs are dynamic and can sense the body’s environment, adapt their function, and produce therapeutic effects precisely where and when needed, offering a more intelligent and targeted approach.

What types of diseases can ELTs potentially treat?

ELTs show promise across a wide range of conditions, including cancer, autoimmune disorders, metabolic diseases (like diabetes), infectious diseases, and gastrointestinal disorders.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Win Your Way to an AI Job

Anduril’s AI Grand Prix: Racing for the Future of Work

LAST UPDATED: January 28, 2026 at 2:27 PM


GUEST POST from Art Inteligencia

The traditional job interview is an antiquated artifact, a relic of a bygone industrial era. It often measures conformity, articulateness, and cultural fit more than actual capability or innovative potential. As we navigate the complexities of AI, automation, and rapid technological shifts, organizations are beginning to realize that to find truly exceptional talent, they need to look beyond resumes and carefully crafted answers. This is where companies like Anduril are not just iterating but innovating the very hiring process itself.

Anduril, a defense technology company known for its focus on AI-driven systems, recently announced its AI Grand Prix — a drone racing contest where the ultimate prize isn’t just glory, but a job offer. This isn’t merely a marketing gimmick; it’s a profound statement about their belief in demonstrated skill over credentialism, and a powerful strategy for identifying talent that can truly push the boundaries of autonomous systems. It epitomizes the shift from abstract evaluation to purposeful, real-world application, emphasizing hands-on capability over theoretical knowledge.

“The future of hiring isn’t about asking people what they can do; it’s about giving them a challenge and watching them show you.”

— Braden Kelley

Why Challenge-Based Hiring is the New Frontier

This approach addresses several critical pain points in traditional hiring:

  • Uncovering Latent Talent: Many brilliant minds don’t fit the mold of elite university degrees or polished corporate careers. Challenge-based hiring can surface individuals with raw, untapped potential who might otherwise be overlooked.
  • Assessing Practical Skills: In fields like AI, robotics, and advanced engineering, theoretical knowledge is insufficient. The ability to problem-solve under pressure, adapt to dynamic environments, and debug complex systems is paramount.
  • Cultural Alignment Through Action: Observing how candidates collaborate, manage stress, and iterate on solutions in a competitive yet supportive environment reveals more about their true cultural fit than any behavioral interview.
  • Building a Diverse Pipeline: By opening up contests to a wider audience, companies can bypass traditional biases inherent in resume screening, leading to a more diverse and innovative workforce.

Beyond Anduril: Other Pioneers of Performance-Based Hiring

Anduril isn’t alone in recognizing the power of real-world challenges to identify top talent. Several other forward-thinking organizations have adopted similar, albeit varied, approaches:

Google’s Code Jam and Hash Code

For years, Google has leveraged competitive programming contests like Code Jam and Hash Code to scout for software engineering talent globally. These contests present participants with complex algorithmic problems that test their coding speed, efficiency, and problem-solving abilities. While not always directly leading to a job offer for every participant, top performers are often fast-tracked through the interview process. This allows Google to identify engineers who can perform under pressure and think creatively, rather than just those who can ace a whiteboard interview. It’s a prime example of turning abstract coding prowess into a tangible demonstration of value.

Kaggle Competitions for Data Scientists

Kaggle, now a Google subsidiary, revolutionized how data scientists prove their worth. Through its platform, companies post real-world data science problems—from predicting housing prices to identifying medical conditions from images—and offer prize money, and often, connections to jobs, to the teams that develop the best models. This creates a meritocracy where the quality of one’s predictive model speaks louder than any resume. Many leading data scientists have launched their careers or been recruited directly from their performance in Kaggle competitions. It transforms theoretical data knowledge into demonstrable insights that directly impact business outcomes.

The Human Element in the Machine Age

What makes these initiatives truly human-centered? It’s the recognition that while AI and automation are transforming tasks, the human capacity for ingenuity, adaptation, and critical thinking remains irreplaceable. These contests aren’t about finding people who can simply operate machines; they’re about finding individuals who can teach the machines, design the next generation of algorithms, and solve problems that don’t yet exist. They foster an environment of continuous learning and application, perfectly aligning with the “purposeful learning” philosophy.

The Anduril AI Grand Prix, much like Google’s and Kaggle’s initiatives, de-risks the hiring process by creating a performance crucible. It’s a pragmatic, meritocratic, and ultimately more effective way to build the teams that will define the next era of technological advancement. As leaders, our challenge is to move beyond conventional wisdom and embrace these innovative models, ensuring we’re not just ready for the future of work, but actively shaping it.

Anduril Fury


Frequently Asked Questions

What is challenge-based hiring?

Challenge-based hiring is a recruitment strategy where candidates demonstrate their skills and problem-solving abilities by completing a real-world task, project, or competition, rather than relying solely on resumes and interviews.

What are the benefits of this approach for companies?

Companies can uncover hidden talent, assess practical skills, observe cultural fit in action, and build a more diverse talent pipeline by focusing on demonstrable performance.

How does this approach benefit candidates?

Candidates get a fair chance to showcase their true abilities regardless of traditional credentials, gain valuable experience, and often get direct access to influential companies and potential job offers based purely on merit.

To learn more about transforming your organization’s talent acquisition strategy, reach out to explore how human-centered innovation can reshape your hiring practices.

Image credits: Wikimedia Commons, Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

We Must Stop Fooling Ourselves and Get Our Facts Straight


GUEST POST from Greg Satell

Mehdi Hasan’s brutal takedown of Matt Taibbi was almost painful to watch. Taibbi, a longtime muckraking journalist of some renown, was invited by Elon Musk to review internal communications that came to be known as the Twitter Files and made big headlines with accusations regarding government censorship of social media.

Yet as Hasan quickly revealed, Taibbi got basic facts wrong, either not understanding what he was looking at, doing sloppy work or just plainly being disingenuous. What Taibbi was reporting as censorship was, in fact, a normal, deliberative process for flagging problematic content, most of which was not taken down.

He looked foolish, but I could feel his pain. In both of my books, I had similarly foolish errors. The difference was that I sent out sections to be fact-checked by experts and people with first-hand knowledge of events before I published. The truth is that it’s not easy to get facts straight. It takes hard work and humility to get things right. We need to be careful.

A Stupid Mistake

Some of the most famous business stories we hear are simply not accurate. Gurus and pundits love to tell you that after inventing digital photography, Kodak ignored the market. Nothing could be further from the truth. In fact, its EasyShare line of cameras was a top seller. It also made big investments in quality printing for digital photos. The problem was that it made most of its money on developing film, a business that completely disappeared.

Another popular fable is that Xerox failed to commercialize the technology developed at its Palo Alto Research Center (PARC), when in fact the laser printer developed there saved the company. What also conveniently gets left out is that Steve Jobs was able to get access to the company’s technology to build the Macintosh because Xerox had invested in Apple and then profited handsomely from that investment.

But my favorite mistold myth is that of Blockbuster, which supposedly ignored Netflix until it was too late. As Gina Keating, who covered the story for years at Reuters, explains in her book Netflixed, the video giant moved relatively quickly and came up with a successful strategy, but the CEO, John Antioco, left after a fight with investor Carl Icahn and the strategy was reversed.

Yet that’s not exactly how I told the story. For years I reported that Antioco was fired. I even wrote it up that way in my book Cascades until I contacted the former CEO to fact-check it. He was incredibly generous with his time, corrected me and then gave me additional insights that improved the book.

To this day, I don’t know exactly why I made the mistake. In fact, as soon as he pointed it out I knew I was wrong. Somehow the notion that he was fired got stuck in my head and, with no one to correct me, it just stayed there. We like to think that we remember things as they happened, but unfortunately our brains don’t work that way.

Why We Get Fooled

We tend to imagine that our minds are some sort of machines, recording what we see and hear, then storing those experiences away to be retrieved at a later time, but that’s not how our brains work at all. Humans have a need to build narratives. We like things to fit into neat patterns and fill in the gaps in our knowledge so that everything makes sense.

Psychologists often point to a halo effect, the tendency for an impression created in one area to influence opinion in another. For example, when someone is physically attractive, we tend to infer other good qualities and when a company is successful, we tend to think other good things about it.

The truth is that our thinking is riddled with subtle yet predictable biases. We are apt to be influenced not by the most rigorous information, but by what we can most readily access. We make confounding errors that confuse correlation with causality and then look for information that confirms our judgments while discounting evidence to the contrary.

I’m sure that both Matt Taibbi and I fell into a number of these pitfalls. We observed a set of facts, perceived a pattern, built a narrative and then began filling in gaps with things that we thought we knew. As we looked for more evidence, we seized on what bolstered the stories we were telling ourselves, while ignoring contrary facts.

The difference, of course, is that I went and checked with a primary source, who immediately pointed out my error and, as soon as he did, it broke the spell. I immediately remembered reading in Keating’s book that he resigned and agreed to stay on for six months while a new CEO was being hired. Our brains do weird things.

How Our Errors Perpetuate

In addition to our own cognitive biases, there are a number of external factors that conspire to perpetuate our beliefs. The first is that we tend to embed ourselves in networks of people whose experiences and perspectives are similar to our own. Scientific evidence shows that we conform to the views around us, and that effect extends out to three degrees of relationships.

Once we find our tribe, we tend to view outsiders suspiciously and are less likely to scrutinize allies. In one study, adults randomly assigned to “leopards” and “tigers” showed hostility toward out-group members in fMRI scans. Research from MIT suggests that when we are around people we expect to agree with us, we don’t check facts closely and are more likely to share false information.

In his book How Minds Change, David McRaney points out that people who are able to leave cults or reject long-held conspiracy theories first build alternative social networks. Our associations form an important part of our identity, so we are loath to change opinions that signal our inclusion in our tribe. There are deep evolutionary forces that drive us to be stalwart citizens of the communities we join.

Taibbi was, for years, a respected investigative journalist at Rolling Stone magazine. There, he had editors and fact checkers to answer to. Now, as an independent journalist, he has only the networks that he chooses to give him feedback and, being human like all of us, he subtly conforms to a set of dispositions and perspectives.

I probably fell prey to similar influences. As someone who researches innovation, I spend a lot of time with people who regard Netflix as a hero and Blockbuster as something of a bumbler. That probably affected how I perceived Antioco’s departure from the company. We all have blind spots and fall prey to the operational glitches in our brains. No one is immune.

Learning How To Not Fool Ourselves

In one of my favorite essays, the physicist Richard Feynman wrote, “The first principle is that you must not fool yourself — and you are the easiest person to fool. So you have to be very careful about that.” He goes on to say that simply being honest isn’t enough; you also need to “bend over backwards” to provide information so that others may prove you wrong.

So the first step is to be hyper-vigilant and aware that your brain has a tendency to fool you. It will quickly seize on the most readily available data and detect patterns that may or may not be there. Then it will seek out other evidence that confirms those initial hunches while disregarding contrary evidence.

This is especially true of smart, accomplished people. Those who have been right in the past, who have proved the doubters wrong, are going to be less likely to see the warning signs. In many cases, they will even see opposition to their views as evidence they are on the right track. There’s a sucker born every minute and they’re usually the ones who think that they’re playing it smart.

Checking ourselves isn’t nearly enough; we need to actively seek out other views and perspectives. Some of this can be done with formal processes such as pre-mortems and red teams, but a lot of it is just acknowledging that we have blind spots, building the habit of reaching out to others and improving our listening skills.

Perhaps most of all, we need to have a sense of humility. It’s far too easy to be impressed with ourselves and far too difficult to see how we’re being led astray. There is often a negative correlation between our level of certainty and the likelihood of us being wrong. We all need to make an effort to believe less of what we think.

— Article courtesy of the Digital Tonto blog
— Image credit: 1 of 1,050+ FREE quotes for your meetings & presentations at http://misterinnovation.com

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Supercritical Water Oxidation (SCWO)

Designing the Future of Waste Destruction

LAST UPDATED: January 22, 2026 at 5:36 PM


GUEST POST from Art Inteligencia

As we navigate the complexities of 2026, the global innovation community is increasingly focused on sustainable competitive advantage. But sustainability is no longer just a buzzword for the Environmental, Social, and Governance (ESG) report; it is a fundamental engineering and human-centered challenge. We are currently witnessing a paradigm shift in how we handle the “unhandleable” — toxic wastes like Per- and Polyfluoroalkyl Substances (PFAS), chemical agents, and industrial sludges. At the heart of this revolution is Supercritical Water Oxidation (SCWO).

Innovation, as I often say, is about increasing the probability of the impossible. For decades, the permanent destruction of “forever chemicals” felt like a biological and chemical impossibility. SCWO changes that math by leveraging the unique properties of water at its critical point — 374°C and 22.1 MPa — to create a “homogeneous” environment where organic waste is effectively incinerated without the flame, converting toxins into harmless water, carbon dioxide, and salts.

“Innovation transforms the useful seeds of invention into widely adopted solutions valued above every existing alternative. With SCWO, we aren’t just managing waste; we are redesigning our relationship with the environment by choosing permanent destruction over temporary storage.” — Braden Kelley

The Mechanism of Change

In a standard liquid state, water is a polar solvent. However, when pushed into a supercritical state, its dielectric constant drops, and it begins to behave like a nonpolar organic solvent. This allows oxygen and organic compounds to become completely miscible. The result? A rapid, high-efficiency oxidation reaction that happens in seconds. For the human-centered leader, this represents more than just a chemical reaction; it represents agility. It allows us to process waste on-site, reducing the carbon footprint and risk associated with transporting hazardous materials.
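To see those thresholds in working form, here is a minimal Python sketch, assuming a simple first-order destruction model. The critical-point values come from the figures above; the rate constant and residence time are illustrative assumptions, not plant data.

    import math

    # Critical point of water, as cited in this article.
    T_CRIT_C = 374.0     # degrees Celsius
    P_CRIT_MPA = 22.1    # megapascals

    def is_supercritical(temp_c: float, pressure_mpa: float) -> bool:
        """Water is supercritical only when BOTH temperature and pressure
        exceed the critical point."""
        return temp_c > T_CRIT_C and pressure_mpa > P_CRIT_MPA

    def destruction_fraction(rate_per_s: float, residence_s: float) -> float:
        """First-order decay: fraction of organics destroyed after a given
        residence time. The rate constant is a placeholder value."""
        return 1.0 - math.exp(-rate_per_s * residence_s)

    print(is_supercritical(400.0, 25.0))   # True: typical reactor conditions
    print(is_supercritical(374.0, 22.1))   # False: exactly at the critical point
    # An assumed k = 1.5 per second over ~10 s of residence clears 99.99%.
    print(f"destroyed: {destruction_fraction(1.5, 10.0):.7f}")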

Case Study 1: Eliminating the “Forever” in PFAS

In a recent multi-provider demonstration involving 374Water, Battelle, and Aquarden, SCWO technology was tested against Aqueous Film-Forming Foam (AFFF) contaminated with high concentrations of PFAS. The results were staggering. The systems achieved a 99.99% reduction in total PFAS. By shifting from a “filtration and storage” mindset to a “destruction” mindset, these organizations proved that the technical debt of past industrial eras can be settled permanently. This is a classic example of using curiosity to solve a legacy problem that traditional ROI models would have ignored.

Market Leaders and The Innovation Ecosystem

The commercialization of SCWO is being driven by a dynamic ecosystem of established players and agile startups. 374Water (NASDAQ: SCWO) remains a prominent leader, recently expanding its board to accelerate the global rollout of its “AirSCWO” systems. Revive Environmental has also made significant waves by deploying its “PFAS Annihilator,” a mobile SCWO unit that can treat up to 500,000 gallons of landfill leachate daily. Other key innovators include Aquarden Technologies in Denmark, Battelle, and specialized engineering firms like Chematur Engineering AB. These companies aren’t just selling hardware; they are selling a future where waste management is a closed-loop system.

Case Study 2: Industrial Sludge and Energy Recovery

A European chemical manufacturing plant integrated a tubular SCWO reactor to handle hazardous organic sludges that previously required expensive off-site incineration. Not only did the SCWO process destroy 99.9% of the toxins, but the plant also implemented a heat recovery system. Because the oxidation reaction is exothermic, they were able to capture the excess heat to pre-heat the influent waste, significantly lowering operational costs. This transformation of a cost-center (waste disposal) into a self-sustaining utility is exactly the type of systemic innovation I encourage leaders to pursue.
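A back-of-the-envelope energy balance shows why this can become self-sustaining. The flow rate, heating value, and exchanger efficiency below are assumptions for illustration, and the constant specific heat is a simplification (water’s heat capacity varies sharply near the critical point).

    CP_WATER_KJ_PER_KG_K = 4.18  # approximate specific heat of liquid water

    def preheat_duty_kw(flow_kg_s: float, t_in_c: float, t_out_c: float) -> float:
        """Heat (kW) needed to raise the influent from t_in_c to t_out_c,
        assuming constant specific heat."""
        return flow_kg_s * CP_WATER_KJ_PER_KG_K * (t_out_c - t_in_c)

    def recoverable_heat_kw(flow_kg_s: float, heating_value_kj_kg: float,
                            recovery_efficiency: float = 0.7) -> float:
        """Exothermic reaction heat available for recovery, given the waste's
        organic heating value and an assumed exchanger efficiency."""
        return flow_kg_s * heating_value_kj_kg * recovery_efficiency

    demand = preheat_duty_kw(1.0, 25.0, 374.0)   # ~1459 kW to reach reaction temperature
    supply = recoverable_heat_kw(1.0, 2500.0)    # ~1750 kW from an assumed sludge
    print("self-sustaining" if supply >= demand else "needs trim heating")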

Final Thoughts: The Curiosity Advantage

The half-life of our current waste management techniques is shrinking. Landfills are filling, and regulations are tightening. The organizations that thrive will be those that exercise the collective capacity for curiosity to adopt “future-present” technologies like SCWO. We must stop asking “How do we hide the waste?” and start asking “How do we unmake it?”


Supercritical Water Oxidation (SCWO) FAQ

What are the primary benefits of SCWO over traditional incineration?

SCWO operates in a closed system at lower temperatures than incineration, preventing the formation of harmful NOx, SOx, and dioxins. It also allows for higher destruction efficiency (often >99.99%) for persistent organic pollutants like PFAS.

Can SCWO systems recover energy from waste?

Yes. The oxidation process in SCWO is exothermic (it releases heat). Many modern commercial systems are designed to capture this energy to pre-heat the influent waste or generate steam for other industrial processes.

Is SCWO technology ready for large-scale industrial use?

While historically challenged by corrosion and salt buildup, 2026-era SCWO systems from leaders like 374Water and Revive Environmental use advanced materials and “transpiring wall” designs to handle these issues, making them viable for municipal and industrial scale-up.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

We Must Hold AI Accountable


GUEST POST from Greg Satell

About ten years ago, IBM invited me to talk with some key members of the Watson team, when the triumph of creating a machine that could beat the best human players at the game show Jeopardy! was still fresh. I wrote in Forbes at the time that we were entering a new era of cognitive collaboration between humans, computers and other humans.

One thing that struck me was how similar the moment seemed to how aviation legend Chuck Yeager described the advent of fly-by-wire, four decades earlier, in which pilots would no longer operate aircraft directly, but would interface with a computer that flew the plane. Many of the macho “flyboys” weren’t able to trust the machines and couldn’t adapt.

Now, with the launch of ChatGPT, Bill Gates has announced that the age of AI has begun and, much like those old flyboys, we’re all going to struggle to adapt. Our success will not only rely on our ability to learn new skills and work in new ways, but the extent to which we are able to trust our machine collaborators. To reach its potential, AI will need to become accountable.

Recognizing Data Bias

With humans, we work diligently to construct safe and constructive learning environments. We design curriculums, carefully selecting materials, instructors and students to try and get the right mix of information and social dynamics. We go to all this trouble because we understand that the environment we create greatly influences the learning experience.

Machines also have a learning environment called a “corpus.” If, for example, you want to teach an algorithm to recognize cats, you expose it to thousands of pictures of cats. In time, it figures out how to tell the difference between, say, a cat and a dog. Much like with human beings, it is through learning from these experiences that algorithms become useful.

However, the process can go horribly awry. A famous case is Microsoft’s Tay, a Twitter bot that the company unleashed on the microblogging platform in 2016. In under a day, Tay went from being friendly and casual (“humans are super cool”) to downright scary (“Hitler was right and I hate Jews”). It was profoundly disturbing.

Bias in the learning corpus is far more common than we often realize. Do an image search for the word “professional haircut” and you will get almost exclusively pictures of white men. Do the same for “unprofessional haircut” and you will see much more racial and gender diversity.

It’s not hard to figure out why this happens. Editors writing articles about haircuts portray white men in one way and other genders and races in another. When we query machines, we inevitably find our own biases baked in.

Accounting For Algorithmic Bias

A second major source of bias results from how decision-making models are designed. Consider the case of Sarah Wysocki, a fifth grade teacher who — despite being lauded by parents, students, and administrators alike — was fired from the D.C. school district because an algorithm judged her performance to be sub-par. Why? It’s not exactly clear, because the system was too complex to be understood by those who fired her.

Yet it’s not hard to imagine how it could happen. If a teacher’s ability is evaluated based on test scores, then other aspects of performance, such as taking on children with learning differences or emotional problems, would fail to register, or even unfairly penalize them. Good human managers recognize outliers; algorithms generally aren’t designed that way.

In other cases, models are constructed according to what data is easiest to acquire, or the model is overfit to a specific set of cases and is then applied too broadly. In 2013, Google Flu Trends predicted almost twice as many flu cases as there actually were. What appears to have happened is that increased media coverage about Google Flu Trends led to more searches by people who weren’t sick. The algorithm was never designed to take itself into account.

The simple fact is that an algorithm must be designed in one way or another. Every possible contingency cannot be pursued. Choices have to be made and bias will inevitably creep in. Mistakes happen. The key is not to eliminate error, but to make our systems accountable through explainability, auditability and transparency.

To Build An Era Of Cognitive Collaboration We First Need To Build Trust

In 2020, Ofqual, the authority that administers A-Level college entrance exams in the UK, found itself mired in scandal. Unable to hold live exams because of Covid-19, it designed and employed an algorithm that based scores partly on the historical performance of the schools students attended. The unintended consequence was that already disadvantaged students found themselves further penalized by artificially deflated scores.

The outcry was immediate, but in a sense the Ofqual case is a happy story. Because the agency was transparent about how the algorithm was constructed, the source of the bias was quickly revealed, corrective action was taken in a timely manner, and much of the damage was likely mitigated. As Linus’s Law advises, “given enough eyeballs, all bugs are shallow.”

The age of artificial intelligence requires us to collaborate with machines, leveraging their capabilities to better serve other humans. To make that collaboration successful, however, it needs to take place in an atmosphere of trust. Machines, just like humans, need to be held accountable; their decisions and insights can’t be a “black box.” We need to be able to understand where their judgments come from and how their decisions are being made.

Senator Schumer worked on legislation to promote more transparency in 2024, but that is only a start and the new administration has pushed the pause button on AI regulation. The real change has to come from within ourselves and how we see our relationships with the machines we create. Marshall McLuhan wrote that media are extensions of man and the same can be said for technology. Our machines inherit our human weaknesses and frailties. We need to make allowances for that.

— Article courtesy of the Digital Tonto blog
— Image credit: Flickr

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

The Mesh – Collaborative Sensing and the Future of Organizational Intelligence

LAST UPDATED: January 15, 2026 at 5:31 PM


GUEST POST from Art Inteligencia

For decades, organizations have operated like giant, slow-moving mammals with centralized nervous systems. Information traveled from the extremities (the employees and customers) up to the brain (management), where decisions were made and sent back down as commands. But in our hyper-connected, volatile world, this centralized model is failing. To thrive, we must evolve. We must move toward Collaborative Sensing — what I call The Mesh.

The Mesh is a paradigm shift where every person, every device, and every interaction becomes a sensor. It is a decentralized network of intelligence that allows an organization to sense, respond, and adapt in real-time. Instead of waiting for a quarterly report to tell you that a project is failing or a customer trend is shifting, The Mesh tells you the moment the first signal appears. This is human-centered innovation at its most agile.

“The smartest organizations of the future will not be those with the most powerful central computers, but those with the most sensitive and collaborative human-digital mesh. Intelligence is no longer something you possess; it is something you participate in.” — Braden Kelley

From Centralized Silos to Distributed Awareness

In a traditional hierarchy, silos prevent information from flowing horizontally. In a Mesh environment, data is shared peer-to-peer. Collaborative sensing leverages the wisdom of the crowd and the precision of the Internet of Things (IoT) to create a high-resolution picture of reality. This isn’t just about “big data”; it is about thick data — the qualitative, human context that explains the numbers.

When humans and machines collaborate in a sensing mesh, we achieve what I call Anticipatory Leadership. We stop reacting to the past and start shaping the future as it emerges. This requires a culture of radical transparency and psychological safety, where sharing a “negative” signal is seen as a contribution to the collective health of the mesh.

Leading the Charge: Companies and Startups in the Mesh

The landscape of collaborative sensing is being defined by a mix of established giants and disruptive startups. IBM and Cisco are laying the enterprise-grade foundation with their edge computing and industrial IoT frameworks, while Siemens is integrating collaborative sensing into the very fabric of smart cities and factories. On the startup front, companies like Helium are revolutionizing how decentralized wireless networks are built by incentivizing individuals to host “nodes.” Meanwhile, Nodle is creating a citizen-powered mesh network using Bluetooth on smartphones, and StreetLight Data is utilizing the mesh of mobile signals to transform urban planning. These players are proving that the most valuable data is distributed, not centralized.

Case Study 1: Transforming Safety in Industrial Environments

The Challenge

A global mining operation struggled with high rates of “near-miss” accidents. Traditional safety protocols relied on manual reporting after an incident occurred. By the time management reviewed the data, the conditions that caused the risk had often changed, making preventative action difficult.

The Mesh Solution

The company implemented a collaborative sensing mesh. Workers were equipped with wearable sensors that tracked environmental hazards (gas levels, heat) and physiological stress. Simultaneously, heavy machinery was outfitted with proximity sensors. These nodes communicated locally — machine to machine and machine to human.

The Human-Centered Result

The “sensing” happened at the edge. If a worker’s stress levels spiked while a vehicle was approaching an unsafe zone, the mesh triggered an immediate haptic alert to the worker and slowed the vehicle automatically. Over six months, near-misses dropped by 40%. The organization didn’t just get “safer”; it became a learning organization that used real-time data to redesign workflows around human limitations and strengths.
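The essence of that edge rule fits in a few lines. This is a conceptual sketch only; the thresholds, field names, and actions are assumptions for illustration, not the mining operation’s actual system.

    from dataclasses import dataclass

    @dataclass
    class WorkerNode:
        worker_id: str
        stress_level: float   # normalized 0.0-1.0 from wearable sensors

    @dataclass
    class VehicleNode:
        vehicle_id: str
        distance_m: float     # proximity-sensor distance to the nearest worker

    STRESS_THRESHOLD = 0.8
    UNSAFE_DISTANCE_M = 15.0

    def evaluate_edge_rule(worker: WorkerNode, vehicle: VehicleNode) -> list[str]:
        """Runs locally on the mesh node, at the edge: both signals must cross
        their thresholds before any action fires, with no round-trip to a
        central server."""
        if worker.stress_level > STRESS_THRESHOLD and vehicle.distance_m < UNSAFE_DISTANCE_M:
            return [f"haptic_alert:{worker.worker_id}", f"slow_vehicle:{vehicle.vehicle_id}"]
        return []

    print(evaluate_edge_rule(WorkerNode("w17", 0.86), VehicleNode("haul-3", 9.5)))
    # -> ['haptic_alert:w17', 'slow_vehicle:haul-3']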

Case Study 2: Urban Resilience and Citizen Sensing

The Challenge

A coastal city prone to flash flooding relied on a few expensive, centralized weather stations. These stations often missed hyper-local rain events that flooded specific neighborhoods, leaving emergency services flat-footed.

The Mesh Solution

The city launched a Citizen Sensing initiative. They distributed low-cost, connected rain gauges to residents and integrated data from connected cars’ windshield wiper activity. This created a high-density sensing mesh across the entire geography.

The Human-Centered Result

Instead of one data point for the whole city, planners had thousands. When a localized cell hit a specific district, the mesh automatically updated digital signage to reroute traffic and alerted residents in that specific block minutes before the water rose. This moved the city from crisis management to collaborative resilience, empowering citizens to be active participants in their own safety.
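The hyper-local aggregation logic reads the same way in code: group readings by neighborhood and alert on any local average that crosses a threshold, regardless of the citywide mean. The neighborhoods, readings, and threshold below are made-up illustrations.

    from collections import defaultdict
    from statistics import mean

    # (neighborhood, rainfall mm/h) from citizen gauges and connected cars
    readings = [
        ("harborside", 42.0), ("harborside", 51.0), ("harborside", 47.5),
        ("uptown", 3.0), ("uptown", 4.5),
    ]

    ALERT_MM_PER_H = 30.0

    by_area: dict[str, list[float]] = defaultdict(list)
    for area, mm in readings:
        by_area[area].append(mm)

    for area, values in by_area.items():
        if mean(values) > ALERT_MM_PER_H:
            print(f"ALERT {area}: {mean(values):.1f} mm/h -> reroute traffic, notify residents")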

Building Your Organizational Mesh

If you are looking to help your team navigate this transition, start by asking: Where is our organization currently numb? Where are the blind spots where information exists but isn’t being sensed or shared?

To build a successful Mesh, you must prioritize:

  • Interoperability: Ensuring different sensors and humans can “speak” to each other across platforms.
  • Privacy by Design: Ensuring the mesh protects individual identity while sharing collective insight.
  • Incentivization: Why should people participate? The mesh must provide value back to those who provide the data.

The Mesh is not just a technological infrastructure; it is a human-centered mindset. It is the realization that we are all nodes in a larger system of intelligence. When we sense together, we succeed together.

Frequently Asked Questions on Collaborative Sensing

Q: What is Collaborative Sensing or ‘The Mesh’?

A: Collaborative Sensing is a decentralized approach to intelligence where humans and IoT devices work in a networked “mesh” to share real-time data. Unlike top-down systems, it relies on distributed nodes to sense, process, and act on information locally and collectively.

Q: How does Collaborative Sensing benefit human-centered innovation?

A: It moves the focus from “big data” to “human context.” By sensing environmental and social signals in real-time, organizations can respond to human needs with greater empathy and precision, reducing friction in everything from city planning to workplace safety.

Q: What is the primary challenge in implementing a Mesh network?

A: The primary challenge is trust and data governance. For a mesh to work effectively, participants must be confident that their data is secure, anonymous where necessary, and used for collective benefit rather than invasive surveillance.

Disclaimer: This article speculates on the potential future applications of cutting-edge scientific research. While based on current scientific understanding, the practical realization of these concepts may vary in timeline and feasibility and are subject to ongoing research and development.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Humans Don’t Have to Perform Every Task


GUEST POST from Shep Hyken

There seems to be a lot of controversy and questions surrounding artificial intelligence (AI) being used to support customers. The customer experience can be enhanced with AI, but it can also derail and cause customers to head to the competition.

Last week, I wrote an article titled Just Because You Can Use AI Doesn’t Mean You Should. The gist of the article was that while AI has impressive capabilities, there are situations in which human-to-human interaction is still preferred, even necessary, especially for complex, sensitive or emotionally charged customer issues.

However, there is a flip side. Sometimes AI is the smart thing to use, and eliminating human-to-human interaction actually creates a better customer experience. The point is that just because a human could handle a task doesn’t mean they should. 

Before we go further, keep in mind that even if AI should handle an issue, my customer service and customer experience (CX) research finds almost seven out of 10 customers (68%) prefer the phone. So, there are some customers who, regardless of how good AI is, will only talk to a live human being.

Here’s a reality: When a customer simply wants to check their account balance, reset a password, track a package or complete any other routine, simple task, they don’t need to talk to someone. What they really want, even if they don’t realize it, is fast, accurate information and a convenient experience.

The key is recognizing when customers value efficiency over engagement. Even with 68% of customers preferring the phone, they also want convenience and speed. And sometimes, the most convenient experience is one that eliminates unnecessary human interaction.

Smart companies are learning to use both strategically. They are finding a balance. They’re using AI for routine, transactional interactions while making live agents available for situations requiring judgment, creativity or empathy.

The goal isn’t to replace humans with AI. It’s to use each where they excel most. That sometimes means letting technology do what it can do best, even if a human could technically do the job. The customer experience improves when you match the right resource to the customers’ specific need.

That’s why I advocate pushing the digital, AI-infused experience for the right reasons but always – and I emphasize the word always – giving the customer an easy way to connect to a human and continue the conversation.

In the end, most customers don’t care whether their problem is solved by a human or AI. They just want it solved well.

Image credits: Google Gemini, Shep Hyken

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

A New Era of Economic Warfare Arrives

Is Your Company Prepared?

LAST UPDATED: January 9, 2026 at 3:55 PM


GUEST POST from Art Inteligencia

Economic warfare rarely announces itself. It embeds quietly into systems designed for trust, openness, and speed. By the time damage becomes visible, advantage has already shifted.

This new era of conflict is not defined by tanks or tariffs alone, but by the strategic exploitation of interdependence — where innovation ecosystems, supply chains, data flows, and cultural platforms become contested terrain.

The most effective economic attacks do not destroy systems outright. They drain them slowly enough to avoid response.

Weaponizing Openness

For decades, the United States has benefited from a research and innovation model grounded in openness, collaboration, and academic freedom. Those same qualities, however, have been repeatedly exploited.

Publicly documented prosecutions, investigations, and corporate disclosures describe coordinated efforts to extract intellectual property from American universities, national laboratories, and private companies through undisclosed affiliations, parallel research pipelines, and cyber-enabled theft.

This is not opportunistic theft. It is strategic harvesting.

When innovation can be copied faster than it can be created, openness becomes a liability instead of a strength.

Cyber Persistence as Economic Strategy

Cyber operations today prioritize persistence over spectacle. Continuous access to sensitive systems allows competitors to shortcut development cycles, underprice rivals, and anticipate strategic moves.

The goal is not disruption — it is advantage.

Skydio and Supply Chain Chokepoints

The experience of American drone manufacturer Skydio illustrates how economic pressure can be applied without direct confrontation.

After achieving leadership through autonomy and software-driven innovation rather than low-cost manufacturing, Skydio encountered pressure through access constraints tied to upstream supply chains.

This was a calculated attack on a successful American business. It serves as a stark reminder: if you depend on a potential adversary for your components, your success is only permitted as long as it doesn’t challenge their dominance. We must decouple our innovation from external control, or we will remain permanently vulnerable.

When supply chains are weaponized, markets no longer reward the best ideas — only the most protected ones.

Agricultural and Biological Vulnerabilities

Incidents involving the unauthorized movement of biological materials related to agriculture and bioscience highlight a critical blind spot. Food systems are economic infrastructure.

Crop blight, livestock disease, and agricultural disruption do not need to be dramatic to be devastating. They only need to be targeted, deniable, and difficult to attribute.

Pandemics and Systemic Shock

The origins of COVID-19 remain contested, with investigations examining both natural spillover and laboratory-associated scenarios. From an economic warfare perspective, attribution matters less than exposure.

The pandemic revealed how research opacity, delayed disclosure, and global interdependence can cascade into economic devastation on a scale rivaling major wars.

Resilience must be designed for uncertainty, not certainty.

The Attention Economy as Strategic Terrain and Algorithmic Narcotic

Platforms such as TikTok represent a new form of economic influence: large-scale behavioral shaping.

Regulatory and academic concerns focus on data governance, algorithmic amplification, and the psychological impact on youth attention, agency, and civic engagement.

TikTok is not just a social media app; it is a cognitive weapon. In China, the algorithm behind Douyin, TikTok’s domestic sibling, pushes users toward educational content, engineering, and national achievement. In America, the algorithm pushes our youth toward mindless consumption, social fragmentation, and addictive cycles that weaken the mental resilience of the next generation. This is an intentional weakening of our human capital. By controlling the narrative and the attention of 170 million Americans, the platform has made American children part of a massive experiment in psychological warfare, one designed to ensure that the next generation of Americans is too distracted to lead and too divided to innovate.

Whether intentional or emergent, influence over attention increasingly translates into long-term economic leverage.

The Human Cost of Invisible Conflict

Economic warfare succeeds because its consequences unfold slowly: hollowed industries, lost startups, diminished trust, and weakened social cohesion.

True resilience is not built by reacting to attacks, but by redesigning systems so exploitation becomes expensive and contribution becomes the easiest path forward.

Conclusion

This is not a call for isolation or paranoia. It is a call for strategic maturity.

Openness without safeguards is not virtue — it is exposure. Innovation without resilience is not leadership — it is extraction.

The era of complacency must end. We must treat economic security as national security. This means securing our universities, diversifying our supply chains, and demanding transparency in our digital and biological interactions. We have the power to stoke our own innovation bonfire, but only if we are willing to protect it from those who wish to extinguish it.

The next era of competition will reward nations and companies that design systems where trust is earned, reciprocity is enforced, and long-term value creation is protected.

Frequently Asked Questions

What is economic warfare?

Economic warfare refers to the use of non-military tools — such as intellectual property extraction, cyber operations, supply chain control, and influence platforms — to weaken a rival’s economic position and long-term competitiveness.

Is China the only country using these tactics?

No. Many nations engage in forms of economic competition that blur into coercion. The concern highlighted here is about scale, coordination, and the systematic exploitation of open systems.

How should the United States respond?

By strengthening resilience rather than retreating from openness — protecting critical research, diversifying supply chains, aligning innovation policy with national strategy, and designing systems that reward contribution over extraction.

How should your company protect itself?

Companies should identify their critical knowledge assets, limit unnecessary exposure, diversify suppliers, strengthen cybersecurity, enforce disclosure and governance standards, and design partnerships that balance collaboration with protection. Resilience should be treated as a strategic capability, not a compliance exercise.

Image credits: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

Rearchitecting the Landscape of Knowledge Work


GUEST POST from Geoffrey A. Moore

One thing the pandemic made clear to everyone involved with the knowledge-work profession is that daily commuting was a ludicrously excessive tax on their time. The amount of work they were able to get done remotely clearly exceeded what they were getting done previously, and the reduction in stress was both welcome and productive. So, let’s be clear, there is no “going back to the office.” What is possible, on the other hand, is going forward to the office, and that is what we are going to discuss in this blog post.

The point is, we need to rethink the landscape of knowledge work—what work is best done where, and why. Let’s start with remote. Routine task work of the sort that a professional is expected to complete on their own is ideally suited to remote working. It requires no supervision to speak of and little engagement with others except at assigned checkpoints. Those checkpoints can be managed easily through video conferencing combined with collaboration-enabling software like Slack or Teams. Productivity commitments are monitored in terms of the quality and quantity of received work. This is game-changing for everyone involved, and we would be crazy to forsake these gains simply to comply with a return-to-the-office mandate.

That said, there are many good reasons still to want a return. Before we dig into them, however, let’s spend a moment on the bad reasons first. First among them is what we might call “boomer executive control needs”—a carry-over from the days of hierarchical management structures that to this day still run most of our bureaucracies. Implicit in this model is the notion that everyone needs supervision all the time. Let me just say that if that is the case in your knowledge-work organization, you are in big trouble, and mandating everyone to come back to the office is not going to fix it. The fix needed is workforce engagement, and that requires personal intervention, not systemic enforcement. Yes, you want to do this in person, and yes, the office is typically the right place to do so, but no, you don’t need everyone to be there all the time to do it.

This same caveat applies to other reasons why enterprises are mandating a return. Knowledge work benefits from social interactions with colleagues. You get to float ideas, hear about new developments, learn from observing others, and the like. It is all good, and you do need to be collocated to do it—just not every day. What is required instead is a new cadence. People need an established routine to know when they are expected to show up, one they can plan around far in advance. In short, we need the discipline of office attendance; we just want it to be more respectful of our remote work. In that light, a good place to start is a 60/40 split—your call as to which is which. But for the days that are in office, attendance is expected, not optional. To do anything else is to disrespect your colleagues and to put your personal convenience above the best interests of the enterprise that is funding you.

So much for coping with some of the bad reasons. Now let’s look into five good ones.

  1. Customer-facing challenges. This includes sales, account management, and customer success (but not customer support or tech support). The point is, whenever things are up for grabs on the customer side, it takes a team to wrestle them down to earth, and the members of that team need to be in close communication to detect the signals, strategize the responses, and leverage each other’s relationships and expertise. You don’t get to say when this happens, so you have to show up every day ready to play (meaning 80/20 is probably a more effective in-office/out-of-office ratio).
  2. Onboarding, team building, and M&A integration. Things can also be up for grabs inside your own organization, particularly when you are adding new people, building a new team (or turning around an old one), or integrating an acquisition. In these kinds of fluid situations, there is a ton of non-verbal communication, both to detect and to project, and there is simply no substitute for collocation. By contrast, career development, mentoring, and performance reviews are best conducted one-on-one, and here modern video conferencing with its high-definition visuals and zero-latency audio can actually induce a more focused conversation.
  3. Mission-critical systems operations. This is just common sense—if the wheels start to come off, you do not want to lose time assembling the team. Cybersecurity attacks would be one good example. On the other hand, with proper IT infrastructure, routine system monitoring and maintenance, as well as standard end-user support, can readily leverage remote expertise.
  4. In-house incubations. It is possible to do a remote-only start-up if you have most of the team in place from the beginning, leveraging time in collocation at a prior company, especially if the talent you need is super-scarce and geographically dispersed.

    But for public enterprises leveraging the Incubation Zone, as well as lines of business conducting nested incubation inside their own organizations, a cadence surrounding collocation is critical. The reason is that incubations call for agile decision-making, coordinated course corrections, fast failures, and even faster responses to them. You don’t have to be together every day—there is still plenty of individual knowledge work to be done, but you do need to keep in close formation, and that requires frequent unscripted connections.

  5. Cross-functional programs and projects. These are simply impossible to do on a remote basis. There are too many new relationships that must be established, too many informal negotiations to get resources assigned, too many group sessions to get people aligned, and too much lobbying to get the additional support you need. This is especially true when the team is led by a middle manager who has no direct authority over the team members, only their managers’ commitment and their own good will.

So, what’s the best in-office/remote ratio for your organization?

You might try doing a high-level inventory of all the work you do, calling out for each workload which mode of working is preferable, and totaling it up to get a first cut. You can be sure that whatever you come up with will be wrong, but that’s OK because your next step will be to socialize it. Once you get enough fingerprints on it, you will go live with it, only to confirm it is still wrong, but now with a coalition of the willing to make it right, if only to make themselves look better.
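For what it’s worth, the first cut really can be that mechanical. The workloads and weekly hours below are placeholders, assuming the remote/office assignments argued above; substitute your own inventory.

    # Toy workload inventory: hours per week and the preferred working mode.
    workloads = {
        "routine individual task work": (20, "remote"),
        "customer-facing challenges":   (8,  "office"),
        "cross-functional programs":    (6,  "office"),
        "mentoring and reviews":        (4,  "remote"),
    }

    office_hours = sum(hours for hours, mode in workloads.values() if mode == "office")
    total_hours = sum(hours for hours, _ in workloads.values())
    print(f"first-cut in-office share: {office_hours / total_hours:.0%}")  # -> 37%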

Ain’t management fun?

That’s what I think. What do you think?

Image Credit: Google Gemini

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Just Because You Can Use AI Doesn’t Mean You Should


GUEST POST from Shep Hyken

I’m often asked, “What should AI be used for?” While there is much that AI can do to support businesses in general, it’s obvious that I’m being asked how it relates to customer service and customer experience (CX). The true meaning of the question is more about what tasks AI can do to support a customer, thereby potentially eliminating the need for a live agent who deals directly with customers.

First, as the title of this article implies, just because AI can do something, it doesn’t mean it should. Yes, AI can handle many customer support issues, but even if every customer were willing to accept that AI can deliver good support, there are some sensitive and complicated issues for which customers would prefer to talk to a human.

AI Shep Hyken Cartoon

Additionally, consider that, based on my annual customer experience research, 68% of customers (that’s almost seven out of 10) prefer the phone as their primary means of communication with a company or brand. However, another finding in the report is worth mentioning: 34% of customers stopped doing business with a company because self-service options were not provided. Some customers insist on the self-service option, but at the same time, they want to be transferred to a live agent when appropriate.

AI works well for simple issues, such as password resets, tracking orders, appointment scheduling and answering basic or frequently asked questions. Humans are better suited for handling complaints and issues that need empathy, complex problem-solving situations that require judgment calls and communicating bad news.

An AI-fueled chatbot can answer many questions, but when a medical patient contacts the doctor’s office about test results related to a serious issue, they will likely want to speak with a nurse or doctor, not a chatbot.

Consider These Questions Before Implementing AI For Customer Interactions

AI for addressing simple customer issues has become affordable for even the smallest businesses, and an increasing number of customers are willing to use AI-powered customer support for the right reasons. Consider these questions before implementing AI for customer interactions:

  1. Is the customer’s question routine or fact-based?
  2. Does it require empathy, emotion, understanding and/or judgment (emotional intelligence)?
  3. Could the wrong answer cause a problem or frustrate the customer?
  4. As you think about the reasons customers call, which ones would they feel comfortable having AI handle?
  5. Do you have an easy, seamless way for the customer to be transferred to a human when needed?

The point is, regardless of how capable the technology is, it doesn’t mean it is best suited to deliver what the customer wants. Live agents can “read the customer” and know how to effectively communicate and empathize with them. AI can’t do that … yet. The key isn’t choosing between AI and humans. It’s knowing when to use each one.
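Translated into logic, those five questions amount to a triage rule: route to AI only when the issue is routine and low-stakes, and only when a human handoff exists. This sketch is one possible encoding, with assumed inputs; it is not a production routing policy.

    def route_interaction(routine: bool, needs_empathy: bool,
                          high_stakes: bool, human_path_available: bool) -> str:
        """Decide whether an interaction goes to AI or a live agent."""
        if not human_path_available:
            return "human"   # never strand the customer with AI alone
        if needs_empathy or high_stakes:
            return "human"   # empathy, judgment, and bad news stay human
        return "ai" if routine else "human"

    # e.g., a password reset vs. a call about serious medical test results
    print(route_interaction(routine=True, needs_empathy=False,
                            high_stakes=False, human_path_available=True))   # -> ai
    print(route_interaction(routine=False, needs_empathy=True,
                            high_stakes=True, human_path_available=True))    # -> human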

Image credits: Google Gemini, Shep Hyken

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.