Category Archives: Technology

DNA May Be the Next Frontier of Computing and Data Storage


GUEST POST from Greg Satell

Data, as many have noted, has become the new oil, meaning that we no longer regard the information we store merely as a cost of doing business, but as a valuable asset and a potential source of competitive advantage. It has become the fuel that powers advanced technologies such as machine learning.

A problem that’s emerging, however, is that our ability to produce data is outstripping our ability to store it. In fact, an article in the journal Nature predicts that by 2040, data storage will consume 10–100 times the expected supply of microchip-grade silicon if current technology is used. Clearly, we need a data storage breakthrough.

One potential solution is DNA, which is a million times more information dense than today’s flash drives. It is also more stable, more secure and uses minimal energy. The problem is that it is currently prohibitively expensive. However, a startup that emerged out of MIT, called CATALOG, may have found the breakthrough we’re looking for: low-cost DNA storage.

The Makings Of A Scientist-Entrepreneur

Growing up in his native Korea, Hyunjun Park never planned on a career in business, much less the technology business, but expected to become a biologist. He graduated with honors from Seoul National University and then went on to earn a PhD from the University of Wisconsin. Later he joined Tim Lu’s lab at MIT, which specializes in synthetic biology.

In an earlier time, he would have followed an established career path, from PhD to post-doc to assistant professor to tenure. These days, however, there is a growing trend for graduate students to get an entrepreneurial education in parallel with the traditional scientific curriculum. Park, for example, participated in both the Wisconsin Entrepreneurial Bootcamp and Start MIT.

He also met a kindred spirit in Nate Roquet, a PhD candidate who, about to finish his thesis, had started thinking about what to do next. Inspired by a talk given by the Chief Science Officer of IndieBio, a seed fund, the two began to talk in earnest about starting a company together based on their work in synthetic biology.

As they batted around ideas, the subject of DNA storage came up. By this time, the advantages of the technology were well known, but it was not considered practical, costing hundreds of thousands of dollars to store just a few hundred megabytes of data. However, the two did some back-of-the-envelope calculations and became convinced they could do it far more cheaply.

Moving From Idea To Product

The basic concept of DNA storage is simple. Essentially, you just encode the ones and zeros of digital code into the T, G, A and C’s of genetic code. However, stringing those genetic molecules together is tedious and expensive. The idea that Park and Roquet came up with was to use enzymes to alter strands of DNA, rather than building them up piece by piece.
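To make the encoding idea concrete, here is a toy sketch in Python. The two-bits-per-base mapping below is an arbitrary illustration of the general concept, not CATALOG’s scheme, and real systems layer error correction and other machinery on top.

```python
# Toy illustration of the core idea behind DNA storage: map each pair of
# bits to one of the four nucleotides. The mapping here is arbitrary;
# real systems add error correction on top of it.

BITS_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BITS = {base: bits for bits, base in BITS_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Encode bytes as a DNA sequence, four bases per byte."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BITS_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(sequence: str) -> bytes:
    """Recover the original bytes from a DNA sequence."""
    bits = "".join(BASE_TO_BITS[base] for base in sequence)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

if __name__ == "__main__":
    message = b"hello"
    strand = encode(message)
    print(strand)  # CGGACGCCCGTACGTACGTT
    assert decode(strand) == message
```

The expensive part is the chemistry, not the mapping: synthesizing arbitrary strands base by base is exactly the tedium that CATALOG’s enzyme-based approach was designed to avoid.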

Contrary to popular opinion, most traditional venture capital firms, such as those that populate Sand Hill Road in Silicon Valley, don’t invest in ideas. They invest in products. IndieBio, however, isn’t your typical investor. It provides only a small amount of seed capital, but offers other services, such as wet labs, entrepreneurial training and scientific mentorship. Park and Roquet reached out and found some interest.

“We invest in problems, not necessarily solutions,” Arvind Gupta, Founder at IndieBio, told me. “Here the problem is massive. How do you keep the world’s knowledge safe? We know DNA can last thousands of years and can be replicated very inexpensively. That’s a really big deal and Hyunjun and Nate’s approach was incredibly exciting.”

Once the pair entered IndieBio’s four-month program, they found both promise and disappointment. Their approach could dramatically reduce the cost of storing information in DNA, but not nearly quickly enough to build a commercially viable product. They would need to pivot if they were going to turn their idea into an actual business.

Scaling To Market

One flaw in CATALOG’s approach was that the process was too complex to scale. Yet they found that by starting with just a few different DNA strands and attaching them together, much like a printing press pre-arranges words in a book, they could come up with something that was not only scalable, but commercially viable from a cost perspective.
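The printing-press analogy suggests a rough way to picture the math. The sketch below is a simplification for intuition only (the strand library and encoding are invented, not CATALOG’s actual chemistry): with a small library of premade strands, the number of addressable messages grows exponentially with the number of positions.

```python
# Toy picture of the combinatorial idea: rather than synthesizing an
# arbitrary sequence base by base, pick from a small library of premade
# strands and join them in order, like setting movable type.
# The library contents here are invented for illustration.

LIBRARY = ["AATTCCGG", "ACGTACGT", "TTGGCCAA", "GGAATTCC"]

def encode_int(value: int, length: int) -> list[str]:
    """Represent `value` in base len(LIBRARY) as an ordered list of strands."""
    base = len(LIBRARY)
    parts = []
    for _ in range(length):
        parts.append(LIBRARY[value % base])
        value //= base
    return parts[::-1]

# Four premade strands arranged across ten positions already address
# 4**10 = 1,048,576 distinct values; the library stays tiny while the
# message space grows exponentially.
print(encode_int(42, length=10))
```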

The second problem was thornier. Working with enzymes is incredibly labor-intensive and, being biologists, Park and Roquet didn’t have the mechanical engineering expertise to make their process feasible. Fortunately, an advisor, Darren Link, connected the pair to Cambridge Consultants, an innovation consultancy that could help them.

“We started looking at the problem and it seemed that, on paper at least, we could make it work,” Richard Hammond, Technology Director and Head of Synthetic Biology at Cambridge Consultants, told me. “Now we’re about halfway through making the first prototype and we believe we can make it work and scale it significantly. We’re increasingly confident that we can solve the core technical challenges.”

In 2018, CATALOG introduced the world to Shannon, its prototype DNA writer, and in 2022 it announced its DNA computation work at the HPC User Forum. But CATALOG isn’t without competition in the space; conventional archival media keep improving too, with an LTO-9 tape cartridge, for example, storing 18 TB. CATALOG, for its part, is partnering with Seagate “on several initiatives to advance scalable and automated DNA-based storage and computation platforms, including making DNA-based platforms up to 1000 times smaller.” That should make the process competitive for archival storage, such as medical and legal records, as well as film databases at movie studios.

“I think the fact that we’re inventing a completely new medium for data storage is really exciting,” Park told me. “I don’t think that we know yet what the true potential is because the biggest use cases probably don’t exist yet. What I do know is that our demand for data storage will soon outstrip our supply and we are thrilled about the possibility of solving that problem.”

Going Beyond Digital

A generation ago, the task of improving data storage would have been seen as solely a computer science problem. Yet today, the digital era is ending and we’re going to have to look further and wider for solutions to the problems we face. With the vast improvement in genomics, which is far outpacing Moore’s law these days, we can expect biology to increasingly play a role.

“Traditionally, information technology has been strictly the realm of electrical engineers, physicists and coders,” Gupta of IndieBio told me. “What we’re increasingly finding is that biology, which has been honed for millions of years by evolution, can often point the way to solutions that are more robust and, potentially, much cheaper and more efficient.”

Yet this phenomenon goes far beyond biology. We’re also seeing similar accelerations in other fields, such as materials science and space-related technologies. We’re also seeing a new breed of investors, like IndieBio, that focus specifically on scientist entrepreneurs. “I consider myself a product of the growing ecosystem for scientific entrepreneurs at universities and in the investor community,” Park told me.

Make no mistake. We are entering a new era of innovation and the traditional Silicon Valley approach will not get us where we need to go. Instead, we need to forge greater collaboration between the scientific community, the investor community and government agencies to solve problems that are increasingly complex and interdisciplinary.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay


Video Killed More Than the Radio Star


by Braden Kelley

If you are a child of the eighties, you will remember when MTV went live 24 hours a day with music videos on cable television on August 1, 1981, with the broadcast of “Video Killed the Radio Star” by the Buggles.

But I was thinking the other day about how video (or, taken more broadly, streaming media – including television, movies, gaming, social media, and the internet) has killed far more things than just radio stars. Many activities have experienced substantial declines as people stay home and engage in these forms of entertainment – often by themselves – where in the past people would leave their homes to engage in more human-to-human interactions.

The declines listed below have not only reshaped the American landscape – literally – but have also fed declines in the mental health of modern nations. Without further ado, here is the list:

1. Bowling Alleys:

Bowling alleys, once bustling with players and leagues, have faced challenges in recent years. The communal experience of bowling has been replaced by digital alternatives, impacting the industry.

2. Roller Skating Rinks:

Roller skating rinks, which were once popular hangout spots for families and teens, have seen declining attendance. The allure of roller disco and skating parties has waned as people turn to other forms of entertainment.

3. Drive-In Movie Theaters:

Drive-in movie theaters, iconic symbols of mid-20th-century entertainment, have faced challenges in recent decades. While they once provided a unique way to watch films from the comfort of your car, changing lifestyles and technological advancements have impacted their popularity.

4. Arcade Game Centers:

In the ’80s and ’90s, video game arcades were buzzing hubs of entertainment. People flocked to play games like Pac-Man, Street Fighter, and Mortal Kombat. Traditional arcade game centers, filled with pinball machines, classic video games, and ticket redemption games, have struggled to compete with home gaming consoles and online multiplayer experiences. The convenience of playing video games at home has led to a decline in arcade visits. Nostalgia keeps some arcades alive, but they are no longer as prevalent as they once were.

5. Miniature Golf Courses:

Mini-golf courses, with their whimsical obstacles and family-friendly appeal, used to be popular weekend destinations. However, the rise of digital entertainment has impacted their attendance. The allure of playing a round of mini-golf under the sun has faded for many.

6. Indoor Trampoline Parks:

Indoor trampoline parks gained popularity as a fun and active way to spend time with friends and family. However, the pandemic and subsequent lockdowns forced many of these parks to close temporarily. Even before the pandemic, the availability of home trampolines and virtual fitness classes reduced the need for indoor trampoline parks. People can now bounce and exercise at home or virtually, without leaving their living rooms.

7. Live Music Venues:

Live music venues, including small clubs, concert halls, and outdoor amphitheaters, have struggled due to changing entertainment preferences. While some artists and bands continue to perform, the rise of virtual concerts and streaming services has affected attendance. People can now enjoy live music from the comfort of their homes, reducing the need to attend physical venues. The pandemic also disrupted live events, leading to further challenges for the industry.

8. Public Libraries (In-Person Visits):

Public libraries, once bustling with readers and community events, have seen a decline in in-person visits. E-books, audiobooks, and online research resources have made it easier for people to access information without physically visiting a library. While libraries continue to offer valuable services, their role has shifted from primarily physical spaces to digital hubs for learning and exploration – and a place for latchkey kids to go and wait for their parents to get off work.

9. Shopping Malls:

Once bustling centers of retail and social activity, shopping malls have faced significant challenges in recent years. Various technological shifts have contributed to their decline, including e-commerce and online shopping, social media and influencer culture, and changing demographics and urbanization. Shopping malls are yet another place where parents no longer drop off the younger generation for the day.

And if that’s not enough, here is a bonus one for you:

10. Diners, Malt Shops, Coffee Shops, Dive Bars/Taverns, Neighborhood Pubs (UK) and Drive-In Burger Joints:

If you’re a child of the seventies or eighties, you probably tuned in to watch Richie, Potsie, Joanie, Fonzie and Ralph Malph gather every day at Al’s. Unfortunately, many of these more social and casual drinking and dining places are experiencing declines as changes in diet, habit and technology have kicked in. Demographic changes (aging out of nostalgia) and the rise of food delivery apps and takeout culture have helped sign their death warrant.

Conclusion

In the ever-evolving landscape of entertainment, video and streaming media have reshaped our experiences and interactions. As we bid farewell to once-thriving institutions, we recognize both the convenience and the cost of this digital transformation. The echoes of strikes and spares have faded as digital alternatives replace the communal joy of bowling. As we navigate this digital era, let us cherish what remains and adapt to what lies ahead. Video may have transformed our world, but the memory of lost experiences lingers, urging us to seek balance between our screens and our souls. As these once-ubiquitous gathering places disappear, consumer tastes change and social isolation increases, will we as a society seek to reverse course or evolve to some new way of reconnecting as humans in person? And if so, how?

What other places and/or activities would you have added to the list?
(sound off in the comments)

p.s. Be sure to follow both my personal account and the Human-Centered Change and Innovation community on LinkedIn.

Image credit: Pixabay



How I Use AI to Understand Humans

(and Cut Research Time by 80%)


GUEST POST from Robyn Bolton

AI is NOT a substitute for person-to-person discovery conversations or Jobs to be Done interviews.

But it is a freakin’ fantastic place to start…if you do the work before you start.

Get smart about what’s possible

When ChatGPT debuted, I had a lot of fun playing with it, but never once worried that it would replace qualitative research. Deep insights, social and emotional Jobs to be Done, and game-changing surprises only ever emerge through personal conversation. No matter how good the Large Language Model (LLM) is, it can’t tell you how people’s feelings, aspirations, and motivations drive their decisions.

Then I watched JTBD Untangled’s video with Evan Shore, Walmart’s Senior Director of Product for Health & Wellness, sharing the tests, prompts, and results his team used to compare insights from AI and traditional research approaches.

In a few hours, he generated 80% of the insights that took nine months to gather using traditional methods.

Get clear about what you want and need.

Before getting sucked into the latest shiny AI tools, get clear about what you expect the tool to do for you.  For example:

  • Provide a starting point for research: I used the free version of ChatGPT to build JTBD Canvas 2.0 for four distinct consumer personas.  The results weren’t great, but they provided a helpful starting point.  I also like Perplexity because even the free version links to sources.
  • Conduct qualitative research for me: I haven’t used it yet, but a trusted colleague recommended Outset.ai, a service that promises to get to the Why behind the What through its ability to “conduct and synthesize video, audio, and text conversations.”
  • Synthesize my research and identify insights: An AI platform built explicitly for Jobs to be Done research? Yes, please! That’s precisely what JobsLens claims to be, and while I haven’t used it in a live research project, I’ve been impressed by the results of my experiments. For non-JTBD research, Otter.ai is the original and still my favorite tool for recording, live transcription, and AI-generated summaries and key takeaways.
  • Visualize insights: Mural, Miro, and FigJam are the most widely known and used collaborative whiteboards, all offering hundreds of pre-formatted templates for personas, journey maps, and other consumer research artifacts. Another colleague recently sang the praises of TheyDo, an AI tool designed specifically for customer journey mapping.

Practice your prompts

“Garbage in, garbage out” has never been truer than with AI. Your prompts determine the accuracy and richness of the insights you’ll get, so don’t wait until you’ve started researching to hone them. If you want to start from scratch, you can learn how to write super-effective prompts here and here. If you’d rather build on someone else’s work, Brian at JobsLens has great prompt resources.

Spend time testing and refining your prompts by using a previous project as a starting point. Because you know what the output should be (or at least the output you got), you can keep refining until you get a prompt that returns what you expect. It can take hours, days, or even weeks to craft effective prompts, but once you have them, you can re-use them for future projects.
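As a concrete illustration, here is a minimal sketch of what a reusable synthesis prompt might look like wrapped in code, using the OpenAI Python client. The prompt wording and model name are placeholders to adapt, not a recommendation, and nothing here comes from Shore’s tests.

```python
# Minimal sketch: one reusable synthesis prompt, wrapped so the same
# template can be re-run against each new project's transcripts.
# Assumes the openai client (>= 1.0) and OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

SYNTHESIS_PROMPT = (
    "You are a Jobs to be Done researcher. From the interview transcript "
    "provided, list: (1) the functional, social, and emotional jobs you see, "
    "(2) verbatim quotes supporting each job, and (3) open questions worth "
    "probing in a live follow-up conversation."
)

def synthesize(transcript: str, model: str = "gpt-4o") -> str:
    """Run the stock synthesis prompt against a single transcript."""
    response = client.chat.completions.create(
        model=model,  # placeholder; use whatever model you have validated
        messages=[
            {"role": "system", "content": SYNTHESIS_PROMPT},
            {"role": "user", "content": transcript},
        ],
    )
    return response.choices[0].message.content
```

The wrapper exists to support the workflow above: refine SYNTHESIS_PROMPT against a past project until the output matches what your team actually found, then reuse it.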

Defend your budget

Using AI for customer research will save you time and money, but it is not free. It’s also not just the cost of the subscription or license for your chosen tool(s).  

Remember the 80% of insights that AI surfaced in the JTBD Untangled video?  The other 20% of insights came solely from in-person conversations but comprised almost 100% of the insights that inspired innovative products and services.

AI can only tell you what everyone already knows. You need to discover what no one knows, but everyone feels.  That still takes time, money, and the ability to connect with humans.

Run small experiments before making big promises

People react to change differently. Some will love the idea of using AI for customer research, while others will resist it. Everyone, however, will pounce on any evidence that they’re right. So be prepared. Take advantage of free trials to play with tools. Test them on friends, family, and colleagues. Then under-promise and over-deliver.

AI is a starting point.  It is not the ending point. 

I’m curious, have you tried using AI for customer research?  What tools have you tried? Which ones do you recommend?

Image credit: Unsplash


Value Doesn’t Disappear

It Shifts From One Place to Another


GUEST POST from Greg Satell

A few years ago, I published an article about no-code software platforms, which was very well received. Before long, however, I began to get angry — and sometimes downright nasty — comments from software engineers who were horrified by the notion that you can produce software without actually understanding the code behind it.

Of course, no-code platforms don’t obviate the need for software engineers, but rather automate basic tasks so that amateurs can design applications by themselves. These platforms are, necessarily, limited but can increase productivity dramatically and help line managers customize technology to fit the task at hand.

Similarly, when Fortran, the first real computer language, was invented, many who wrote machine code objected, much like the software engineers did to my article. Yet Fortran didn’t destroy computer programming; it democratized and expanded it. The truth is that value never disappears. It just shifts to another place, and that’s what we need to learn to focus on.

Why Robots Aren’t Taking Our Jobs

Ever since the financial crisis we’ve been hearing about robots taking our jobs. Yet just the opposite seems to be happening. In fact, we increasingly find ourselves in a labor shortage. Most tellingly, the shortage is especially acute in manufacturing, where automation is most pervasive. So what’s going on?

The fact is that automation doesn’t actually replace jobs, it replaces tasks. To understand how this works, think about the last time you walked into a highly automated Apple store, which actually employs more people than a typical retail location of the same size. They aren’t there to ring up your purchase any faster, but to do all the things that a machine can’t do, like answer your questions and solve your problems.

A few years ago I came across an even more stark example when I asked Vijay Mehta, Chief Innovation Officer for Consumer Information Services at Experian, about the effect that shifting to the cloud had on his firm’s business. The first-order effect was simple: the company needed far fewer technicians to manage its infrastructure, and those people could easily have been laid off.

Yet they weren’t. Instead, Experian shifted much of that talent and expertise to focus on creating new services for its customers. One of these, a cloud-enabled “data on demand” platform called Ascend, has since become one of the $4 billion company’s most profitable products.

Now think of what would have happened if Experian had merely seen cloud technology as an opportunity to cut costs. Sure, it would have fattened its profit margins temporarily, but as its competitors moved to the cloud, that advantage would soon have eroded and, without new products, its business would have declined.

The Outsourcing Dilemma

Another source of disruption in the job market has been outsourcing. While no one seemed to notice when large multinational corporations were outsourcing blue-collar jobs to low-cost countries, now so-called “gig economy” sites like Upwork and Fiverr are doing the same thing for white-collar professionals like graphic designers and web developers.

So you would expect to see a high degree of unemployment in those job categories, right? Actually, no. The Bureau of Labor Statistics expects demand for graphic designers to increase 4% by 2026 and demand for web developers to increase 15%. The site Mashable recently named web development one of 8 skills you need to get hired in today’s economy.

It’s not hard to see why. While it is true that a skilled professional in a low-cost country can do small projects of the same caliber as those in high-cost countries, those tasks do not constitute a whole job. For large, important projects, professionals must collaborate closely to solve complex problems. It’s hard to do that through text messages on a website.

So while it’s true that many tasks are being outsourced, the number of jobs has actually increased. Just like with automation, outsourcing doesn’t make value disappear, but shifts it somewhere else.

The Social Impact

None of this is to say that the effects of technology and globalization haven’t been real. While it’s fine to speak analytically about value shifting here and there, if a task that you spent years learning to do well becomes devalued, you take it hard. Economists have also found evidence that disruptions in the job market have contributed to political polarization.

The most obvious thing to do is retrain workers who have been displaced, but it turns out that’s not so simple. In Janesville, a book chronicling a small town’s struggle to recover from the closing of a GM plant, author Amy Goldstein found that the workers who sought retraining actually did worse than those who didn’t.

When someone loses their job, they don’t need training. They need another job, and removing themselves from the job market to take training courses can have serious costs. Work relationships begin to decay, and there is no guarantee that the new skills they learn will be in any more demand than the old ones they already had.

In fact, Peter Cappelli at the Wharton School argues that the entire notion of a skills gap in America is largely a myth. One reason there is such a mismatch between the rhetoric about skills and the data is that the most effective training often comes on the job, from an employer. It is augmenting skills, not replacing them, that creates value.

At the same time, increased complexity in the economy is making collaboration more important, so often the most important skills workers need to learn are soft skills, like writing, listening and being a better team player.

You Can’t Compete With A Robot By Acting Like One

The future is always hard to predict. While it was easy to see that Amazon posed a real problem for large chain bookstores like Barnes & Noble and Borders, it was much less obvious that small independent bookstores would thrive. In much the same way, few saw that, ten years after the launch of the Kindle, paper books would surge amid a decline in e-books.

The one overriding trend over the past 50 years or so is that the future is always more human. In Dan Schawbel’s recent book, Back to Human, the author finds that the antidote for our overly automated age is deeper personal relationships. Things like trust, empathy and caring can’t be automated or outsourced.

There are some things a machine will never do. It will never strike out in a little league game, have its heart broken or see its child born. That makes it hard — impossible really — for a machine ever to work effectively with humans as a real person would. The work of humans is increasingly to work with other humans to design work for machines.

That’s why perhaps the biggest shift in value is from cognitive to social skills. The high-paying jobs today have less to do with the ability to retain facts or manipulate numbers (we now use computers for those things) and more to do with deep collaboration, teamwork and emotional intelligence.

So while even the most technically inept line manager can now easily produce an application that once would have required a highly skilled software engineer, designing the next generation of technology will require engineers and line managers to work more closely together.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay







Don’t Blame Technology When Innovation Goes Wrong


GUEST POST from Greg Satell

When I speak at conferences, I’ve noticed that people are increasingly asking me about the unintended consequences of technological advance. As our technology becomes almost unimaginably powerful, there is growing apprehension and fear that we will be unable to control what we create.

This, of course, isn’t anything new. When trains first appeared, many worried that human bodies would melt at the high speeds. In ancient Greece, Plato argued that the invention of writing would destroy conversation. None of these things ever came to pass, of course, but clearly technology has changed the world for good and bad.

The truth is that we can’t fully control technology any more than we can fully control nature or each other. The emergence of significant new technologies unleashes forces we can’t hope to understand at the outset and struggle to deal with long after. Yet the most significant issues are likely to be social in nature, and those are the ones we desperately need to focus on.

The Frankenstein Archetype

It’s no accident that Mary Shelley’s novel Frankenstein was published at roughly the same time as the Luddite movement was in full swing. As cottage industries were replaced by smoke belching factories, the sense that man’s creations could turn against him was palpable and the gruesome tale, considered by many to be the first true work of science fiction, touched a nerve.

In many ways, trepidation about technology can be healthy. Concern about industrialization led to social policies that helped mitigate its worst effects. In much the same way, scientists concerned about the threat of nuclear Armageddon did much to help establish policies that would prevent it.

Yet the initial fears almost always prove to be unfounded. While the Luddites burned mills and smashed machines to prevent their economic disenfranchisement, the industrial age led to a rise in the living standards of working people. In a similar vein, more advanced weapons have coincided with a reduction of violent deaths throughout history.

On the other hand, the most challenging aspects of technological advance are often things that we do not expect. While industrialization led to rising incomes, it also led to climate change, something neither the fears of the Luddites nor the creative brilliance of Shelley could have ever conceived of.

The New Frankensteins

Today, the technologies we create will shape the world as never before. Artificially intelligent systems are automating not only physical, but cognitive labor. Gene editing techniques, such as CRISPR, are enabling us to re-engineer life itself. Digital and social media have reshaped human discourse.

So it’s not surprising that there are newfound fears about where it’s all going. A study at Oxford found that 47% of US jobs are at risk of being automated over the next 20 years. The speed and ease of gene editing raises the possibility of biohackers wreaking havoc and the rise of social media has coincided with a disturbing rise of authoritarianism around the globe.

Yet I suspect these fears are mostly misplaced. Instead of massive unemployment, we find ourselves in a labor shortage. While it is true that biohacking is a real possibility, our increased ability to cure disease will most probably greatly exceed the threat. The increased velocity of information also allows good ideas to travel faster and farther.

On the other hand, these technologies will undoubtedly unleash new challenges that we are only beginning to understand. Artificial intelligence raises disturbing questions about what it means to be human, just as the power of genomics will force us to grapple with questions about the nature of the individual and social media forces us to define the meaning of truth.

Revealing And Building

Clearly, Shelley and the Luddites were very different. While Shelley was an aristocratic intellectual, the Luddites were working-class weavers. Yet both saw the rise of technology as the end of a way of life and, in that way, both were right. Technology, if nothing else, forces us to adapt, often in ways we don’t expect.

In his 1954 essay “The Question Concerning Technology,” the German philosopher Martin Heidegger sheds some light on these issues. He described technology as akin to art, in that it reveals truths about the nature of the world, brings them forth and puts them to some specific use. In the process, human nature and its capacity for good and evil are also revealed.

He gives the example of a hydroelectric dam, which reveals the energy of a river and puts it to use making electricity. In much the same sense, Mark Zuckerberg did not “build” a social network at Facebook, but took natural human tendencies and channeled them in a particular way. After all, we go online not for bits or electrons, but to connect with each other.

Yet in another essay, “Building Dwelling Thinking,” he explains that building also plays an important role, because to build for the world, we first must understand what it means to live in it. The revealing power of technology forces us to rethink old truths and re-imagine new societal norms. That, more than anything else, is where the challenges lie.

Learning To Ask The Hard Questions

We are now nearing the end of the digital age and entering a new era of innovation which will likely be more impactful than anything we’ve seen since the rise of electricity and internal combustion a century ago. This, in turn, will initiate a new cycle of revealing and building that will be as challenging as anything humanity has ever faced.

So while it is unlikely that we will ever face a robot uprising, artificial intelligence does pose a number of troubling questions. Should safety systems in a car prioritize the life of a passenger or a pedestrian? Who is accountable for the decisions an automated system makes? We worry about who is teaching our children, but scarcely stop to think about who is training our algorithms.

These are all questions that need answers within the next decade. Beyond that, we will have further quandaries to unravel, such as what is the nature of work and how do we value it? How should we deal with the rising inequality that automation creates? Who should benefit from technological breakthroughs?

The unintended consequences of technology have less to do with the relationship between us and our inventions than with the relationships between us and each other. Every technological shift brings about a societal shift that reshapes values and norms. Clearly, we are not helpless, but we are responsible. These are very difficult questions and we need to start asking them. Only then can we begin the cycle of revealing truths and building a better future.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay







Humans Are Not as Different from AI as We Think


GUEST POST from Geoffrey A. Moore

By now you have heard that GenAI’s natural language conversational abilities are anchored in what one wag has termed “auto-correct on steroids.” That is, by ingesting as much text as it can possibly hoover up, and by calculating the probability that any given sequence of words will be followed by a specific next word, it mimics human speech in a truly remarkable way. But, do you know why that is so?

The answer is, because that is exactly what we humans do as well.

Think about how you converse. Where do your words come from? Oh, when you are being deliberate, you can indeed choose your words, but most of the time that is not what you are doing. Instead, you are riding a conversational impulse and just going with the flow. If you had to inspect every word before you said it, you could not possibly converse. Indeed, you spout entire paragraphs that are largely pre-constructed, something like the shticks that comedians perform.

Of course, sometimes you really are being more deliberate, especially when you are working out an idea and choosing your words carefully. But have you ever wondered where those candidate words you are choosing come from? They come from your very own LLM (Large Language Model) even though, compared to ChatGPT’s, it probably should be called a TWLM (Teeny Weeny Language Model).
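To make that mechanic concrete, here is a genuinely teeny-weeny language model: a toy bigram predictor that chooses the next word purely from counts. Real LLMs condition on long contexts with neural networks rather than raw counts, but the "predict the next word from what came before" objective is the same in spirit.

```python
# Toy next-word predictor: count which word follows which in a corpus,
# then predict the most frequent follower.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

followers = defaultdict(Counter)
for word, next_word in zip(corpus, corpus[1:]):
    followers[word][next_word] += 1

def predict(word: str) -> str:
    """Return the most frequently observed next word."""
    counts = followers.get(word)
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(predict("the"))  # 'cat' (ties are broken by first appearance)
```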

The point is, for most of our conversational time, we are in the realm of rhetoric, not logic. We are using words to express our feelings and to influence our listeners. We’re not arguing before the Supreme Court (although even there we would be drawing on many of the same skills). Rhetoric is more like an athletic performance than a logical analysis would be. You stay in the moment, read and react, and rely heavily on instinct—there just isn’t time for anything else.

So, if all this is the case, then how are we not like GenAI? The answer here is pretty straightforward as well. We use concepts. It doesn’t.

Concepts are a, well, a pretty abstract concept, so what are we really talking about here? Concepts start with nouns. Every noun we use represents a body of forces that in some way is relevant to life in this world. Water makes us wet. It helps us clean things. It relieves thirst. It will drown a mammal but keep a fish alive. We know a lot about water. Same thing with rock, paper, and scissors. Same thing with cars, clothes, and cash. Same thing with love, languor, and loneliness.

All of our knowledge of the world aggregates around nouns and noun-like phrases. To these, we attach verbs and verb-like phrases that show how these forces act out in the world and what changes they create. And we add modifiers to tease out the nuances and differences among similar forces acting in similar ways. Altogether, we are creating ideas—concepts—which we can link up in increasingly complex structures through the fourth and final word type, conjunctions.

Now, from the time you were an infant, your brain has been working out all the permutations you could imagine that arise from combining two or more forces. It might have begun with you discovering what happens when you put your finger in your eye, or when you burp, or when your mother smiles at you. Anyway, over the years you have developed a remarkable inventory of what is usually called common sense, as in be careful not to touch a hot stove, or chew with your mouth closed, or don’t accept rides from strangers.

The point is you have the ability to take any two nouns at random and imagine how they might interact with one another, and from that effort, you can draw practical conclusions about experiences you have never actually undergone. You can imagine exception conditions—you can touch a hot stove if you are wearing an oven mitt, you can chew bubble gum at a baseball game with your mouth open, and you can use Uber.

You may not think this is amazing, but I assure you that every AI scientist does. That’s because none of them have come close (as yet) to duplicating what you do automatically. GenAI doesn’t even try. Indeed, its crowning success is due directly to the fact that it doesn’t even try. By contrast, all the work that has gone into GOFAI (Good Old-Fashioned AI) has been devoted precisely to the task of conceptualizing, typically as a prelude to planning and then acting, and to date, it has come up painfully short.

So, yes GenAI is amazing. But so are you.

That’s what I think. What do you think?

Image Credit: Pixabay







Technical, Market and Emotional Risks


GUEST POST from Mike Shipulski

Technical risk – Will it work?
Market risk – Will they buy it?
Emotional risk – Will people laugh at your crazy idea?

Technical risk – Test it in the lab.
Market risk – Test it with the customer.
Emotional risk – Try it with a friend.

Technical risk – Define the right test.
Market risk – Define the right customer.
Emotional risk – Define the right friend.

Technical risk – Define the minimum acceptable performance criteria.
Market risk – Define the minimum acceptable response from the customer.
Emotional risk – Define the minimum acceptable criticism from your friend.

Technical risk – Can you manufacture it?
Market risk – Can you sell it?
Emotional risk – Can you act on your crazy idea?

Technical risk – How sure are you that you can manufacture it?
Market risk – How sure are you that you can sell it?
Emotional risk – How sure are you that you can act on your crazy idea?

Technical risk – When the VP says it can’t be manufactured, what do you do?
Market risk – When the VP says it can’t be sold, what do you do?
Emotional risk – When the VP says your idea is too crazy, what do you do?

Technical risk – When you knew the technical risk was too high, what did you do?
Market risk – When you knew the market risk was too high, what did you do?
Emotional risk – When you knew someone’s emotional risk was going to be too high, what did you do?

Technical risk – Can you teach others to reduce technical risk? How about increasing it?
Market risk – Can you teach others to reduce market risk? How about increasing it?
Emotional risk – Can you teach others to reduce emotional risk? How about increasing it?

Technical risk – What does it look like when technical risk is too low? And the consequences?
Market risk – What does it look like when market risk is too low? And the consequences?
Emotional risk – What does it look like when emotional risk is too low? And the consequences?

We are most aware of technical risk and spend most of our time trying to reduce it. We have the mindset and toolset to reduce it. We know how to do it. But we were not taught to recognize when technical risk is too low. And if we do recognize it’s too low, we don’t know how to articulate the negative consequences. With all this said, market risk is far more dangerous.

We’re unfamiliar with the toolset and mindset to reduce market risk. Where we can change the design, run the test, and reduce technical risk, market risk is not like that. It’s difficult to understand what drives the customers’ buying decision and it’s difficult to directly (and quickly) change their buying decision. In short, it’s difficult to know what to change so they make a different buying decision. And if they don’t buy, you don’t sell. And that’s a big problem. With that said, emotional risk is far more debilitating.

When a culture creates high emotional risk, people keep their best ideas to themselves. They don’t want to be laughed at or ridiculed, so their best ideas don’t see the light of day. The result is a collection of wonderful ideas known only to the underground Trust Network. A culture that creates high emotional risk has insufficient technical and market risk because everyone is afraid of the consequences of doing something new and different. The result – the company with high emotional risk follows the same old script and does what it did last time. And this works well, right up until it doesn’t.

Here’s a three-pronged approach that may help.

  1. Continue to reduce technical risk.
  2. Learn to reduce market risk early in a project.
  3. And behave in a way that reduces emotional risk so you’ll have the opportunity to reduce technical and market risk.

Image credit: Unsplash







Why Materials Science is the Most Important Technology of This Decade


GUEST POST from Greg Satell

Think of just about any major challenge we will face over the next decade and materials are at the center of it. To build a new clean energy future, we need more efficient solar panels, wind turbines and batteries. Manufacturers need new materials to create more advanced products. We also need to replace materials subject to supply disruptions, like rare earth elements.

Traditionally, developing new materials has been a slow, painstaking process. To find the properties they’re looking for, researchers would often have to test hundreds — or even thousands — of materials one by one. That made materials research prohibitively expensive for most industries.

Yet today, we’re in the midst of a materials revolution. Scientists are using powerful simulation techniques, as well as machine learning algorithms, to propel innovation forward at blazing speed and even point them toward possibilities they had never considered. Over the next decade, the rapid advancement in materials science will have a massive impact.

The Seeds Of The Materials Revolution

In 2005, Gerd Ceder was a Professor of Materials Science at MIT working on computational methods to predict new materials. Traditionally, materials scientists had worked mostly through trial and error, trying to identify materials with properties that would be commercially valuable. Ceder was working to automate that process using sophisticated computer models that simulate the physics of materials.

Things took a turn when an executive at Duracell, then a division of Procter & Gamble, asked if Ceder could use the methods he was developing to explore possibilities on a large scale to discover and design new materials for alkaline batteries. So he put together a team of a half dozen “young guns” and formed a company to execute the vision.

The first project went well and the team was able to patent a number of new materials that hadn’t existed before. Then another company came calling, which led to another project and more after that. Yet despite the initial success, Ceder began to realize that there was a problem. Although the team’s projects were successful, the overall impact was limited.

“We began to realize we’re generating all this valuable data and it’s being locked away in corporate vaults. We wanted to do something in a more public way,” Ceder told me. As luck would have it, it was just then that one of the team members was leaving MIT for family reasons and that chance event would propel the project to new heights.

The Birth Of The Materials Project

In 2008, Kristin Persson’s husband took a job in California, so she left Ceder’s group at MIT and joined Lawrence Berkeley National Laboratory (LBL) as a research scientist. Yet rather than mourn the loss of a key colleague, the team saw the move as an opportunity to shift their work into high gear.

“At MIT, we pretty much hacked everything together,” Ceder explains. “It all worked, but it was a bit buggy and would have never scaled beyond our small team. At a National Lab, however, they had the resources to build it out properly and create a platform that could really drive things forward.” So Persson hit the ground running, got a small grant and stitched together a team to combine the materials work with the high performance supercomputing done at the lab.

“At LBL there were world-class computing people,” Persson told me. “So we began an active collaboration between people who were on the cutting edge of computer science, but didn’t know anything about materials, and our little band of ‘materials hackers’. It was that interdisciplinary collaboration that was really the secret sauce and helped us gain ground quickly.”

Traditionally, materials science could take a class of alloys for use in, say, the auto industry and calculate things like weight versus tensile strength. There might be a few hundred of those materials in the literature. But with the system they built at LBL, they could calculate thousands. That meant engineers could identify candidate materials exponentially faster, test them in the real world and create better products.

Yet again, they felt that the impact of their work was limited. After all, not many engineers from private industry spend time at National Laboratories. “Our earlier work convinced us that we were on the cusp of something much bigger,” Persson remembers. That’s what led them to create The Materials Project, a massive online database that anyone in the world can access.

A Massive Materials Initiative

The Materials Project went online early in 2011 and drew a few thousand people. From there it grew like a virus and today has more than 50,000 users, a number that grows by about 50-100 per day. Yet its impact has become even greater than that. The success of the project caught the attention of Tom Kalil, then Deputy Director at the White House Office of Science and Technology Policy, who saw the potential to create a much wider initiative.

In the summer of 2011, the Obama administration announced the Materials Genome Initiative (MGI) to coordinate work across agencies such as the Department of Energy, NASA, the National Institute of Standards and Technology and others to expand and complement the work being done at LBL. These efforts, taken together, are creating a revolution in materials science, and the impacts are just beginning to be felt by private industry.

The MGI is based on three basic pillars. The first is computational approaches that can accurately predict materials properties, like the ones Gerd Ceder’s team pioneered. The second is high-throughput experimentation to expand materials libraries, and the third is programs that mine materials data from the existing scientific literature and promote the sharing of that data.

For example, one project applied machine learning algorithms to experimental materials data to identify forms of a super-strong alloy called metallic glass. While scientists have long recognized its value as an alternative to steel and as a protective coating, it is so rare that relatively few forms of it were known. Using the new methods, however, researchers were able to perform the work 200 times faster and identify 20,000 candidate forms in a single year!
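A stripped-down sketch of that screening pattern looks something like the following. The data and features here are synthetic stand-ins (real pipelines use curated databases and physically meaningful descriptors), but the shape of the workflow, fitting a model on known materials and then ranking a huge candidate pool, is the point.

```python
# Sketch of ML-driven screening: train on measured materials, then rank a
# large pool of untested candidates by predicted property so lab work can
# focus on the most promising few percent. All data below is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Stand-in descriptors (e.g., composition-derived features) and a toy property
X_known = rng.random((500, 6))
y_known = X_known @ rng.random(6) + 0.1 * rng.standard_normal(500)

model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(X_known, y_known)

# Score a large pool of hypothetical candidates and keep the top 1%
X_candidates = rng.random((100_000, 6))
scores = model.predict(X_candidates)
shortlist = np.argsort(scores)[::-1][: len(scores) // 100]
print(f"Shortlisted {len(shortlist)} of {len(X_candidates)} candidates")
```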

Creating A True Materials Revolution

Thomas Edison famously remarked that if he tried 10,000 experiments that failed, he didn’t actually consider it a failure, but found 10,000 things that didn’t work. That’s true, but it’s also incredibly tedious, time consuming and expensive. The new methods, however, have the potential to automate those 10,000 failures, which is creating a revolution in materials science.

For example, at the Joint Center for Energy Storage Research (JCESR), a US government initiative to create the next generation of advanced batteries, the major challenge now is not so much to identify potential battery chemistries, but that the materials to make those chemistries work don’t exist yet. Historically, that would have been an insurmountable problem, but not anymore.

“Using high performance computing simulations, materials genomes and other techniques that have been developed over the last decade or so, we can often eliminate as much as 99% of the possibilities that won’t work,” George Crabtree, Director at JCESR told me. “That means we can focus our efforts on the remaining 1% that may have serious potential, and we can advance much farther, much faster for far less money.”

The work is also quickly making an impact on industry. Greg Mulholland, President of Citrine Informatics, a firm that applies machine learning to materials development, told me, “We’ve seen a huge broadening of companies and industries that are contacting us and a new sense of urgency. For companies that historically invested in materials research, they want everything yesterday. For others that haven’t, they are racing to get up to speed.”

Jim Warren, a Director at the Materials Genome Initiative, thinks that is just the start. “When you can discover new materials for hundreds of thousands or millions of dollars, rather than tens or hundreds of millions, you are going to see a vast expansion of use cases and industries that benefit,” he told me.

As we have learned from the digital revolution, any time you get a 10x improvement in efficiency, you end up with a transformative commercial impact. Just about everybody I’ve talked to working in materials thinks that pace of advancement is easily achievable over the next decade. Welcome to the materials revolution.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Dall-E on Bing







Can You Become the Earth’s Most Customer-Centric Company?


GUEST POST from Shep Hyken

If I asked 10 people who they thought could be planet Earth’s most customer-centric company, I bet a majority would have the same answer. I’ll share that company’s name at the end of this article. For now, you can guess.

Cindy, from my office, had a customer service issue. Here are the steps she took to resolve the problem:

  1. She went to the company’s website and clicked on customer support.
  2. She answered a few questions, and once the technology identified her problem, a chatbot popped up.
  3. After interacting with the chatbot briefly, the bot wrote, “Let me transfer you to an agent,” moving from a chatbot to live chat.
  4. At some point, the agent suggested getting on the phone, and rather than have Cindy call, she asked for Cindy’s number. Once Cindy shared it, the phone rang almost instantly.
  5. From there, the agent carried out a conversation that eventually resolved Cindy’s problem.

I asked Cindy how she liked that experience, and she quickly answered, “Amazing!”

Just a few minutes later, Cindy received a short survey asking for her feedback with the message:

Your feedback is helping us build Earth’s Most Customer-Centric Company.

With that in mind, let’s look at some lessons we can learn from the company that aspires to be the most customer-centric company on the planet:

  1. Digital First – The company made it easy to start the customer support process with a digital self-service solution. While there was a live agent option, it wasn’t presented until later. Cindy had to answer a few questions and click a few boxes before moving on. And this part is important. The process was easy and intuitive. She was digitally “hand-held” through the process, which included the chatbot.
  2. The Human Backup – The chatbot was programmed to understand when it wasn’t getting Cindy’s answer, and it immediately transferred her to a live chat with a customer support agent. Eventually, the live online chat turned into a phone call when the agent wanted more details and knew it would be easier to talk than text. Rather than Cindy calling the company, she simply had to enter her phone number into the chat, and within seconds, the phone rang, and she was talking to the customer support agent.
  3. A Seamless Omni-Channel Experience – The definition of an omni-channel experience is a continuous conversation moving from one form of communication to the next. Cindy went from answering questions on the website to a chatbot, to live chat, and then to the phone. All was seamless, and the “conversation” continued rather than forcing Cindy to tell her story repeatedly. The agent on the phone picked up where the chat ended and quickly solved her problem. This is the way omni-channel is supposed to work (a rough sketch of this handoff logic follows the list).
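For the technically curious, here is a rough sketch of the handoff logic the list above describes. It is a guess at the general pattern, not Amazon’s actual system: the session object carries the transcript forward, so each escalation step inherits the full context.

```python
# Rough sketch (not Amazon's system): a support session that escalates
# chatbot -> live chat -> phone while carrying its transcript along,
# so the customer never has to repeat their story.
from dataclasses import dataclass, field

CHANNELS = ["chatbot", "live_chat", "phone"]

@dataclass
class SupportSession:
    customer: str
    channel: str = "chatbot"
    transcript: list[str] = field(default_factory=list)

    def log(self, speaker: str, text: str) -> None:
        self.transcript.append(f"[{self.channel}] {speaker}: {text}")

    def escalate(self) -> None:
        """Move to the next channel; the transcript travels along."""
        position = CHANNELS.index(self.channel) + 1
        if position >= len(CHANNELS):
            raise RuntimeError("already at the final escalation step")
        self.channel = CHANNELS[position]

session = SupportSession("Cindy")
session.log("bot", "Which product is this about?")
session.escalate()  # bot could not resolve the issue -> live chat
session.log("agent", "I can see the bot's notes; let's continue.")
session.escalate()  # easier to talk than type -> phone callback
session.log("agent", "Calling you now at the number you shared.")
print("\n".join(session.transcript))
```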

This is a perfect example of the modern customer support experience. And did you guess what company this article is about? If you said Amazon, you’re absolutely right!

Image Credits: Shep Hyken, Pexels







Powering the Google Innovation Machine with the World’s Top Minds


GUEST POST from Greg Satell

It’s no secret that Google is one of the most innovative companies on the planet. Besides pioneering and then dominating the search industry, it has also become a leader in developing futuristic technologies such as artificial intelligence, driverless cars and quantum computing. It has even launched a life science company.

What makes Google so successful is not one particular process, but how it integrates multiple strategies into a seamless whole. For example, Google Brain started out as a 20% time project, then migrated out to its “X” Division to accelerate development and finally came back to the mothership, where it now collaborates closely with engineering teams to build new products.

Yet perhaps its most important strategy, in fact the one that makes much of the rest possible, is how it partners with top scientists in the academic world. This is no “quick hit,” but a well thought out, long-term game plan designed to establish deep relationships based on cutting edge science and embed that knowledge deeply into just about everything Google does.

Building Deep Relationships with the Academic Community

“We design a variety of programs that widen and deepen our relationships with academic scientists,” Maggie Johnson, who heads up University Relations at Google, told me. In fact, there are three distinct ways that Google engages directly with scientists beyond the typical research partnerships with universities.

The first is its Faculty Research Awards program, which provides small one-year grants, usually to graduate students or postdocs whose work may be of interest to Google. These unrestricted gifts, whose recipients are highly encouraged to publish their work publicly, allow the company to develop relationships with young talent at the beginning of their careers.

While anybody can apply for a Faculty Research Award, Focused Research Awards are only available by invitation. Typically, these are awarded to more senior researchers that Google has already had some contact with and last two to three years. However, they are also unrestricted grants that researchers can use as they see fit.

The third way that Google engages with scientists is proactive outreach to leaders in a particular field of interest. Geoffrey Hinton, for example, is a pioneer in neural networks and widely considered one of the top AI experts in the world. He splits his time between his faculty position at the University of Toronto and working on Google Brain.

“Spinning In” World Class Scientists

The academic research programs provide many benefits to Google as a company. They give the company access to the most promising students for recruiting, allow it to help shape university curriculums and keep it connected to breakthrough research in important fields. However, the most direct benefits probably come from inviting researchers to spend a sabbatical year at Google through what it calls its Visiting Faculty Program.

For example, Andrew Ng, a top AI researcher, decided to spend a year working at Google and quickly formed a close working relationship with two of the company’s brightest minds, Greg Corrado and Jeff Dean, who were interested in what was then a new brand of artificial intelligence called deep learning. Their collaboration became the Google Brain project.

The Visiting Faculty Program touches on everything Google does. Recent visitors include John Canny of UC Berkeley, who helped with the development of TPUs, chips specialized to run Google’s AI algorithms, and Michael Rabin, a Turing Award-winning mathematician who was working on auction algorithms. For every Google priority, at least one of the world’s top minds is working with the company on it.

What makes the sabbatical program unusual is how deeply it is integrated into everyday work at the company. “In most cases, these scientists have already been working with our teams through one of our other programs, so the groundwork for a productive relationship has already been laid,” Maggie Johnson told me.

Developing “Win-Win” Relationships

One of the things that makes Google’s outreach to researchers work so well is that it is truly a win-win arrangement. Yes, the company gets top experts in important fields to work on its problems, but the researchers themselves get to work with unparalleled tools and data sets. They also get a much better sense of what problems are considered important in a commercial environment.

Katya Scheinberg, a Professor at Lehigh University who focuses on optimization problems, found working at Google to be a logical extension of her earlier collaboration with the company. “I had been working on large-scale machine learning problems and had some connections with Google scientists. So spending part of my sabbatical year at the company seemed fairly natural. I learned a lot about the practical problems that private sector researchers are working on,” she told me.

Since leaving Google, she’s found that her time at the company has shifted the focus of her research. “Working at Google got me interested in some different problems and alerted me to the possibility of applying some approaches I had worked on before to different fields of application.”

Sometimes scholars stay longer and can have a transformative impact on the company. As noted above, Andrew Ng spent several years at Google. Andrew Moore, a renowned computer scientist and former dean of Carnegie Mellon’s School of Computer Science, took a leave of absence from his university to set up Google’s research center in Pittsburgh. Lasting relationships like these are rare in industry, but incredibly valuable.

Connecting to Discovery Is Something Anyone Can Do, But You Have to Make the Effort

Clearly, Google is an unusual company. There are not many places that can attract the type of talent it can. However, just about any business can, for example, support the work of a young graduate student or postdoc at a local university. In much the same way, inviting even a senior researcher to come for a short time is not prohibitively expensive.

Innovation is never a single event, but a process of discovery, engineering and transformation. It is by connecting to discovery that businesses can truly see into the future and develop the next generation of breakthrough products. Unfortunately, few businesses realize the importance of connecting with the academic world.

Make no mistake, if you don’t discover, you won’t invent and if you don’t invent you will be disrupted eventually. It’s just a matter of time. However, you can’t just show up one day and decide you want to work with the world’s greatest minds. Even Google, with all its resources and acumen, has had to work really hard at it.

It’s made these investments in time, focus and resources because it understands that the search business, as great as it is, won’t deliver outsized profits forever. Today, we no longer have the luxury to manage for stability, but must prepare for disruption.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credit: Dall-E on Bing
