Category Archives: Technology

Innovation and the Silicon Valley Bank Collapse

Why It’s Bad News and Good News for Corporate Innovation

GUEST POST from Robyn Bolton

Last week, as news of Silicon Valley Bank’s losses and eventual collapse took over the news cycle, attention understandably turned to the devastating impact on the startup ecosystem.

Prospects brightened a bit on Monday with news that the federal government would make all depositors whole. Startups, VCs, and others in the ecosystem would be able to continue operations and make payroll, and SVB’s collapse would be just another cautionary tale.

But the impact of SVB’s collapse isn’t confined to the startup ecosystem or the banking industry.

Its impact struck (or should have struck) fear and excitement into the heart of every executive tasked with growing their business.

Your Portfolio’s Risk Profile Just Changed

The early 2000s were the heyday of innovation teams and skunkworks, but as these internal efforts struggled to produce significant results, companies started looking beyond their walls for innovation. Thus began the era of Corporate Venture Capital (CVC).

Innovation, companies realized, didn’t need to be incubated. It could be purchased.

Often at a lower price than the cost of an in-house team.

And it felt less risky. After all, other companies were doing it and it was a hot topic in the business press. Plus, making investments felt much more familiar and comfortable than running small-scale experiments and questioning the status quo.

Between 2010 and 2020, the number of corporate investors increased more than 6x to over 4,000, investment ballooned to nearly $170B in 2021 (up 142% from 2020), and 1,317 CVC-backed deals were closed in Q1 of 2020.

But, with SVB’s collapse, the perceived risk of startup investing suddenly changed.

Now startups feel riskier. Venture Capital firms are pulling back, and traditional banks are prohibited from stepping forward to provide the venture debt many startups rely on. While some see this as an opportunity for CVC to step up, that optimism ignores the fact that companies are, by nature and necessity, risk averse and more likely to follow the herd than lead it.

Why This is Bad News

As CVC, Open Innovation, and joint ventures became the preferred path to innovation and growth, internal innovation shifted to events – hackathons, shark tanks, and Silicon Valley field trips.

Employees were given the “freedom” to innovate within a set time and maybe even some training on tools like Design Thinking and Lean Startup. But behind closed doors, executives spoke of these events as employee retention efforts, not serious efforts to grow the business or advance critical strategies.

Employees eventually saw these events for what they were – innovation theater, activities designed to appease them and create feel-good stories for investors. In response, employees either left for places where innovation (or at least the curiosity and questions required) was welcomed, or they stayed, wiser and more cynical about management’s true intentions.

Then came the pandemic and a recession. Companies retreated further into themselves, focused more on core operations, and cut anything that wouldn’t generate financial results in 12 months or less.

Innovation muscles atrophied.

Just at the moment they need to be flexed most.

Why This is Good News

As the risk of investment in external innovation increases, companies will start looking for other ways to innovate and grow. Ways that feel less risky and give them more control.

They’ll rediscover Internal Innovation.

This is the silver lining of the dark SVB cloud – renewed investment in innovation, not as an event or activity to appease employees, but as a strategic tool critical to delivering strategic priorities and accelerating growth.

And, because this is our 2nd time around, we know it’s not about internal innovation teams OR external partners/investments. It’s about internal innovation teams AND external partners/investments.

Both are needed, and both can be successful if they:

  1. Are critical enablers of strategic priorities
  2. Pursue realistic goals (stretch, don’t splatter!)
  3. Receive the people and resources required to deliver against those goals
  4. Are empowered to choose progress over process
  5. Are supported by senior leaders with words AND actions

What To Do Now

When it comes to corporate innovation teams, many companies are starting from nothing. Some companies have files and playbooks they can dust off. A few have 1 or 2 people already working.

Whatever your starting point is, start now.

Just do me one favor. When you start pulling the team together, remember LL Cool J, “Don’t call it a comeback, I been here for years.”

Image credit: Wikimedia Commons

Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s clever innovation, but also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try to anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to catch potential problems. We often still miss stuff, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is Blind, and innovators who’ve invested themselves into a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus by recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. This said, I do believe how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project, it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have the potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do great good or cause great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But Behavioral Science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media.

There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume. It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications.

There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully or many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it’s all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, and thus potentially enable damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of ‘do no harm’ Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try to anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Air bags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the ‘Cobra Effect’. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set the now-worthless snakes free. The result was more cobras than at the original starting point. It’s amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay

The Life of a Corporate Innovator

As Told in Three Sonnets

GUEST POST from Robyn Bolton

Day 1

Oh innovation, a journey just begun

A bold quest filled with challenges, risks, and dreams,

A path of creativity, knowledge and fun,

That will bring change, growth and a brighter scene.

Do not be afraid, though unknowns abound,

For greatness starts with small unsteady steps

Take courage and embrace each change that’s found,

And trust that success will be the final event.

Remember, every challenge is a chance,

To learn, grow, and shape thy future bright,

And every obstacle a valuable dance,

That helps thee forge a path that’s just and right.

So go forth, my friend, and boldly strive,

To make innovation flourish and thrive.

The Abyss (Death and Rebirth)

Fight on corporate innovator, who art so bold

And brave despite the trials that thou hast,

Thou hast persevered through promises cold,

And fought through budget cuts that came so fast.

Thou hast not faltered, nor did thou despair,

Despite the lack of resources at thy door,

Thou hast with passion, worked beyond repair,

And shown a steel spine that’s hard to ignore.

Thou art a shining example to us all,

A beacon of hope in times that are so bleak,

Thou art a hero, standing tall and strong,

And leading us to victories that we seek.

So let us celebrate thy unwavering faith,

And honor thee, innovator of great grace.

The Triumph

My dear intrapreneur, well done,

The launch of thy innovation is a feat,

A result of years of hard work, and fun,

That sets a shining example for all to meet.

Thou hast persevered through many a trial,

With unwavering determination and drive,

And now, thy hard work doth make thee smile,

As thy business doth grow and thrive.

This triumph is a testament to thee,

Of thy creativity, passion, and might,

And serves as a reminder of what can be,

When we pour our hearts into what is right.

So let us raise a glass and celebrate,

Thy success, and the joy innovation hath created!

These sonnets were created with the help of ChatGPT

Image credit: Pixabay

Artificial Intelligence is Forcing Us to Answer Some Very Human Questions

GUEST POST from Greg Satell

Chris Dixon, who invested early in companies ranging from Warby Parker to Kickstarter, once wrote that the next big thing always starts out looking like a toy. That’s certainly true of artificial intelligence, which started out playing games like chess and Go, and facing off against humans on the game show Jeopardy!

Yet today, AI has become so pervasive we often don’t even recognize it anymore. Besides enabling us to speak to our phones and get answers back, intelligent algorithms are often working in the background, providing things like predictive maintenance for machinery and automating basic software tasks.

As the technology becomes more powerful, it’s also forcing us to ask some uncomfortable questions that were once more in the realm of science fiction or late-night dorm room discussions. When machines start doing things traditionally considered to be uniquely human, we need to reevaluate what it means to be human and what it is to be a machine.

What Is Original and Creative?

There is an old literary concept called the Infinite Monkey Theorem. The basic idea is that if you had an infinite number of monkeys pecking away at an infinite number of keyboards, they would, in time, produce the complete works of Shakespeare or Tolstoy or any other literary masterpiece.

Today, our technology is powerful enough to simulate infinite monkeys and produce something that looks a whole lot like original work. Music scholar and composer David Cope has been able to create algorithms that produce original works of music which are so good that even experts can’t tell the difference. Companies like Narrative Science are able to produce coherent documents from raw data this way.

So there’s an interesting philosophical discussion to be had about what qualifies as true creation and what’s merely curation. If an algorithm produces War and Peace randomly, does it retain the same meaning? Or is the intent of the author a crucial component of what creativity is about? Reasonable people can disagree.

However, as AI technology becomes more common and pervasive, some very practical issues are arising. For example, Amazon’s Audible unit has created a new captions feature for audio books. Publishers sued, saying it’s a violation of copyright, but Amazon claims that because the captions are created with artificial intelligence, it is essentially a new work.

When machines can create, does that qualify as original, creative intent? Under what circumstances can a work be considered new and original? We are going to have to decide.

Bias And Transparency

We generally accept that humans have biases. In fact, Wikipedia lists over 100 documented biases that affect our judgments. Marketers and salespeople try to exploit these biases to influence our decisions. At the same time, professional training is supposed to mitigate them. To make good decisions, we need to conquer our tendency for bias.

Yet however much we strive to minimize bias, we cannot eliminate it, which is why transparency is so crucial for any system to work. When a CEO is hired to run a corporation, for example, he or she can’t just make decisions willy-nilly, but is held accountable to a board of directors who represent shareholders. Records are kept and audited to ensure transparency.

Machines also have biases which are just as pervasive and difficult to root out. Amazon had to scrap an AI system that analyzed resumes because it was biased against female candidates. Google’s algorithm designed to detect hate speech was found to be racially biased. If two of the most sophisticated firms on the planet are unable to eliminate bias, what hope is there for the rest of us?

So, we need to start asking the same questions of machine-based decisions as we do of human ones. What information was used to make a decision? On what basis was a judgment made? How much oversight should be required and by whom? We all worry about who and what are influencing our children; we need to ask the same questions about our algorithms.

The Problem of Moral Agency

For centuries, philosophers have debated the issue of what constitutes a moral agent, meaning to what extent someone is able to make and be held responsible for moral judgments. For example, we generally do not consider those who are insane to be moral agents. Minors under the age of eighteen are also not fully held responsible for their actions.

Yet sometimes the issue of moral agency isn’t so clear. Consider a moral dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.

On a more basic level, we need to ask who is responsible for a decision an algorithm makes, especially since AI systems are increasingly capable of making judgments humans can’t understand. Who is culpable for an algorithmically driven decision gone bad? By what standard should they be evaluated?

Working Towards Human-Machine Coevolution

Before the industrial revolution, most people earned their living through physical labor. Much like today, tradesmen saw mechanization as a threat, and indeed it was. There’s not much work for blacksmiths or loom weavers these days. What wasn’t clear at the time was that industrialization would create a knowledge economy and demand for higher-paid cognitive work.

Today, we’re going through a similar shift, but now machines are taking over cognitive tasks. Just as the industrial revolution devalued certain skills and increased the value of others, the age of thinking machines is catalyzing a shift from cognitive skills to social skills. The future will be driven by humans collaborating with other humans to design work for machines that creates value for other humans.

Technology is, as Marshall McLuhan pointed out long ago, an extension of man. We are constantly coevolving with our creations. Value never really disappears; it just shifts to another place. So, when we use technology to automate a particular task, humans must find a way to create value elsewhere, which creates an opportunity to create new technologies.

This is how humans and machines coevolve. The dilemma that confronts us now is that when machines replace tasks that were once thought of as innately human, we must redefine ourselves and that raises thorny questions about our relationship to the moral universe. When men become gods, the only thing that remains to conquer is ourselves.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

The AI Apocalypse is Here

3 Reasons You Should Celebrate!

GUEST POST from Robyn Bolton

Whelp, the apocalypse is upon us. Again.

This time the end of the world is brought to you by AI.

How else do you explain the unending stream of headlines declaring that AI will eliminate jobs, destroy the education system, and rip the heart and soul out of culture and the arts? What more proof do you need of our imminent demise than that AI is as intelligent as a Wharton MBA?

We are doomed!

(Deep breath)

Did you get the panic out of your system? Feel better?

Good.

Because AI is also creating incredible opportunities for you, as a leader and innovator, to break through the inertia of the status quo, drive meaningful change, and create enormous value.

Here are just three of the ways AI will help you achieve your innovation goals:

1. Surface and question assumptions

Every company has assumptions that have been held and believed for so long that they hardened into fact. Questioning these assumptions is akin to heresy and done only by people without regard for job security or their professional reputation.

My favorite example of an assumption comes from the NYC public school district, whose spokesperson explained the decision to ban ChatGPT by saying, “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”

Buried just under the surface of this statement is the assumption that current teaching methods, specifically essays, do build critical thinking and problem-solving skills.

But is that true?

Or have we gotten so used to believing that essays demonstrate critical thinking and problem-solving that we’ve become blind to the fact that most students (yes, even, and maybe especially, the best students) follow the recipe that produces an essay that mirrors teachers’ expectations?

Before ChatGPT, only the bravest teachers questioned the value of essays as a barometer of critical thinking and problem-solving. After ChatGPT, scores of teachers took to TikTok and other social media platforms to share how they’re embracing the tool, using it alongside traditional tools like essays, to help their students build skills “essential for academic and lifelong success.”

2. EQ, not IQ, drives success

When all you need to do is type a question into a chatbot, and the world’s knowledge is synthesized and fed back to you in a conversational tone (or any tone you prefer), it’s easier to be the smartest person in the room.

Yes, there will always be a need for deep subject-matter experts, academics, and researchers who can push our knowledge beyond its current frontiers. But most people in most companies don’t need that depth of expertise.

Instead, you need to know enough to evaluate the options in front of you, make intelligent decisions, and communicate those decisions to others in a way that (ideally) inspires them to follow.

It’s that last step that creates an incredible opportunity for you. If facts and knowledge were all people needed to act, we would all be fit, healthy, and have absolutely no bad habits.

For example, the first question I asked ChatGPT was, “Why is it hard for big companies to innovate?” When it finished typing its 7-point answer, I nodded and thought, “Yep, that’s exactly right.”

The same thing happened when I asked the next question, “What should big companies do to be more innovative?”  I burst out laughing when the answer started with “It depends” and then nodded at the rest of its extremely accurate response.

It would be easy (and not entirely untrue) to say that this is the beginning of the end of consultants, but ChatGPT didn’t write anything that wasn’t already written in thousands of articles, books, and research papers.

Change doesn’t happen just because you know the answer. Change happens when you believe the answer and trust the people leading and walking alongside you on the journey.

3. Eliminate the Suck

Years ago, I spoke with Michael B. Johnson, Pixar’s Head of R&D, and he said something I’ll never forget: “Pain is temporary. Suck is forever.”

He meant this, of course, in the context of making a movie. There are periods of pain in movie-making – long days and nights, times when vast swaths of work get thrown out, moments of brutal and public feedback – but that pain is temporary. The movie you make is forever. And if it sucks, it sucks forever.

Sometimes the work we do is painful but temporary. Sometimes doing the work sucks, and we will need to keep doing it forever. Expense reports. Weekly update emails. Timesheets. These things suck. But they must be done.

Let AI do them and free yourself up to do things that don’t suck. Imagine the conversations you could have, ideas you could try, experiments you could run, and people you could meet if you no longer have to do things that suck.

Change is coming. And that’s good news.

Change can be scary, and it can be difficult. There will be people who lose more than they gain. But, overall, we will gain far more than we lose because of this new technology.

If you have any more doubts, I double-checked with an expert.

“ChatGPT is not a sign of the apocalypse. It is a tool created by humans to assist with language-based tasks. While artificial intelligence and other advanced technologies can bring about significant changes in the way we live and work, they do not necessarily signal the end of the world.”

ChatGPT in response to “Is ChatGPT a sign of the apocalypse?”

Image credit: Pixabay

The Coming Innovation Slowdown

GUEST POST from Greg Satell

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1919. He was not only an immensely powerful financier with access to the great industrialists of the day, but also an early adopter of new technologies. One of the first electric generators was installed at his home.

The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had. That would quickly change over the next decade when those technologies would drive a 50-year boom in productivity unlike anything the world had ever seen before.

It is very likely that we are at a similar point now. Despite significant advances in technology, productivity growth has been depressed for most of the last 50 years. Over the next ten years, however, we’re likely to see that change as nascent technologies hit their stride and create completely new industries. Here’s what you’ll need to know to compete in the new era.

1. Value Will Shift from Bits to Atoms

Over the past few decades, innovation has become almost synonymous with digital technology. Every 18 months or so, semiconductor manufacturers would bring out a new generation of processors that were twice as powerful as what came before. These, in turn, would allow entrepreneurs to imagine completely new possibilities.

However, while the digital revolution has given us snazzy new gadgets, the impact has been muted. Sure, we have hundreds of TV channels and we’re able to talk to our machines and get coherent answers back, but even at this late stage, information and communication technologies make up only about 6% of GDP in advanced countries.

At first, that sounds improbable. How could so much change produce so little effect? But think about going to a typical household in 1960, before the digital revolution took hold. You would likely see a TV, a phone, household appliances and a car in the garage. Now think of a typical household in 1910, with no electricity or running water. Even simple chores like cooking and cleaning took hours of backbreaking labor.

The truth is that much of our economy is still based on what we eat, wear and live in, which is why it’s important that the nascent technologies of today, such as synthetic biology and materials science, are rooted in the physical world. Over the next generation, we can expect innovation to shift from bits back to atoms.

2. Innovation Will Slow Down

We’ve come to take it for granted that things always accelerate because that’s what has happened for the past 30 years or so. So we’ve learned to deliberate less, to rapidly prototype and iterate and to “move fast and break things” because, during the digital revolution, that’s what you needed to do to compete effectively.

Yet microchips are a very old technology that we’ve come to understand very, very well. When a new generation of chips came off the line, they were faster and better, but worked the same way as earlier versions. That won’t be true with new computing architectures such as quantum and neuromorphic computing. We’ll have to learn how to use them first.

In other cases, such as genomics and artificial intelligence, there are serious ethical issues to consider. Under what conditions is it okay to permanently alter the germ line of a species? Who is accountable for the decisions an algorithm makes? On what basis should those decisions be made? To what extent do they need to be explainable and auditable?

Innovation is a process of discovery, engineering and transformation. At the moment, we find ourselves at the end of one transformational phase and about to enter a new one. It will take a decade or so to understand these new technologies enough to begin to accelerate again. We need to do so carefully. As we have seen over the past few years, when you move fast and break things, you run the risk of breaking something important.

3. Ecosystems Will Drive Technology

Let’s return to J.P. Morgan in 1919 and ask ourselves why electricity and internal combustion had so little impact up to that point. Automobiles and electric lights had been around a long time, but adoption takes time. It takes a while to build roads, to string wires and to train technicians to service new inventions reliably.

As economist Paul David pointed out in his classic paper, The Dynamo and the Computer, it takes time for people to learn how to use new technologies. Habits and routines need to change to take full advantage of new technologies. For example, in factories, the biggest benefit electricity provided was through enabling changes in workflow.

The biggest impacts come from secondary and tertiary technologies, such as home appliances in the case of electricity. Automobiles did more than provide transportation; they enabled a shift from corner stores to supermarkets and, eventually, shopping malls. Refrigerated railroad cars revolutionized food distribution. Supply chains were transformed. Radios, and later TV, reshaped entertainment.

Nobody, not even someone like J.P. Morgan, could have predicted all that in 1919, because it’s ecosystems, not inventions, that drive transformation, and ecosystems are non-linear. We can’t simply extrapolate out from the present and get a clear picture of what the future is going to look like.

4. You Need to Start Now

The changes that will take place over the next decade or so are likely to be just as transformative—and possibly even more so—than those that happened in the 1920s and 30s. We are on the brink of a new era of innovation that will see the creation of entirely new industries and business models.

Yet the technologies that will drive the 21st century are still mostly in the discovery and engineering phases, so they’re easy to miss. Once the transformation begins in earnest, however, it will likely be too late to adapt. In areas like genomics, materials science, quantum computing and artificial intelligence, if you get a few years behind, you may never catch up.

So the time to start exploring these new technologies is now and there are ample opportunities to do so. The Manufacturing USA Institutes are driving advancement in areas as diverse as bio-fabrication, additive manufacturing and composite materials. IBM has created its Q Network to help companies get up to speed on quantum computing and the Internet of Things Consortium is doing the same thing in that space.

Make no mistake, if you don’t explore, you won’t discover. If you don’t discover you won’t invent. And if you don’t invent, you will be disrupted eventually, it’s just a matter of time. It’s always better to prepare than to adapt and the time to start doing that is now.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

Rise of the Prompt Engineer

GUEST POST from Art Inteligencia

The world of tech is ever-evolving, and the rise of the prompt engineer is just the latest development. Prompt engineers are software developers who specialize in building natural language processing (NLP) systems, like voice assistants and chatbots, to enable users to interact with computer systems using spoken or written language. This burgeoning field is quickly becoming essential for businesses of all sizes, from startups to large enterprises, to remain competitive.

Five Skills to Look for When Hiring a Prompt Engineer

But with the rapid growth of the prompt engineer field, it can be difficult to hire the right candidate. To ensure you’re getting the best engineer for your project, there are a few key skills you should look for:

1. Technical Knowledge: A competent prompt engineer should have a deep understanding of the underlying technologies used to create NLP systems, such as machine learning, natural language processing, and speech recognition. They should also have experience developing complex algorithms and working with big data.

2. Problem-Solving: Prompt engineering is a highly creative field, so the ideal candidate should have the ability to think outside the box and come up with innovative solutions to problems.

3. Communication: A prompt engineer should be able to effectively communicate their ideas to both technical and non-technical audiences in both written and verbal formats.

4. Flexibility: With the ever-changing landscape of the tech world, prompt engineers should be comfortable working in an environment of constant change and innovation.

5. Time Management: Prompt engineers are often involved in multiple projects at once, so they should be able to manage their own time efficiently.

These are just a few of the skills to look for when hiring a prompt engineer. The right candidate will be able to combine these skills to create effective and user-friendly natural language processing systems that will help your business stay ahead of the competition.

But what if you want or need to build your own artificial intelligence queries without the assistance of a professional prompt engineer?

Four Secrets of Writing a Good AI Prompt

As AI technology continues to advance, it is important to understand how to write a good prompt for AI to ensure that it produces accurate and meaningful results. Here are some of the secrets to writing a good prompt for AI.

1. Start with a clear goal: Before you begin writing a prompt for AI, it is important to have a clear goal in mind. What are you trying to accomplish with the AI? What kind of outcome do you hope to achieve? Knowing the answers to these questions will help you write a prompt that is focused and effective.

2. Keep it simple: AI prompts should be as straightforward and simple as possible. Avoid using jargon or complicated language that could confuse the AI. Also, try to keep the prompt as short as possible so that it is easier for the AI to understand.

3. Be specific: To get the most accurate results from your AI, you should provide a specific prompt that clearly outlines what you are asking. You should also provide any relevant information, such as the data or information that the AI needs to work with.

4. Test your prompt: Before you use your AI prompt in a real-world situation, it is important to test it to make sure that it produces the results that you are expecting. This will help you identify any issues with the prompt or the AI itself and make the necessary adjustments.

By following these tips, you can ensure that your AI prompt is effective and produces the results that you are looking for. Writing a good prompt for AI is a skill that takes practice, but by following these secrets you can improve your results.
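To make the four tips concrete, here is a small, purely illustrative Python sketch (not from the original article; the function name and structure are hypothetical) showing how a prompt might be assembled so that the goal, the audience, and the output format are always stated explicitly, and so the draft can be reviewed before it is handed to any AI tool.

    # Illustrative sketch: assembling a prompt that follows the four tips above.
    def build_prompt(goal, audience, context="", output_format="a short paragraph"):
        """Return a prompt string that states the goal, stays simple, and is specific."""
        parts = [
            f"Goal: {goal}",                   # Tip 1: start with a clear goal
            f"Audience: {audience}",           # Tip 3: be specific about who it is for
            f"Respond with {output_format}.",  # Tip 3: be specific about the output
        ]
        if context:
            # Tip 3: include any data or information the AI needs to work with
            parts.append(f"Relevant information: {context}")
        # Tip 2: keep it simple -- short, plain-language lines, no jargon
        return "\n".join(parts)

    # Tip 4: test the prompt -- print it, review it, and adjust before real use
    print(build_prompt(
        goal="Explain how often adults should brush their teeth",
        audience="patients of a family dental practice",
        output_format="three plain-language bullet points",
    ))

Running the sketch simply prints the assembled prompt, which you can paste into whatever AI tool you use and then refine based on the results you get back.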

So, whether you look to write your own AI prompts or feel the need to hire a professional prompt engineer, now you are equipped to be successful either way!

Image credit: Pexels

AI is a Powerful New Tool for Entrepreneurs

by Braden Kelley

In today’s digital, always connected world, Google too often stands as a gatekeeper between entrepreneurs and small businesses and financial success. Ranking well in the search engines requires time and expertise that many entrepreneurs and small business owners don’t have, because their focus must be on fine tuning the value proposition and operations of their business.

The day after Google was invented, the first search engine marketing firm was probably created to make money off of hard-working entrepreneurs and small business owners trying to make the most of their investment in a web site through search engine optimization (SEO), keyword advertising, and social media strategies.

According to IBISWorld the market size of the SEO & Internet Marketing Consulting industry is $75.0 Billion. Yes, that’s billion with a ‘b’.

Creating content for web sites is an even bigger market. According to Technavio, the global content marketing market is estimated to INCREASE by $584.0 Billion between 2022 and 2027. This is the growth number. The market itself is MUCH larger.

The introduction of ChatGPT threatens to upend these markets, to the detriment of this group of businesses, but to the benefit of the nearly 200,000 dentists in the United States, more than 100,000 plumbers, a million and a half real estate agents, and numerous other categories of small businesses.

Many of these content marketing businesses create a number of different types of content for the tens of millions of small businesses in the United States, from blog articles to tweets to Facebook pages and everything in-between. The content marketing agencies that small businesses hire employ recent college graduates or offshore resources in places like the Philippines, India, Pakistan, Ecuador, Romania, and lots of other locations around the world, and bill their work to their clients at a much higher rate.

Outsourcing content creation has been a great way for small businesses to leverage external resources so they can focus on the business, but now may be the time to bring some of this content creation work back in house, particularly where the content is pretty straightforward and informational for an average visitor to the web site.

With ChatGPT you can ask it to “write me an article on how to brush your teeth” or “write me ten tweets on toothbrushing” or “write me a Facebook post on the most common reasons a toilet won’t flush.”

I asked it to do the last one for me and here is what it came up with:

Continue reading the rest of this article on CustomerThink (including the ChatGPT results)
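The same kind of request can also be scripted rather than typed into the chat window. Below is a minimal sketch of what that might look like in Python; it assumes the official openai package (version 1 or later of its client interface) and an API key stored in the OPENAI_API_KEY environment variable, none of which is covered in the article, and the drafts it returns still need the same human review as anything produced interactively.

    # Minimal sketch: generating a small-business social post with the openai package.
    from openai import OpenAI

    # The client reads the API key from the OPENAI_API_KEY environment variable.
    client = OpenAI()

    def draft_post(topic: str) -> str:
        """Ask a chat model for a short piece of content on the given topic."""
        response = client.chat.completions.create(
            model="gpt-3.5-turbo",  # any available chat model will do
            messages=[{"role": "user",
                       "content": f"Write me a Facebook post on {topic}."}],
        )
        return response.choices[0].message.content

    print(draft_post("the most common reasons a toilet won't flush"))

Treat the output as a first draft: a dentist, plumber, or real estate agent still needs to check it for accuracy and adjust the tone before publishing it.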

Image credits: Pixabay

Struggling to Innovate? Try This Instead

GUEST POST from Robyn Bolton

Everyone is an innovator on January 1.

That’s the day when each of us resolves to do something new that creates value.

  • Start working out so I lose weight, look better, and feel healthier.
  • Stop smoking, so I live longer.
  • Turn off my computer and phone at 6:00 pm so I focus on family.

Only 20% of people are innovators on February 1. The rest of us gave up our resolutions and decided to keep doing the same things that create (good enough) value.

Your business is no different.

At the start of the fiscal year, you resolve to innovate!

  • Explore new offerings, customers, and business models
  • Experiment with new ways to get things done
  • Enter new markets

Then something goes wrong, and you divert some people (not everyone!) from innovating to fixing an operational problem.

Then the first quarter starts coming in below expectations, and you cut budgets to stay on track to deliver the bottom line.

Then something else happens, and something else, and something else, and soon it’s “February 1,” and, for excellent and logical reasons, you give up your resolution to innovate and focus all your resources on operating and hitting your KPIs.

Resolve to Revive.

Innovation is something NEW that creates value.

New is hard. It’s difficult to start something new, and it’s challenging to continue doing it when things inevitably go awry. Investing in something uncertain is risky, primarily when more “certain” investment opportunities exist. It’s why New Year’s resolutions and Innovation strategies don’t stick.

Revival is the creation of new value from OLD.

When you work on Revival, you go back to the old things, the things you explored, tried, implemented, or even launched years ago that didn’t work then but could create more value than anything you’re doing today.

Your business is filled with Revival opportunities.

How to Reveal Revivals

Ask, “What did we do before…?”

Everything we do now – research, development, marketing, sales, communication, M&A – was done before smartphones, laptops, desktops, and even mainframes. Often new technology makes our work easier or more efficient. But sometimes, it just creates work and bad habits.

If you are trying to make Zoom/Teams calls less exhausting and more productive, try to remember meetings before Zoom/Teams. They were conference calls. So, next time you need to meet, revive and schedule a phone conference (or a cameras-off Zoom/Teams call).

Find the failures

Most companies are highly skilled at hiding any evidence of failure. But the memories and stories live on in the people who worked on them. Talk to them, and you may discover a blockbuster idea that failed for reasons you can quickly address.

Like Post-It Notes.

While some parts of the Post-it Notes story are true – the adhesive was discovered by accident and first used to bookmark pages in a hymnal – most people don’t know that 10 YEARS passed between hymnal use and market success. In that decade, the project was shelved twice, failed in a test market, and given away as free samples before it became successful.

Resurrect the Dead

The decision to exit a market or discontinue a product is never easy or done lightly. And once management makes the decision, people operate under the assumption that the company should never consider returning. But that belief can sometimes be wrong.

Consider Yuengling, America’s oldest brewery and, for decades, an ice cream maker as well.

In 1829, David G. Yuengling founded Eagle Brewing in Pottsville, PA. The business did well until, you guessed it, Prohibition. In 1920, D.G. Yuengling & Son (formerly Eagle Brewing) built a plant across the street from their brewery and began producing ice cream. When Prohibition ended, brewing restarted and ice cream production continued until 1985, when a new generation took the helm at Yuengling and, under the guise of operational efficiency and business optimization, shut down the ice cream business to focus on beer. TWENTY-NINE YEARS later, executives looking for growth opportunities remembered the ice cream business and re-launched the product to overwhelming customer demand.

Just because you need growth doesn’t mean you need New.

Innovation is something new that creates value. But it doesn’t have to be new to the world.

Tremendous value can be created and captured by doing old things in new ways, markets, or eras.

After all, everything old is new again.

Image credit: Pexels

Globalization and Technology Have Failed Us

GUEST POST from Greg Satell

In November 1989, there were two watershed events that would change the course of world history. The fall of the Berlin Wall would end the Cold War and open up markets across the world. That very same month, Tim Berners-Lee would create the World Wide Web and usher in a new technological era of networked computing.

It was a time of great optimism. Books like Francis Fukuyama’s The End of History predicted a capitalist, democratic utopia, while pundits gushed over the seemingly never-ending parade of “killer apps,” from email and e-commerce to social media and the mobile web. The onward march of history seemed unstoppable.

Today, 30 years on, it’s time to take stock and the picture is somewhat bleak. Instead of a global technological utopia, there are a number of worrying signs ranging from income inequality to the rise of popular authoritarianism. The fact is that technology and globalization have failed us. It’s time to address some very real problems.

Where’s the Productivity?

Think back, if you’re old enough, to before this all started. Life prior to 1989 was certainly less modern: we didn’t have mobile phones or the Internet, but for the most part it was fairly similar to today. We rode in cars and airplanes, watched TV and movies, and enjoyed the benefits of home appliances and air conditioners.

Now try to imagine what life was like in 1900, before electricity and internal combustion gained wide adoption. Even doing a simple task like cooking a meal or cleaning the house took hours of backbreaking labor to haul wood and water. While going back to living in the 1980s would involve some inconvenience, we would struggle to survive before 1920.

The productivity numbers bear out this simple observation. The widespread adoption of electricity and internal combustion led to a 50-year boom in productivity between 1920 and 1970. The digital revolution, on the other hand, created only an 8-year blip between 1996 and 2004. Even today, with artificial intelligence on the rise, productivity remains depressed.

At this point, we have to conclude that despite all the happy talk and grand promises of “changing the world,” the digital revolution has been a huge disappointment. While Silicon Valley has minted billionaires at record rates, digital technology has not made most of us measurably better off economically.

Winners Taking All

The increase of globalization and the rise of digital commerce was supposed to be a democratizing force, increasing competition and breaking the institutional monopoly on power. Yet just the opposite seems to have happened, with a relatively small global elite grabbing more money and more power.

Consider market consolidation. An analysis published in the Harvard Business Review showed that from airlines to hospitals to beer, market share is increasingly concentrated in just a handful of firms. A more expansive study of 900 industries conducted by The Economist found that two-thirds have become more dominated by larger players.

Perhaps not surprisingly, we see the same trends in households as we do with businesses. The OECD reports that income inequality is at its highest level in over 50 years. Even in emerging markets, where millions have been lifted out of poverty, most of the benefits have gone to a small few.

The consequences of growing inequality are concrete and stark. Social mobility has been declining in America for decades, transforming the “land of opportunity” into what is increasingly a caste system. Anxiety and depression are rising to epidemic levels. Life expectancy for the white working class is actually declining, mostly because of “deaths of despair” from drugs, alcohol and suicide. The overall picture is dim and seemingly getting worse.

The Failure Of Freedom

Probably the biggest source of optimism in the 1990s was the end of the Cold War. Capitalism was triumphant and many of the corrupt, authoritarian societies of the former Soviet Union began embracing democracy and markets. Expansion of NATO and the EU brought new hope to more than a hundred million people. China began to truly embrace markets as well.

I moved to Eastern Europe in the late 1990s and was able to observe this amazing transformation for myself. Living in Poland, it seemed like the entire country was advancing through a lens of time-lapse photography. Old, gray concrete buildings gave way to modern offices and apartment buildings. A prosperous middle class began to emerge.

Yet here as well things now seem to be going the other way. Anti-democratic regimes are winning elections across Europe, while rising resentment against immigrant populations takes hold throughout the western world. In America, we are increasingly mired in a growing constitutional crisis.

What is perhaps most surprising about the retreat of democracy is that it is happening not in the midst of some sort of global depression, but during a period of relative prosperity and low unemployment. Nevertheless, positive economic data cannot mask the basic truth that a significant portion of the population feels that the system doesn’t work for them.

It’s Time To Start Taking Responsibility For A Messy World

Looking back, it’s hard to see how an era that began with such promise turned out so badly. Yes, we’ve got cooler gadgets and streaming video. There have also been impressive gains in the developing world. Yet in so-called advanced economies, we seem to be worse off. It didn’t have to turn out this way. Our current predicament is the result of choices that we made.

Put simply, we have the problems we have today because they are the problems we have chosen not to solve. While the achievements of technology and globalization are real, they have also left far too many behind. We focused on simple metrics like GDP and shareholder value, but unfortunately the world is not so elegant. It’s a messy place and doesn’t yield so easily to reductionist measures and strategies.

There has, however, been some progress. The Business Roundtable, an influential group of almost 200 CEOs of America’s largest companies, in 2019 issued a statement that discarded the old notion that the sole purpose of a business is to provide value to shareholders. There are also a number of efforts underway to come up with broader measures of well being to replace GDP.

Yet we still need to learn an important lesson: technology alone will not save us. To solve complex challenges like inequality, climate change and the rise of authoritarianism we need to take a complex, network-based approach. We need to build ecosystems of talent, technology and information. That won’t happen by itself; we have to make better choices.

— Article courtesy of the Digital Tonto blog
— Image credit: Pixabay
