Category Archives: Technology

How Has Innovation Changed Since the Pandemic?

The Answer in Three Charts

GUEST POST from Robyn Bolton

“Everything changed since the pandemic.”

At this point, my husband, a Navy veteran, is very likely to moo (yes, like a cow). It’s a habit he picked up as a submarine officer, something the crew would do whenever someone said something blindingly obvious because “moo” is not just a noise. It’s an acronym – Master Of the Obvious.

But HOW did things change?

From what, to what?

So what?

It can be hard to see the changes when you’re living and working in the midst of them. This is why I found “Benchmarking Innovation Impact,” a new report from InnoLead and KPMG US, so interesting, insightful, and helpful.

There’s lots of great stuff in the report (and no, this is not a sponsored post though I am a member), so I limited myself to the three charts that answer executives’ most frequently asked innovation questions.

Innovation Leader Research 2023 Chart 1

Question #1: What type of innovation should I pursue?

2023 Answer: Companies are investing more than half of their resources in incremental innovation

So What?:  I may very well be alone in this opinion, but I think this is great news for several reasons:

  1. Some innovation is better than none – Companies shifting their innovation spending to safer, shorter-term bets is infinitely better than shutting down all innovation, which is what usually happens during economic uncertainty.
  2. Play to your strengths – Established companies are, on average, better at incremental and adjacent innovation because they have the experience, expertise, resources, and culture required to do those well, and they have other ways (e.g., corporate venture capital, joint ventures) to pursue transformational innovation.
  3. Adjacent innovation is increasing – This is the sweet spot for corporate innovation (I may also be biased because Swiffer is an adjacent innovation) because it stretches the business into new customers, offerings, and/or business models without breaking the company or executives’ identities.

Innovation Leader Research 2023 Chart 2

Question #2: Is innovation really a leadership problem (or do you just have issues with authority)?

2023 Answer: Yes (and it depends on the situation). “Lack of Executive Support” is the #6 biggest challenge to innovation, up from #8 in 2020.

So What?: This is a good news/bad news chart.

The good news is that fewer companies are experiencing the top 5 challenges to innovation. Of course, leadership is central to fostering/eliminating turf wars, setting culture, acting on signals, allocating budgets, and setting strategy. Hence, leadership has a role in resolving these issues, too.

The bad news is that MORE innovators are experiencing a lack of executive support (24.3% vs. 19.7% in 2020) and “Other” challenges (17.3% vs. 16.4%), including:

  • “Different agendas held by certain leadership as to how to measure innovation and therefore how we go after innovation. Also, the time it takes to ‘sell’ an innovative idea or opportunity into the business; corporate bureaucracy.”
  • “Lack of actual strategy. Often, goals or visions are treated as strategy, which results in frustration with the organization’s ability to advance viable work and creates an unnecessary churn, resulting in confused decision-making.”
  • “Innovations are stalling after piloting due to lack of funding and executive support in order to shift to scaling. Many are just happy with PR innovation.”

Innovation Leader Research 2023 Chart 3

Question #3: How much should I invest in innovation?

2023 Answer: Most companies are maintaining past years’ budgets and team sizes.

So What?:  This is another good news/bad news set of charts.

The good news is that investment is staying steady. Companies that cut back or kill innovation investments due to economic uncertainty often find that they are behind competitors when the economy improves. Even worse, it takes longer than expected to catch up because they are starting from scratch regarding talent, strategy, and a pipeline.

The bad news is that investment is staying steady. If you want different results, you need to take different actions. And I don’t know any company that is thrilled with the results of its innovation efforts. Indeed, companies can do different things with existing budgets and teams, but there needs to be flexibility and a willingness to grow the budget and the team as projects progress closer to launch and scale-up.

Not MOO

Yes, everything has changed since the pandemic, but not as much as we think.

Companies are still investing in incremental, adjacent, and transformational innovation. They’re just investing more in incremental innovation.

Innovation is still a leadership problem, but leadership is less of a problem (congrats!)

Investment is still happening, but it’s holding steady rather than increasing.

And that is nothing to “moo” at.

Image credits: Pixabay, InnoLead

Our Innovation is All on Tape

Why Old Technologies Are Sometimes Still the Best Ones

GUEST POST from John Bessant

Close your eyes and imagine for a moment a computer room in the early days of the industry. Chances are you’ll picture large wardrobe-sized metal cabinets whirring away with white-coated attendants tending to the machines. And it won’t be long before your gaze lands on the ubiquitous spools of tape being loaded and unloaded.

Which might give us a smug feeling as we look at the storage options for our current generation of computers — probably based on some incredibly fast, high-capacity solid-state flash drive. It’s been quite a journey — the arc stretches a long way back from the recent years of USB sticks and SD cards, external HDDs and then the wonderful world of floppy discs, getting larger and more rigid as we go back in time, through the clunky 1980s when our home computers rode on cassette drives, right back to the prehistoric days when the high priests of minis and mainframes tended their storage flock of tapes.

Ancient history — except that the tape drive hasn’t gone away. In fact it’s alive and well and backing up our most precious memories. Look inside the huge data farms operated by Google, Apple, Amazon, Microsoft Azure or anyone else and you’ll find large computers — and lots of tape. Thousands of kilometres of it, containing everything from your precious family photos to email backups to data from research projects like the Large Hadron Collider.

It turns out that tape is still an incredibly reliable medium — and it has the considerable advantage of being cheap. The alternative would be buying lots of hard drives — something which increasingly matters as the volume of data we are storing grows. Think about the internet of things — all those intelligent devices, whether security cameras or mobile phones, manufacturing performance data loggers or hospital diagnostic equipment, are generating data which needs secure long-term storage. We’ve long since moved past the era of measuring storage in kilobytes or megabytes; now we’re into zettabytes, each one the equivalent of roughly 250 billion DVDs. Estimates suggest we produced close to 59 ZB of data in 2020, projected to rise to 175 ZB by 2025! Fortunately, IBM scientist Mark Lantz, an expert in storage, suggests that we can keep scaling tape, doubling capacity every 2.5 years for the next 20 years.

Plus tape offers a number of other advantages, not least in terms of security. Most of the time a tape cartridge is not plugged in to a computer and so is pretty immune to visiting viruses and malware.

In fact the market for magnetic tape storage is in robust health; it’s currently worth nearly $5bn and is expected to grow to double that size by 2030. Not bad for a technology coming up on its hundredth anniversary. Making all of this possible is, of course, our old friend innovation. It’s been a classic journey of incremental improvement, doing what we do but better, punctuated with the occasional breakthrough.

It started in 1877 when “Mary Had a Little Lamb” was recorded and played on Thomas Edison’s first experimental talking machine called a phonograph; the sounds were stored on wax cylinders and severely limited in capacity. The first tape recorder was developed in 1886 by Alexander Graham Bell in his labs using paper with beeswax coated on it. This patented approach never really took off because the sound reproduction was inferior to Edison’s wax cylinders.

Others soon explored alternatives; for example Franklin C. Goodale adapted movie film for analogue audio recording, receiving a patent for his invention in 1909. His film used a stylus to record and play back, essentially mimicking Edison’s approach but allowing for much more storage.

But in parallel with the wax-based approach another strand emerged in 1898, with the work of Voldemar Poulsen, a Danish scientist who built on an idea originally suggested ten years earlier by Oberlin Smith. This used the concept of a wire (which could be spooled) on which information was encoded magnetically. Poulsen’s model used cotton thread, steel sawdust and metal wire and was effectively the world’s first tape recorder; he called it a ‘telegraphone’.

Which brings us to another common innovation theme — convergence. If we fast-forward (itself a term which originated in the world of tape recording!) to the 1930s, we can see these two strands come together; German scientists working for the giant BASF company built on a patent registered to Fritz Pfleumer in 1928. They developed a magnetic tape using metal oxide coated on plastic tape which could be used in recording sound on a commercial basis; in 1934 they delivered the first 50,000 metres of it to the giant electronics corporation AEG.

The big advantage of magnetic recording was that it didn’t rely on a physical analogue being etched into wax or other medium; instead the patterns could be encoded and read as electrical signals. It wasn’t long before tape recording took over as the dominant design — and one of the early entrants was the 3M company in the USA. They had a long history of coating surfaces with particles, having begun life making sandpaper and moved on to create a successful business out of first adhesive masking tape and then the ubiquitous Scotch tape. Coating metal oxide on to tape was an obvious move and they quickly became a key player in the industry.

Innovation is always about the interplay between needs and means and the tape recording business received a fillip from the growing radio industry in the 1940s. Tape offered to simplify and speed up the recording process and an early fan was Bing Crosby. He’d become fed up with the heavy schedule of live broadcasting which kept him away from his beloved golf course and so was drawn to the idea of pre-recording his shows. But the early disc-based technology wasn’t really up to the task, filled with hisses and scratches and poor sound quality. Crosby’s sound engineer had come across the idea of tape recording and worked with 3M to refine the technology.

The very first radio show, anywhere in the world, to be recorded directly on magnetic tape was broadcast on 1 October 1947, featuring Crosby. It not only opened up a profitable line of new business for 3M, it also did its bit for changing the way the world consumed entertainment, be it drama, music hall or news. (It was also a shrewd investment for Crosby, who became one of the emerging industry’s backers.)

Which brings us to another kind of innovation interplay, this time between different approaches being taken in the worlds of consumer entertainment and industrial computing. Ever since Marconi, Tesla and others had worked on radio there had been a growing interest in consumer applications which could exploit the technology. And with the grandchildren of Edison’s phonograph, and in the 1940s the work on television, the home became an increasingly interesting space for electronics entrepreneurs.

But as the domestic market for fixed appliances became saturated, the search began for mobile solutions. Portability became an important driver for the industry and gave rise to the transistor radio; it wasn’t long before the in-car entertainment market began to take off. An early entrant from the tape playback side was the 8-track cartridge in the mid-1960s, which allowed you to listen to your favorite tracks without lugging a portable gramophone with you. Philips’ development of the compact cassette (and its free licensing of the idea to promote rapid and widespread adoption) led to an explosion in demand (over 100 billion cassette tapes were eventually sold worldwide) and eventually to the Walkman, the first portable personal device for playing back and recording music.

Without which we’d be a little less satisfied. Specifically, we’d never have been introduced to one of the Rolling Stones’ greatest hits; as guitarist Keith Richards explained in his 2010 autobiography:

“I wrote the song ‘Satisfaction’ in my sleep. I didn’t know at all that I had recorded it, the song only exists, thank God, to the little Philips cassette recorder. I looked at it in the morning — I knew I had put a new tape in the night before — but it was at the very end. Apparently, I had recorded something. I rewound and then ‘Satisfaction’ sounded … and then 40 minutes of snoring!”

Meanwhile back in the emerging computer industry of the 1950s there was a growing demand for storage media for which magnetic tape seemed well suited. Cue the images we imagined in the opening paragraph, acolytes dutifully tending the vast mainframe machines.

Early computers had used punched cards and then paper tape but these soon reached the limit of their usefulness; instead the industry began exploring magnetic audio tape.

IBM’s team under the leadership of Wayne Winger developed digital tape-based storage; of particular importance was finding ways to encode the 1s and 0s of binary patterns onto the tape. They introduced their commercial digital tape recorder in 1952, and it could store what was (for its time) an impressive 2 MB of data on a reel.
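
One common early scheme for writing bits onto magnetic media was NRZI (non-return-to-zero, inverted), in which a flux reversal on the tape represents a 1 and no reversal represents a 0. The snippet below is only a toy sketch of that general idea, not a claim about IBM’s exact implementation:

    def nrzi_encode(bits):
        """Toy NRZI encoder: flip the magnetization level for every 1 bit,
        hold it steady for every 0 bit; return the level written per cell."""
        level = 0
        track = []
        for bit in bits:
            if bit == 1:
                level ^= 1       # a 1 is recorded as a transition
            track.append(level)  # a 0 is recorded as "no change"
        return track

    print(nrzi_encode([1, 0, 1, 1, 0, 0, 1]))  # [1, 1, 0, 1, 1, 1, 0]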

Not everyone was convinced; as Winger recalled, “A white-haired IBM veteran in Poughkeepsie pulled a few of us aside and told us, ‘You young fellows remember, IBM was built on punched cards, and our foundation will always be punched cards.’” Fortunately, Tom Watson Jnr, son of the company founder, became a champion and the project went ahead.

But while tape dominated in the short term another parallel trajectory was soon established, replacing tapes and reels with disc drives whose big advantage was the ability to randomly access data rather than wait for the tape to arrive at the right place on the playback head. IBM once again led the way with its launch in 1956 of the hard disc drive and began a steady stream of innovation in which storage volumes and density increased while the size decreased. The landscape moved through various generations of external drives until the advent of personal computers where the drives migrated inside the box and became increasingly small (and floppy).

These developments were taken up by the consumer electronics industry with the growing use of discs as an alternative recording and playback medium, spanning various formats but also decreasing in size. Which of course opened the way for more portability, with Sony and Sharp launching MiniDisc players in the early 1990s.

All good news for the personal audio experience but less so for the rapidly expanding information technology industry. While new media storage technology continued to improve it came at a cost and with the exponential increase in volumes of data needing to be stored came a renewed interest in alternative (and cheaper) solutions. The road was leading back to good old-fashioned tape.

Its potential lies in long-term storage and retrieval of so-called ‘cold data’. Most of what is stored in the cloud today is this kind — images, emails, all sorts of backup files. And while these need to be around, they don’t have to be accessed instantly. That’s where tape has come back into its own. Today’s tapes have moved on somewhat from IBM’s limited 2 MB reel of 1952. They are smaller on the outside but their capacity has grown enormously — a cartridge can now hold 20 TB, or around 60 TB compressed — a ten-million-fold increase in 70 years. The tapes are no longer wound by hand onto capstans but loaded into cartridges, each of which holds around a kilometre of tape; companies use libraries containing tens of thousands of these cartridges, which are mounted by automated systems deploying robots. The process takes around 90 seconds to locate a cartridge and load the tape, so you could be forgiven for thinking it’s a bit slow compared to your flash drive, which has an access time measured in milliseconds.
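
As a rough back-of-the-envelope check on those figures (a sketch only, taking the 2 MB and 20 TB numbers quoted in this article at face value and using decimal units):

    # Rough arithmetic behind the capacity claims above (decimal units assumed).
    MB = 10**6
    TB = 10**12

    ibm_1952_reel = 2 * MB        # early IBM reel, as quoted above
    modern_cartridge = 20 * TB    # modern uncompressed cartridge, as quoted above
    print(modern_cartridge / ibm_1952_reel)  # 10000000.0 -> a ten-million-fold increase

    # Mark Lantz's projection: capacity doubling every 2.5 years for 20 more years
    print(2 ** (20 / 2.5))                   # 256.0 -> roughly 256x further growth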

There’s a pattern here — established and once important technologies giving way to the new kids on the block with their apparently superior performance. We’ve learned that we shouldn’t necessarily write the old technologies off — at the minimum there is often a niche for them amongst enthusiasts. Think about vinyl, about the anti-mp3 backlash from hi-fi fans or more recently photography using film and plates rather than their digital counterparts.

But it’s more than just nostalgia which drives this persistence of the old. Sometimes — like our magnetic tape — there are performance features which are worth holding on to — trading speed for security and lower storage cost, for example. Sometimes there is a particular performance niche which the new technology cannot enter competitively — for example the persistence of fax machines in healthcare where they offer a secure and reliable way of transmitting sensitive information. At the limit we might argue that neither cash nor physical books are as ‘good’ as their digital rivals but their persistence points to other attributes which people continue to find valuable.

And sometimes it is about the underlying accumulated knowledge which the old technology represents — and which might be redeployed to advantage in a different field. Think of Fujifilm’s resurgence as a cosmetics and pharmaceuticals company on the back of its deep knowledge of emulsions and coatings. Technologies which it originally mastered in the now largely disappeared world of film photography. Or Kodak’s ability to offer high speed high quality printing on the back of knowledge it originally acquired in the same old industry — that of accurately spraying and targeting millions of droplets on to a surface. And it was 3M’s deep understanding of how to coat materials on to tapes gained originally from selling masking tape to the paint shops of Detroit which helped it move so effectively into the field of magnetic tape.

Keeping these technologies alive isn’t about putting them on life support; as the IBM example demonstrates it needs a commitment to incremental innovation, driving and optimising performance. And there’s still room for breakthroughs within those trajectories; in the case of magnetic tape storage it came in 2010 in the form of the Linear Tape File System (LTFS) open standard. This allowed tape drives to emulate the random access capabilities of their hard disk competitors, using metadata about the location of data stored on the tapes.
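
In spirit, LTFS pairs the data on the tape with an index that records where each file sits, so a read can wind straight to the right spot instead of scanning the whole reel. The sketch below is a self-contained toy illustration of that idea only; the names and structures are invented for illustration and are not the real LTFS format:

    # Toy model of the LTFS idea: data on "tape" plus an index of where each file lives.
    tape = bytearray(1_000_000)   # pretend this byte array is the tape
    index = {}                    # the metadata: file name -> (offset, length)
    write_head = 0

    def tape_write(name, data):
        """Append a file to the tape and record where it landed."""
        global write_head
        index[name] = (write_head, len(data))
        tape[write_head:write_head + len(data)] = data
        write_head += len(data)

    def tape_read(name):
        """Jump straight to the recorded offset rather than scanning the whole tape."""
        offset, length = index[name]
        return bytes(tape[offset:offset + length])

    tape_write("family_photo.jpg", b"\x89PNG...")   # placeholder contents
    tape_write("email_backup.mbox", b"From: ...")
    print(tape_read("email_backup.mbox"))           # b'From: ...'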

Whichever way you look at it there’s a need for innovation, whether bringing a breakthrough to an existing field or helping sustain a particular niche for the long haul. And we shouldn’t be too quick to write off ‘old’ technologies as new ones emerge which appear superior. It’s worth remembering that the arrival of the steamship didn’t wipe out the shipyards building sailing ships around the world; it actually spurred them on to a golden era of performance improvement which it took steamships a long time to catch up with.

So, there’s often a lot of life left in old dogs, especially when we can teach them some new innovative tricks.

You can find a podcast version of this here and a video version here

And if you’d like to learn with me take a look at my online course here

Humans, Not Technology, Drive Business Success

GUEST POST from Greg Satell

Silicon Valley is often known as a cut-throat, technocratic place where the efficiency of algorithms defines success. Competition is ferocious and the pace of disruption and change can be dizzying. It’s not the type of environment where soft skills are valued particularly highly, or even at all.

So, it’s somewhat ironic that Bill Campbell became a Silicon Valley legend by giving hugs and professing love to those he worked with. As coach to executives ranging from Steve Jobs to the entire Google executive team, Campbell preached and practiced a very personal style of business.

Yet while I was reading Trillion Dollar Coach in which former Google executives explain Campbell’s leadership principles, it became clear why he had such an impact. Even in Silicon Valley, technology will only take you so far. The success of a business ultimately depends on the success of the people in it. To compete over the long haul, that’s where you need to focus.

The Efficiency Paradox

In 1911, Frederick Winslow Taylor published The Principles of Scientific Management, based on his experience as a manager in a steel factory. It took aim at traditional management methods and suggested a more disciplined approach. Rather than have workers pursue tasks in their own manner, he sought to find “the one best way” and train accordingly.

Taylor wrote, “It is only through enforced standardization of methods, enforced adoption of the best implements and working conditions, and enforced cooperation that this faster work can be assured. And the duty of enforcing the adoption of standards and enforcing this cooperation rests with management alone.”

Before long, Taylor’s ideas became gospel, spawning offshoots such as scientific marketing, financial engineering and the Six Sigma movement. It was no longer enough to simply work hard, you had to measure, analyze and optimize everything. Over the years these ideas have become so central to business thinking that they are rarely questioned.

Yet management guru Henry Mintzberg has pointed out how a “by-the-numbers” depersonalized approach can often backfire. “Managing without soul has become an epidemic in society. Many managers these days seem to specialize in killing cultures, at the expense of human engagement.”

The evidence would seem to back him up. One study found that of 58 large companies that have announced Six Sigma programs, 91 percent trailed the S&P 500 in stock performance. That, in essence, is the efficiency paradox. When you manage only what you can measure, you end up ignoring key factors to success.

How Generosity Drives Innovation

While researching my book, Mapping Innovation, I interviewed dozens of top innovators. Some were world class scientists and engineers. Others were high level executives at large corporations. Still others were highly successful entrepreneurs. Overall, it was a pretty intimidating group.

So, I was surprised to find that, with few exceptions, they were some of the kindest and most generous people I have ever met. The behavior was so consistent that I felt that it couldn’t be an accident. So I began to research the matter further and found that when it comes to innovation, generosity really is a competitive advantage.

For example, one study of star engineers at Bell Labs found that the best performers were not the ones with the best academic credentials, but those with the best professional networks. A similar study of the design firm IDEO found that great innovators essentially act as brokers able to access a diverse array of useful sources.

A third study helps explain why knowledge brokering is so important. Analyzing 17.9 million papers, the researchers found that the most highly cited work tended to be largely rooted within a traditional field, but with just a smidgen of insight taken from some unconventional place. Breakthrough creativity occurs at the nexus of conventionality and novelty.

The truth is that the more you share with others, the more they’ll be willing to share with you and that makes it much more likely you’ll come across that random piece of information or insight that will allow you to crack a really tough problem.

People As Profit Centers

For many, the idea that innovation is a human centered activity is intuitively obvious. So it makes sense that the high-tech companies that Bill Campbell was involved in would work hard to create environments to attract the best and the brightest people. However, most businesses have much lower margins and have to keep a close eye on the bottom line.

Yet here too there is significant evidence that a human-focused approach to management can yield better results. In The Good Jobs Strategy MIT’s Zeynep Ton found that investing more in well-trained employees can actually lower costs and drive sales. A dedicated and skilled workforce results in less turnover, better customer service and greater efficiency.

For example, when the recession hit in 2008, Mercadona, Spain’s leading discount retailer, needed to cut costs. But rather than cutting wages or reducing staff, it asked its employees to contribute ideas. The result was that it managed to reduce prices by 10% and increased its market share from 15% in 2008 to 20% in 2012.

Its competitors maintained the traditional mindset. They cut wages and employee hours, which saved them some money, but customers found poorly maintained stores with few people to help them, which damaged their brands long-term. The cost savings Mercadona’s employees identified, on the other hand, in many cases improved service and productivity, and these gains persisted long after the crisis was over.

Management Beyond Metrics

The truth is that it’s easy to talk about putting people first, but much harder to do it in practice. Research suggests that once a group goes much beyond 200 people social relationships break down, so once a business gets beyond that point, it becomes natural to depersonalize management and focus on metrics.

Yet the best managers understand that it’s the people that drive the numbers. As legendary IBM CEO Lou Gerstner once put it, “Culture isn’t just one aspect of the game… It is the game. What does the culture reward and punish – individual achievement or team play, risk taking or consensus building?”

In other words, culture is about values. The innovators I interviewed for my book valued solving problems, so were enthusiastic about sharing their knowledge and expertise with others, who happily reciprocated. Mercadona valued its people, so when it asked them to find ways to save money during the financial crisis, they did so enthusiastically.

That’s why today, three years after his death, Bill Campbell remains a revered figure in Silicon Valley, because he valued people so highly and helped them learn to value each other. Management is not an algorithm. It is, in the final analysis, an intensely human activity and to do it well, you need to put people first.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

Innovation and the Silicon Valley Bank Collapse

Why It’s Bad News and Good News for Corporate Innovation

GUEST POST from Robyn Bolton

Last week, as news of Silicon Valley Bank’s losses and eventual collapse took over the news cycle, attention understandably turned to the devastating impact on the startup ecosystem.

Prospects brightened a bit on Monday with news that the federal government would make all depositors whole. Startups, VCs, and others in the ecosystem would be able to continue operations and make payroll, and SVB’s collapse would be just another cautionary tale.

But the impact of SVB’s collapse isn’t confined to the startup ecosystem or the banking industry.

Its impact (should have) struck fear and excitement into the hearts of every executive tasked with growing their business.

Your Portfolio’s Risk Profile Just Changed

The early 2000s were the heyday of innovation teams and skunkworks, but as these internal efforts struggled to produce significant results, companies started looking beyond their walls for innovation. Thus began the era of Corporate Venture Capital (CVC).

Innovation, companies realized, didn’t need to be incubated. It could be purchased.

Often at a lower price than the cost of an in-house team.

And it felt less risky. After all, other companies were doing it and it was a hot topic in the business press. Plus, making investments felt much more familiar and comfortable than running small-scale experiments and questioning the status quo.

Between 2010 and 2020, the number of corporate investors increased more than 6x to over 4,000, investment ballooned to nearly $170B in 2021 (up 142% from 2020), and 1,317 CVC-backed deals were closed in Q1 of 2020.

But, with SVB’s collapse, the perceived risk of startup investing suddenly changed.

Now startups feel riskier. Venture Capital firms are pulling back, and traditional banks are prohibited from stepping forward to provide the venture debt many startups rely on. While some see this as an opportunity for CVC to step up, that optimism ignores the fact that companies are, by nature and necessity, risk averse and more likely to follow the herd than lead it.

Why This is Bad News

As CVC, Open Innovation, and joint ventures became the preferred path to innovation and growth, internal innovation shifted to events – hackathons, shark tanks, and Silicon Valley field trips.

Employees were given the “freedom” to innovate within a set time and maybe even some training on tools like Design Thinking and Lean Startup. But behind closed doors, executives spoke of these events as employee retention efforts, not serious efforts to grow the business or advance critical strategies.

Employees eventually saw these events for what they were – innovation theater, activities designed to appease them and create feel-good stories for investors. In response, employees either left for places where innovation (or at least the curiosity and questions required) was welcomed, or they stayed, wiser and more cynical about management’s true intentions.

Then came the pandemic and a recession. Companies retreated further into themselves, focused more on core operations, and cut anything that wouldn’t generate financial results in 12 months or less.

Innovation muscles atrophied.

Just at the moment they need to be flexed most.

Why This is Good News

As the risk of investment in external innovation increases, companies will start looking for other ways to innovate and grow. Ways that feel less risky and give them more control.

They’ll rediscover Internal Innovation.

This is the silver lining of the dark SVB cloud – renewed investment in innovation, not as an event or activity to appease employees, but as a strategic tool critical to delivering strategic priorities and accelerating growth.

And, because this is our 2nd time around, we know it’s not about internal innovation teams OR external partners/investments. It’s about internal innovation teams AND external partners/investments.

Both are needed, and both can be successful if they:

  1. Are critical enablers of strategic priorities
  2. Pursue realistic goals (stretch, don’t splatter!)
  3. Receive the people and resources required to deliver against those goals
  4. Are empowered to choose progress over process
  5. Are supported by senior leaders with words AND actions

What To Do Now

When it comes to corporate innovation teams, many companies are starting from nothing. Some companies have files and playbooks they can dust off. A few have 1 or 2 people already working.

Whatever your starting point is, start now.

Just do me one favor. When you start pulling the team together, remember LL Cool J, “Don’t call it a comeback, I been here for years.”

Image credit: Wikimedia Commons

Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s clever innovation, but also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try to anticipate technical challenges, and run all sorts of stress, stability, and consumer tests in an effort to spot potential problems. We often still miss stuff, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is Blind, and innovators who’ve invested themselves into a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus by recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. This said, I do believe how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do great good or cause great harm.

Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But Behavioral Science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media. There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before.

And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume. It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications. There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully or many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it’s all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, and thus potentially enable damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovations accelerates, does the innovation community need to adopt some form of ‘do no harm’ Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try and anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Air bags initially increased deaths of children in automobile accidents, while prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the ‘Cobra Effect’. The British in India were worried about the number of venomous cobra snakes, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set the now-worthless snakes free. The result was more cobras than at the original start point. It’s amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay

The Life of a Corporate Innovator

As Told in Three Sonnets

GUEST POST from Robyn Bolton

Day 1

Oh innovation, a journey just begun

A bold quest filled with challenges, risks, and dreams,

A path of creativity, knowledge and fun,

That will bring change, growth and a brighter scene.

Do not be afraid, though unknowns abound,

For greatness starts with small unsteady steps

Take courage and embrace each change that’s found,

And trust that success will be the final event.

Remember, every challenge is a chance,

To learn, grow, and shape thy future bright,

And every obstacle a valuable dance,

That helps thee forge a path that’s just and right.

So go forth, my friend, and boldly strive,

To make innovation flourish and thrive.

The Abyss (Death and Rebirth)

Fight on corporate innovator, who art so bold

And brave despite the trials that thou hast,

Thou hast persevered through promises cold,

And fought through budget cuts that came so fast.

Thou hast not faltered, nor did thou despair,

Despite the lack of resources at thy door,

Thou hast with passion, worked beyond repair,

And shown a steel spine that’s hard to ignore.

Thou art a shining example to us all,

A beacon of hope in times that are so bleak,

Thou art a hero, standing tall and strong,

And leading us to victories that we seek.

So let us celebrate thy unwavering faith,

And honor thee, innovator of great grace.

The Triumph

My dear intrapreneur, well done,

The launch of thy innovation is a feat,

A result of years of hard work, and fun,

That sets a shining example for all to meet.

Thou hast persevered through many a trial,

With unwavering determination and drive,

And now, thy hard work doth make thee smile,

As thy business doth grow and thrive.

This triumph is a testament to thee,

Of thy creativity, passion, and might,

And serves as a reminder of what can be,

When we pour our hearts into what is right.

So let us raise a glass and celebrate,

Thy success, and the joy innovation hath created!

These sonnets were created with the help of ChatGPT

Image credit: Pixabay

Artificial Intelligence is Forcing Us to Answer Some Very Human Questions

GUEST POST from Greg Satell

Chris Dixon, who invested early in companies ranging from Warby Parker to Kickstarter, once wrote that the next big thing always starts out looking like a toy. That’s certainly true of artificial intelligence, which started out playing games like chess and Go, and competing against humans on the game show Jeopardy!

Yet today, AI has become so pervasive we often don’t even recognize it anymore. Besides enabling us to speak to our phones and get answers back, intelligent algorithms are often working in the background, providing things like predictive maintenance for machinery and automating basic software tasks.

As the technology becomes more powerful, it’s also forcing us to ask some uncomfortable questions that were once more in the realm of science fiction or late-night dorm room discussions. When machines start doing things traditionally considered to be uniquely human, we need to reevaluate what it means to be human and what is to be a machine.

What Is Original and Creative?

There is an old literary concept called the Infinite Monkey Theorem. The basic idea is that if you had an infinite number of monkeys pecking away at an infinite number of keyboards, they would, in time, produce the complete works of Shakespeare or Tolstoy or any other literary masterpiece.

Today, our technology is powerful enough to simulate infinite monkeys and produce something that looks a whole lot like original work. Music scholar and composer David Cope has been able to create algorithms that produce original works of music which are so good that even experts can’t tell the difference. Companies like Narrative Science are able to produce coherent documents from raw data this way.

So there’s an interesting philosophical discussion to be had about what qualifies as true creation and what’s merely curation. If an algorithm produces War and Peace randomly, does it retain the same meaning? Or is the intent of the author a crucial component of what creativity is about? Reasonable people can disagree.

However, as AI technology becomes more common and pervasive, some very practical issues are arising. For example, Amazon’s Audible unit has created a new captions feature for audio books. Publishers sued, saying it’s a violation of copyright, but Amazon claims that because the captions are created with artificial intelligence, it is essentially a new work.

When machines can create, does that qualify as original, creative intent? Under what circumstances can a work be considered new and original? We are going to have to decide.

Bias And Transparency

We generally accept that humans have biases. In fact, Wikipedia lists over 100 documented biases that affect our judgments. Marketers and salespeople try to exploit these biases to influence our decisions. At the same time, professional training is supposed to mitigate them. To make good decisions, we need to conquer our tendency for bias.

Yet however much we strive to minimize bias, we cannot eliminate it, which is why transparency is so crucial for any system to work. When a CEO is hired to run a corporation, for example, he or she can’t just make decisions willy nilly, but is held accountable to a board of directors who represent shareholders. Records are kept and audited to ensure transparency.

Machines also have biases which are just as pervasive and difficult to root out. Amazon had to scrap an AI system that analyzed resumes because it was biased against female candidates. Google’s algorithm designed to detect hate speech was found to be racially biased. If two of the most sophisticated firms on the planet are unable to eliminate bias, what hope is there for the rest of us?

So, we need to start asking the same questions of machine-based decisions as we do of human ones. What information was used to make a decision? On what basis was a judgment made? How much oversight should be required and by whom? We all worry about who and what are influencing our children; we need to ask the same questions about our algorithms.

The Problem of Moral Agency

For centuries, philosophers have debated the issue of what constitutes a moral agent, meaning to what extent someone is able to make and be held responsible for moral judgments. For example, we generally do not consider those who are insane to be moral agents. Minors under the age of eighteen are also not fully held responsible for their actions.

Yet sometimes the issue of moral agency isn’t so clear. Consider a moral dilemma known as the trolley problem. Imagine you see a trolley barreling down the tracks that is about to run over five people. The only way to save them is to pull a lever to switch the trolley to a different set of tracks, but if you do one person standing there will be killed. What should you do?

For the most part, the trolley problem has been a subject for freshman philosophy classes and avant-garde cocktail parties, without any real bearing on actual decisions. However, with the rise of technologies like self-driving cars, decisions such as whether to protect the life of a passenger or a pedestrian will need to be explicitly encoded into the systems we create.

On a more basic level, we need to ask who is responsible for a decision an algorithm makes, especially since AI systems are increasingly capable of making judgments humans can’t understand. Who is culpable for an algorithmically driven decision gone bad? By what standard should they be evaluated?

Working Towards Human-Machine Coevolution

Before the industrial revolution, most people earned their living through physical labor. Much like today, tradesmen saw mechanization as a threat — and indeed it was. There’s not much work for blacksmiths or loom weavers these days. What wasn’t clear at the time was that industrialization would create a knowledge economy and demand for higher-paid cognitive work.

Today, we’re going through a similar shift, but now machines are taking over cognitive tasks. Just as the industrial revolution devalued certain skills and increased the value of others, the age of thinking machines is catalyzing a shift from cognitive skills to social skills. The future will be driven by humans collaborating with other humans to design work for machines that creates value for other humans.

Technology is, as Marshall McLuhan pointed out long ago, an extension of man. We are constantly coevolving with our creations. Value never really disappears, it just shifts to another place. So, when we use technology to automate a particular task, humans must find a way to create value elsewhere, which creates an opportunity to create new technologies.

This is how humans and machines coevolve. The dilemma that confronts us now is that when machines replace tasks that were once thought of as innately human, we must redefine ourselves and that raises thorny questions about our relationship to the moral universe. When men become gods, the only thing that remains to conquer is ourselves.

— Article courtesy of the Digital Tonto blog
— Image credit: Unsplash

The AI Apocalypse is Here

3 Reasons You Should Celebrate!

GUEST POST from Robyn Bolton

Whelp, the apocalypse is upon us. Again.

This time the end of the world is brought to you by AI.

How else do you explain the unending stream of headlines declaring that AI will eliminate jobs, destroy the education system, and rip the heart and soul out of culture and the arts? What more proof do you need of our imminent demise than that AI is as intelligent as a Wharton MBA?

We are doomed!

(Deep breath)

Did you get the panic out of your system? Feel better?

Good.

Because AI is also creating incredible opportunities for you, as a leader and innovator, to break through the inertia of the status quo, drive meaningful change, and create enormous value.

Here are just three of the ways AI will help you achieve your innovation goals:

1. Surface and question assumptions

Every company has assumptions that have been held and believed for so long that they hardened into fact. Questioning these assumptions is akin to heresy and done only by people without regard for job security or their professional reputation.

My favorite example of an assumption comes from the NYC public school district, whose spokesperson explained the decision to ban ChatGPT by saying, “While the tool may be able to provide quick and easy answers to questions, it does not build critical-thinking and problem-solving skills, which are essential for academic and lifelong success.”

Buried just under the surface of this statement is the assumption that current teaching methods, specifically essays, do build critical thinking and problem-solving skills.

But is that true?

Or have we gotten so used to believing that essays demonstrate critical thinking and problem-solving that we’ve become blind to the fact that most students (yes, even, and maybe especially, the best students) follow the recipe that produces an essay that mirrors teachers’ expectations?

Before ChatGPT, only the bravest teachers questioned the value of essays as a barometer of critical thinking and problem-solving. After ChatGPT, scores of teachers took to TikTok and other social media platforms to share how they’re embracing the tool, using it alongside traditional tools like essays, to help their students build skills “essential for academic and lifelong success.”

2. EQ, not IQ, drives success

When all you need to do is type a question into a chatbot, and the world’s knowledge is synthesized and fed back to you in a conversational tone (or any tone you prefer), it’s easier to be the smartest person in the room.

Yes, there will always be a need for deep subject-matter experts, academics, and researchers who can push our knowledge beyond its current frontiers. But most people in most companies don’t need that depth of expertise.

Instead, you need to know enough to evaluate the options in front of you, make intelligent decisions, and communicate those decisions to others in a way that (ideally) inspires them to follow.

It’s that last step that creates an incredible opportunity for you. If facts and knowledge were all people needed to act, we would all be fit, healthy, and have absolutely no bad habits.

For example, the first question I asked ChatGPT was, “Why is it hard for big companies to innovate?” When it finished typing its 7-point answer, I nodded and thought, “Yep, that’s exactly right.”

The same thing happened when I asked the next question, “What should big companies do to be more innovative?”  I burst out laughing when the answer started with “It depends” and then nodded at the rest of its extremely accurate response.

It would be easy (and not entirely untrue) to say that this is the beginning of the end of consultants, but ChatGPT didn’t write anything that wasn’t already written in thousands of articles, books, and research papers.

Change doesn’t happen just because you know the answer. Change happens when you believe the answer and trust the people leading and walking alongside you on the journey.

3. Eliminate the Suck

Years ago, I spoke with Michael B. Johnson, Pixar’s Head of R&D, and he said something I’ll never forget – “Pain is temporary. Suck is forever.”

He meant this, of course, in the context of making a movie. There are periods of pain in movie-making – long days and nights, times when vast swaths of work get thrown out, moments of brutal and public feedback – but that pain is temporary. The movie you make is forever. And if it sucks, it sucks forever.

Sometimes the work we do is painful but temporary. Sometimes doing the work sucks, and we will need to keep doing it forever. Expense reports. Weekly update emails. Timesheets. These things suck. But they must be done.

Let AI do them and free yourself up to do things that don’t suck. Imagine the conversations you could have, ideas you could try, experiments you could run, and people you could meet if you no longer had to do things that suck.

Change is coming. And that’s good news.

Change can be scary, and it can be difficult. There will be people who lose more than they gain. But, overall, we will gain far more than we lose because of this new technology.

If you have any more doubts, I double-checked with an expert.

“ChatGPT is not a sign of the apocalypse. It is a tool created by humans to assist with language-based tasks. While artificial intelligence and other advanced technologies can bring about significant changes in the way we live and work, they do not necessarily signal the end of the world.”

ChatGPT in response to “Is ChatGPT a sign of the apocalypse?”

Image credit: Pixabay

The Coming Innovation Slowdown

GUEST POST from Greg Satell

Take a moment to think about what the world must have looked like to J.P. Morgan a century ago, in 1919. He was not only an immensely powerful financier with access to the great industrialists of the day, but also an early adopter of new technologies. One of the first electric generators was installed at his home.

The disruptive technologies of the day, electricity and internal combustion, were already almost 40 years old, but had little measurable economic impact. Life largely went on as it always had. That would quickly change over the next decade when those technologies would drive a 50-year boom in productivity unlike anything the world had ever seen before.

It is very likely that we are at a similar point now. Despite significant advances in technology, productivity growth has been depressed for most of the last 50 years. Over the next ten years, however, we’re likely to see that change as nascent technologies hit their stride and create completely new industries. Here’s what you’ll need to know to compete in the new era.

1. Value Will Shift from Bits to Atoms

Over the past few decades, innovation has become almost synonymous with digital technology. Every 18 months or so, semiconductor manufacturers would bring out a new generation of processors that were twice as powerful as what came before. These, in turn, would allow entrepreneurs to imagine completely new possibilities.

However, while the digital revolution has given us snazzy new gadgets, the impact has been muted. Sure, we have hundreds of TV channels and we’re able to talk to our machines and get coherent answers back, but even at this late stage, information and communication technologies make up only about 6% of GDP in advanced countries.

At first, that sounds improbable. How could so much change produce so little effect? But think about going to a typical household in 1960, before the digital revolution took hold. You would likely see a TV, a phone, household appliances and a car in the garage. Now think of a typical household in 1910, with no electricity or running water. Even simple chores like cooking and cleaning took hours of backbreaking labor.

The truth is that much of our economy is still based on what we eat, wear and live in, which is why it’s important that the nascent technologies of today, such as synthetic biology and materials science, are rooted in the physical world. Over the next generation, we can expect innovation to shift from bits back to atoms.

2. Innovation Will Slow Down

We’ve come to take it for granted that things always accelerate because that’s what has happened for the past 30 years or so. So we’ve learned to deliberate less, to rapidly prototype and iterate and to “move fast and break things” because, during the digital revolution, that’s what you needed to do to compete effectively.

Yet microchips are a very old technology that we’ve come to understand very, very well. When a new generation of chips came off the line, they were faster and better, but worked the same way as earlier versions. That won’t be true with new computing architectures such as quantum and neuromorphic computing. We’ll have to learn how to use them first.

In other cases, such as genomics and artificial intelligence, there are serious ethical issues to consider. Under what conditions is it okay to permanently alter the germ line of a species? Who is accountable for the decisions an algorithm makes? On what basis should those decisions be made? To what extent do they need to be explainable and auditable?

Innovation is a process of discovery, engineering and transformation. At the moment, we find ourselves at the end of one transformational phase and about to enter a new one. It will take a decade or so to understand these new technologies enough to begin to accelerate again. We need to do so carefully. As we have seen over the past few years, when you move fast and break things, you run the risk of breaking something important.

3. Ecosystems Will Drive Technology

Let’s return to J.P. Morgan in 1919 and ask ourselves why electricity and internal combustion had so little impact up to that point. Automobiles and electric lights had been around a long time, but adoption takes time. It takes a while to build roads, to string wires and to train technicians to service new inventions reliably.

As economist Paul David pointed out in his classic paper, The Dynamo and the Computer, it takes time for people to learn how to use new technologies. Habits and routines need to change to take full advantage of new technologies. For example, in factories, the biggest benefit electricity provided was through enabling changes in workflow.

The biggest impacts come from secondary and tertiary technologies, such as home appliances in the case of electricity. Automobiles did more than provide transportation; they enabled a shift from corner stores to supermarkets and, eventually, shopping malls. Refrigerated railroad cars revolutionized food distribution. Supply chains were transformed. Radios, and later TV, reshaped entertainment.

Nobody, not even someone like J.P. Morgan, could have predicted all that in 1919, because it’s ecosystems, not inventions, that drive transformation, and ecosystems are non-linear. We can’t simply extrapolate out from the present and get a clear picture of what the future is going to look like.

4. You Need to Start Now

The changes that will take place over the next decade or so are likely to be just as transformative as those that happened in the 1920s and 30s, and possibly even more so. We are on the brink of a new era of innovation that will see the creation of entirely new industries and business models.

Yet the technologies that will drive the 21st century are still mostly in the discovery and engineering phases, so they’re easy to miss. Once the transformation begins in earnest, however, it will likely be too late to adapt. In areas like genomics, materials science, quantum computing and artificial intelligence, if you get a few years behind, you may never catch up.

So the time to start exploring these new technologies is now and there are ample opportunities to do so. The Manufacturing USA Institutes are driving advancement in areas as diverse as bio-fabrication, additive manufacturing and composite materials. IBM has created its Q Network to help companies get up to speed on quantum computing and the Internet of Things Consortium is doing the same thing in that space.

Make no mistake, if you don’t explore, you won’t discover. If you don’t discover, you won’t invent. And if you don’t invent, you will eventually be disrupted; it’s just a matter of time. It’s always better to prepare than to adapt, and the time to start doing that is now.

— Article courtesy of the Digital Tonto blog
— Image credit: Pexels

Rise of the Prompt Engineer

GUEST POST from Art Inteligencia

The world of tech is ever-evolving, and the rise of the prompt engineer is just the latest development. Prompt engineers are software developers who specialize in building natural language processing (NLP) systems, like voice assistants and chatbots, to enable users to interact with computer systems using spoken or written language. This burgeoning field is quickly becoming essential for businesses of all sizes, from startups to large enterprises, to remain competitive.

Five Skills to Look for When Hiring a Prompt Engineer

But with the rapid growth of the prompt engineering field, it can be difficult to hire the right candidate. To ensure you’re getting the best engineer for your project, there are a few key skills you should look for:

1. Technical Knowledge: A competent prompt engineer should have a deep understanding of the underlying technologies used to create NLP systems, such as machine learning, natural language processing, and speech recognition. They should also have experience developing complex algorithms and working with big data.

2. Problem-Solving: Prompt engineering is a highly creative field, so the ideal candidate should have the ability to think outside the box and come up with innovative solutions to problems.

3. Communication: A prompt engineer should be able to effectively communicate their ideas to both technical and non-technical audiences in both written and verbal formats.

4. Flexibility: With the ever-changing landscape of the tech world, prompt engineers should be comfortable working in an environment of constant change and innovation.

5. Time Management: Prompt engineers are often involved in multiple projects at once, so they should be able to manage their own time efficiently.

These are just a few of the skills to look for when hiring a prompt engineer. The right candidate will be able to combine these skills to create effective and user-friendly natural language processing systems that will help your business stay ahead of the competition.

But what if you want or need to build your own artificial intelligence queries without the assistance of a professional prompt engineer?

Four Secrets of Writing a Good AI Prompt

As AI technology continues to advance, it is important to understand how to write a good prompt for AI to ensure that it produces accurate and meaningful results. Here are some of the secrets to writing a good prompt for AI.

1. Start with a clear goal: Before you begin writing a prompt for AI, it is important to have a clear goal in mind. What are you trying to accomplish with the AI? What kind of outcome do you hope to achieve? Knowing the answers to these questions will help you write a prompt that is focused and effective.

2. Keep it simple: AI prompts should be as straightforward and simple as possible. Avoid using jargon or complicated language that could confuse the AI. Also, try to keep the prompt as short as possible so that it is easier for the AI to understand.

3. Be specific: To get the most accurate results from your AI, you should provide a specific prompt that clearly outlines what you are asking. You should also provide any relevant information, such as the data or information that the AI needs to work with.

4. Test your prompt: Before you use your AI prompt in a real-world situation, it is important to test it to make sure that it produces the results that you are expecting. This will help you identify any issues with the prompt or the AI itself and make the necessary adjustments.

By following these tips, you can ensure that your AI prompt is effective and produces the results you are looking for. Writing a good prompt is a skill that takes practice, but these secrets will improve your results.
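To make the four tips concrete, here is a minimal Python sketch of how they might look in practice. Everything in it is illustrative: build_prompt, test_prompt, and ask_model are hypothetical helper names, and ask_model is only a placeholder for whichever AI provider’s API you actually call.

```python
# A minimal, illustrative sketch of the four prompt-writing tips above.
# `ask_model` is a hypothetical stand-in for whatever chat/completions
# client you actually use; replace it with your provider's real API call.

def ask_model(prompt: str) -> str:
    """Hypothetical wrapper around an AI provider's API."""
    raise NotImplementedError("Replace with a real API call.")


def build_prompt(goal: str, data: str) -> str:
    # Tips 1-3: state a clear goal, keep the language simple,
    # and include the specific data the AI should work from.
    return (
        f"Goal: {goal}\n"
        "Use only the information below.\n"
        f"Data:\n{data}\n"
        "Answer in three short bullet points."
    )


def test_prompt(prompt: str, must_mention: list[str]) -> bool:
    # Tip 4: test the prompt before relying on it by checking that
    # the answer mentions the things you expect it to cover.
    answer = ask_model(prompt)
    return all(term.lower() in answer.lower() for term in must_mention)


if __name__ == "__main__":
    prompt = build_prompt(
        goal="Summarize last quarter's customer complaints by theme.",
        data="(paste the complaint log here)",
    )
    print(prompt)  # review the prompt itself before spending API calls
```

The only design choice here is to separate building the prompt from testing it, so you can iterate on the wording without touching the code that calls the model.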

So, whether you look to write your own AI prompts or feel the need to hire a professional prompt engineer, now you are equipped to be successful either way!

Image credit: Pexels
