
Innovation or Not – Oklahoma State Football Helmets Seek to Revolutionize NIL

GUEST POST from Art Inteligencia

In the rapidly changing landscape of collegiate athletics, the Name, Image, and Likeness (NIL) revolution is creating both challenges and opportunities. Oklahoma State University (OSU) is taking a bold step to embrace this shift by introducing a unique, possibly groundbreaking concept – integrating NIL into their football helmets.

The Concept

OSU’s idea is straightforward yet revolutionary: use the football helmet as a platform for NIL branding. Instead of traditional school logos or player numbers, the helmets will display personal brand logos and endorsements. This turns every game into a live advertisement for players, directly tying their on-field performance to their marketability.

Key Elements of the Concept

  • Player-Centric Branding: Helmets will feature personalized logos or endorsements chosen by players, subject to NIL agreements.
  • Dynamic Advertising: The design can change weekly or according to the duration of individual endorsement deals.
  • Visibility and Impact: Enhances the visibility of players’ personal brands during high-visibility game broadcasts.

Potential Benefits

This innovative approach could have several major advantages:

For Players

  • Increased earning potential through personalized brand endorsements.
  • Enhanced marketability by combining athletic performance with brand visibility.
  • Empowerment in controlling their personal brand narrative.

For Schools

  • Attracting top talent by offering a unique platform for NIL opportunities.
  • Strengthening alumni and fan base connection through support of player-driven initiatives.
  • Potential new revenue streams through partnerships with brands aligned with athletes.

Challenges and Considerations

However, this initiative is not without its challenges. Key concerns include:

  • Ensuring fair and equitable opportunities for all players, regardless of their profile or position on the team.
  • Navigating NCAA regulations and maintaining compliance with NIL guidelines.
  • Managing potential conflicts between school sponsorship agreements and individual player deals.
  • Addressing potential aesthetic criticisms from traditionalists who prefer team-centric designs.

Integrating QR Codes for Enhanced Engagement

OSU is not stopping at logo-based branding; they are keen on leveraging technology to amplify the impact of their NIL initiative. The next phase of this bold experiment involves integrating QR codes onto the helmets and distributing matching codes to local bars and restaurants.

Details of the QR Code Initiative

  • Helmet QR Codes: Each player’s helmet will sport a unique QR code that fans can scan with their smartphones. This will redirect them to the player’s personalized NIL content, including social media profiles, merchandise, and sponsorship deals.
  • Local Business Partnerships: QR codes will also be placed on tables at bars and restaurants around Stillwater, Oklahoma. This aims to create a seamless connection between the local business community and the athletic program.
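The mechanics behind such a scheme are simple to sketch. The key design choice is indirection: each printed QR code encodes a short, stable URL, and a server maps that code to whatever content the player is currently promoting, so codes never need reprinting when deals change. A minimal, purely illustrative example follows (all codes and URLs are hypothetical, not OSU's actual system):

```python
# Hypothetical sketch of the redirect layer behind a helmet QR code.
# The QR code itself only encodes a short URL such as
# https://example.com/r/qb01; this table is what the server consults
# when that URL is hit, so the printed code stays stable while the
# destination changes with each new endorsement deal.

PLAYER_LINKS = {
    # short code -> current NIL landing page (all URLs invented)
    "qb01": "https://example.com/nil/qb01/merch",
    "wr07": "https://example.com/nil/wr07/sponsor-week3",
}

def resolve(short_code: str, default: str = "https://example.com/nil") -> str:
    """Return the redirect target for a scanned code, falling back to a team page."""
    return PLAYER_LINKS.get(short_code, default)
```

In a real deployment the table would live in a database and the redirect would be an HTTP 302, but the principle is the same: the helmet carries a pointer, not the content.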

Benefits of QR Code Integration

  • Increased Fan Interaction: Fans can engage more deeply with their favorite players by easily accessing content and offers through QR scans.
  • Boosting Local Economy: Encouraging local fans and visitors to frequent businesses supporting OSU athletics helps keep revenue within the community.
  • Augmented Revenue Streams: Creates additional opportunities for NIL deals, as businesses directly benefit from increased foot traffic and fan engagement.

Conclusion

OSU’s innovative approach to integrating NIL into football helmets represents a bold step into the future of collegiate athletics. It exemplifies the evolving dynamics of sports marketing, where athletes are increasingly seen as individual brands. While there are challenges to address, this initiative underscores the importance of embracing change and fostering creativity in an ever-competitive landscape.

Whether this will be a fleeting experiment or a long-lasting transformation remains to be seen. For now, OSU is at the forefront of redefining how college athletes can capitalize on their fame and pave the way for a more equitable sharing of revenues generated by their incredible talents and efforts.

Innovation or not, the journey of NIL in sports has only just begun, and Oklahoma State’s helmets might just be the catalyst for the revolution we’ve been waiting for.

Innovation or not?

Image credit: Oklahoma State University Athletics via ArizonaSports.com

This photo provided by Oklahoma State Athletics shows a QR code on an Oklahoma State NCAA college football helmet, Thursday, Aug. 15, 2024, at Boone Pickens Stadium in Stillwater, Okla. (Bruce Waterfield/OSU Athletics via AP)

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

What We Have Learned About Digital Transformation Thus Far


GUEST POST from Geoffrey A. Moore

We are well into our first decade of digital transformation, with both the successes and the scars to show for it, and we can see there is a long way to go. Realistically, there is probably never a finish line, so I think it is time for us to pause and take stock of what we have learned, and how best we can proceed from here. Here are three lessons to take to heart.

Lesson 1: There are three distinct levels of transformation, and operating model transformation is the one that deserves the most attention.

The least disruptive transformation is to the infrastructure model. This should be managed within the Productivity Zone, where to be fair, the disruption will be considerable, but it should not require much in the way of behavior change from the rest of the enterprise. Moving from data centers to cloud computing is a good example, as are enabling mobile applications and remote work centers. The goal here is to make employees more efficient while lowering total cost of IT ownership. These transformations are well underway, and there is little confusion about what next steps to take.

By contrast, the most disruptive transformation is to the business model. Here a company may be monetizing information derived from its operating model, as the SABRE system did for American Airlines, or overlaying a digital service on top of its core offering, as the automotive makers are seeking to do with in-car entertainment. The challenge here is that the economics of the new model have little in common with the core model, which creates repercussions both with internal systems and external ecosystem relationships. Few of these transformations to date can be said to be truly successful, and my view is they are more the exception than the rule.

The place where digital transformation is having its biggest impact is on the operating model. Virtually every sector of the economy is re-engineering its customer-facing processes to take advantage of ubiquitous mobile devices interacting with applications hosted in the cloud. These are making material changes to everyday interactions with customers and partners in the Performance Zone, where the priority is to improve effectiveness first, efficiency second. The challenge is to secure rapid, consistent, widespread adoption of the new systems from every employee who touches them. More than any other factor, this is the one that separates the winners from the losers in the digital transformation game.

Lesson 2: Re-engineer operating models from the outside in, not the inside out.

A major challenge that digital transformation at the operating model level must overcome is the inertial resistance of the existing operating model, especially where it is embedded in human behaviors. Simply put, people don’t like change. (Well, actually, they all want other people to change, just not themselves.) When we take the approach of internal improvement, things go way too slowly and eventually lose momentum altogether.

The winning approach is to focus on an external forcing function. For competition cultures, the battle cry should be: “This new operating model poses an existential threat to our future. Our competitors are eating our lunch. We need to change, and we need to do it now!” For collaboration cultures, the call to action should be: “We are letting our customers down because we are too hard to do business with. They love our offers, but if we don’t modernize our operating model, they are going to take their business elsewhere. Besides, with this new digital model, we can make our offers even more effective. Let’s get going!”

This is where design thinking comes in. Forget the sticky notes and lose the digital whiteboards. This is not about process. It is about walking a mile in the other person’s shoes, be that an end user, a technical buyer, a project sponsor, or an implementation partner, spending time seeing what hoops they have to go through to implement or use your products or simply to do business with you. No matter how good you were in the pre-digital era, there will be a ton of room for improvement, but it has to be focused on their friction issues, not yours. Work backward from their needs and problems, in other words, not forward from your intentions or desires.

Lesson 3: Digital transformations cannot be pushed. They must be pulled.

This is the hardest lesson to learn. Most executive teams have assumed that if they got the right digital transformation leader, gave them the title of Chief Transformation Officer, funded them properly, and ensured that the project was on time, on spec, and on budget, that would do the trick. It makes total sense. It just doesn’t work.

The problem is one endemic to all business process re-engineering. The people whose behavior needs to change—and change radically—are the ones least comfortable with the program. When some outsider shows up with a new system, they can find any number of things wrong with it and use these objections to slow down deployment, redirect it into more familiar ways, and in general, diminish its impact. Mandating adoption can lead to reluctant engagement or even malicious compliance, and the larger the population of people involved, the more likely this is to occur.

So what does work? Transformations that are driven by the organization that has to transform. These start with the executive in charge who must galvanize the team to take up the challenge, to demand the digital transformation, and to insert it into every phase of its deployment. In other words, the transformation has to be pulled, not pushed.

Now, don’t get me wrong. There is still plenty of work on the push side involved, and that will require a strong leader. But at the end of the day, success will depend more on the leader of the consuming organization than that of the delivery team.

That’s what I think. What do you think?

Image Credit: Pexels


How to Avoid AI Project Failures


GUEST POST from Greg Satell

A survey a few years ago by Deloitte of “aggressive adopters” of cognitive technologies found that 76% believe that they will “substantially transform” their companies within the next three years. There probably hasn’t been this much excitement about a new technology since the dotcom boom years in the late 1990s.

The possibilities would seem to justify the hype. AI isn’t just one technology, but a wide array of tools, including a number of different algorithmic approaches, an abundance of new data sources and advancement in hardware. In the future, we will see new computing architectures, like quantum computing and neuromorphic chips, propel capabilities even further.

Still, there remains a large gap between aspiration and reality. Gartner estimated that 85% of big data projects fail. There have also been embarrassing snafus, such as when Dow Jones reported that Google was buying Apple for $9 billion and the bots fell for it, or when Microsoft’s Tay chatbot went berserk on Twitter. Here’s how to transform the potential of AI into real results.

Make Your Purpose Clear

AI does not exist in a vacuum, but in the context of your business model, processes and culture. Just as you wouldn’t hire a human employee without an understanding of how he or she would fit into your organization, you need to think clearly about how an artificial intelligence application will drive actual business results.

“The first question you have to ask is what business outcome you are trying to drive,” Roman Stanek, CEO at GoodData, told me. “All too often, projects start by trying to implement a particular technical approach and not surprisingly, front-line managers and employees don’t find it useful. There’s no real adoption and no ROI.”

While change always has to be driven from the top, implementation is always driven lower down. So it’s important to communicate a sense of purpose clearly. If front-line managers and employees believe that artificial intelligence will help them do their jobs better, they will be much more enthusiastic and effective in making the project successful.

“Those who are able to focus on business outcomes are finding that AI is driving bottom-line results at a rate few had anticipated,” Josh Sutton, CEO of Agorai.ai, told me. He pointed to a McKinsey study from a few years ago that pegs the potential economic value of cognitive tools at between $3.5 trillion and $5.8 trillion as just one indication of the possible impact.

Choose The Tasks You Automate Wisely

While many worry that cognitive technologies will take human jobs, David Autor, an economist at MIT, sees the primary shift as one from routine to nonroutine work. In other words, artificial intelligence is quickly automating routine cognitive processes, much as industrial-era machines automated physical labor.

To understand how this can work, just go to an Apple store. Apple is a company that clearly understands how to automate processes, yet the first thing you see when you walk into one of its stores is a number of employees waiting to help you. That’s because it has chosen to automate background tasks, not customer interactions.

However, AI can greatly expand the effectiveness of human employees. For example, one study cited by a White House report during the Obama Administration found that while machines had a 7.5% error rate in reading radiology images and humans had a 3.5% error rate, when humans combined their work with machines the error rate dropped to 0.5%.

Perhaps most importantly, this approach can actually improve morale. Factory workers actively collaborate with robots they program themselves to do low-level tasks. In some cases, soldiers build such strong ties with robots that do dangerous jobs that they hold funerals for them when they “die.”

Data Is Not Just An Asset, It Can Also Be A Liability

For a long time, more data was considered better. Firms would scoop up as much of it as they could and then feed it into sophisticated algorithms to create predictive models with a high degree of accuracy. Yet it has become clear that this is not a great approach.

As Cathy O’Neil explains in Weapons of Math Destruction, we often don’t understand the data we feed into our systems and data bias is becoming a massive problem. A related problem is that of over-fitting. It may sound impressive to have a model that is 99% accurate, but if it is not robust to changing conditions, you might be better off with one that is 70% accurate and simpler.
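The over-fitting trade-off can be made concrete with a toy sketch (entirely illustrative, with invented numbers): a “model” that simply memorizes its training data scores perfectly on data it has seen and fails badly on anything new, while a cruder model that ignores the details generalizes better.

```python
import statistics

# Toy illustration of over-fitting: predict a numeric label from an input.
train = [(1, 10.0), (2, 12.0), (3, 11.0), (4, 13.0)]
test = [(5, 12.0), (6, 13.0)]  # unseen inputs

def memorizer(x, table=dict(train)):
    """'Overfit' model: perfect recall of training points, clueless elsewhere."""
    return table.get(x, 0.0)  # wildly wrong on inputs it never saw

def mean_model(x, mu=statistics.mean(y for _, y in train)):
    """Crude but robust model: always predict the training average."""
    return mu

def error(model, data):
    """Mean absolute error of a model over a dataset."""
    return sum(abs(model(x) - y) for x, y in data) / len(data)

# The memorizer looks flawless in-sample but collapses out-of-sample;
# the simpler model is worse in-sample yet far better on new data.
```

This is the 99%-versus-70% point in miniature: in-sample accuracy says nothing about robustness to conditions the model has never seen.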

Finally, with the implementation of GDPR in Europe and the likelihood that similar legislation will be adopted elsewhere, data is becoming a liability as well as an asset. So you should think through which data sources you are using and create models that humans can understand and verify. “Black boxes” serve no one.

Shift Humans To Higher Value Tasks

One often overlooked fact about automation is that once you automate a task, it becomes largely commoditized and value shifts somewhere else. So if you are merely looking to use cognitive technologies to replace human labor and cut costs, you are most probably on the wrong track.

One surprising example of this principle comes from the highly technical field of materials science. A year ago, I was speaking to Jim Warren of the Materials Genome Initiative about the exciting possibility of applying machine learning algorithms to materials research. More recently, he told me that this approach has increasingly become a focus of materials research.

That’s an extraordinary shift in one year. So should we be expecting to see a lot of materials scientists at the unemployment office? Hardly. In fact, because much of the grunt work of research is being outsourced to algorithms, the scientists themselves are able to collaborate more effectively. As George Crabtree, Director of the Joint Center for Energy Storage Research, which has been a pioneer in automating materials research put it to me, “We used to advance at the speed of publication. Now we advance at the speed of the next coffee break.”

And that is the key to understanding how to implement cognitive technologies effectively. Robots are not taking our jobs, but rather taking over tasks. That means that we will increasingly see a shift in value from cognitive skills to social skills. The future of artificial intelligence, it seems, is all too human.

— Article courtesy of the Digital Tonto blog and previously appeared on Harvard Business Review
— Image credits: Pexels


How to Pursue a Grand Innovation Challenge


GUEST POST from Greg Satell

All too often, innovation is confused with agility. We’re told to “adapt or die” and encouraged to “move fast and break things.” But the most important innovations take time. Einstein spent ten years on special relativity and then another ten on general relativity. To solve tough, fundamental problems, we have to be able to commit for the long haul.

As John F. Kennedy put it in his moonshot speech, “We choose to go to the moon in this decade and do the other things, not because they are easy, but because they are hard, because that goal will serve to organize and measure the best of our energies and skills.” Every organization should pursue grand challenges for the same reason.

Make no mistake. Innovation needs exploration. If you don’t explore, you won’t discover. If you don’t discover, you won’t invent, and if you don’t invent, you will be disrupted. It’s just a matter of time. Unfortunately, exploration can’t be optimized or iterated. That’s why grand challenges don’t favor the quick and agile, but the patient and the determined.

1. Don’t Bet The Company

Most grand challenges aren’t like the original moonshot, which was, in large part, the result of the space race with the Soviets that began with the Sputnik launch in 1957. That was a no-holds-barred effort that consumed the efforts of the nation, because it was widely seen as a fundamental national security issue that represented a clear and present danger.

For most organizations, those types of “bet-the-company” efforts are to be avoided. You don’t want to bet your company if you can avoid it, for the simple reason that if you lose you are unlikely to survive. Most successful grand challenges don’t involve a material investment. They are designed to be sustainable.

“Grand challenges are not about the amount of money you throw at the problem,” Bernard Meyerson, IBM’s Chief Innovation Officer, told me. “To run a successful grand challenge program, failure should not be a material risk to the company, but success will have a monumental impact. That’s what makes grand challenges an asymmetric opportunity.”

Take, for example, Google’s X division. While the company doesn’t release its budget, the division appeared to cost the company about $3.5 billion in 2018, a small fraction of its $23 billion in annual profits at the time. At the same time, just one project, Waymo, may be worth $70 billion (2018). In a similar vein, the $3.8 billion invested in the Human Genome Project had generated nearly $800 billion of economic activity as of 2011.

So the first rule of grand challenges is not to bet the company. They are, in fact, what you do to avoid having to bet the company later on.

2. Identify A Fundamental Problem

Every innovation starts out with a specific problem to be solved. The iPod, for example, was Steve Jobs’s way of solving the problem of having “a thousand songs in my pocket.” More generally, technology companies strive to deliver better performance and user experience, drug companies aim to cure disease, and retail companies look for better ways to drive transactions. Typically, firms evaluate investments based on metrics rooted in past assumptions.

Grand challenges are different because they are focused on solving fundamental problems that will change assumptions about what’s possible. For example, IBM’s Jeopardy Grand Challenge had no clear business application, but transformed artificial intelligence from an obscure field into a major business. Later, Google’s AlphaGo achieved a similar feat with self-learning. Both have led to business opportunities that were not clear at the time.

Grand challenges are not just for technology companies either. MD Anderson Cancer Center has set up a series of Moonshots, each of which is designed to have far reaching effects. 100Kin10, an education nonprofit, has identified a set of grand challenges it has tasked its network with solving.

Talia Milgrom-Elcott, Executive Director of 100Kin10, told me she uses the 5 Whys as a technique to identify grand challenges. Start with a common problem, keep asking why it keeps occurring, and you will eventually get to the root problem. By focusing your efforts on solving that, you can make a fundamental impact with wide-ranging consequences.
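As a sketch, the mechanics of that drill-down can be expressed in a few lines. The causal map here is invented for illustration; in practice the answers come from people close to the problem, not a lookup table.

```python
# Hypothetical "5 Whys" drill-down: follow the chain of causes from a
# surface symptom toward a candidate root problem.
causes = {
    # symptom -> the answer to "why does this keep happening?" (all invented)
    "students underperform in math": "too few qualified STEM teachers",
    "too few qualified STEM teachers": "high turnover in STEM teaching roles",
    "high turnover in STEM teaching roles": "teachers feel isolated and unsupported",
}

def five_whys(problem, causal_map, depth=5):
    """Ask 'why?' up to `depth` times; the last entry is the candidate root cause."""
    chain = [problem]
    for _ in range(depth):
        cause = causal_map.get(chain[-1])
        if cause is None:  # no deeper answer available
            break
        chain.append(cause)
    return chain
```

The grand challenge is then framed around the final entry in the chain rather than the symptom you started with.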

3. Commit To A Long Term Effort

Grand challenges aren’t like normal problems. They don’t conform to timelines and can’t effectively be quantified. You can’t justify a grand challenge on the basis of return on investment, because fundamental problems are too pervasive and ingrained to surrender themselves to any conventional form of analysis.

Consider The Cancer Genome Atlas, which eventually sequenced and published over 10,000 tumor genomes. When Jean Claude Zenklusen first came up with the idea in 2005, it was highly controversial because, although it wasn’t particularly expensive, it would still take resources away from more conventional research.

Today, however, the project is considered to be a runaway success, which has transformed the field, greatly expanding knowledge and substantially lowering costs to perform genetic research. It has also influenced efforts in other fields, such as the Materials Genome Initiative. None of this would have been possible without commitment to a long-term effort.

And that’s what makes grand challenges so different. They are not business as usual and not immediately relevant to present concerns. They are explorations that expand conventional boundaries, so cannot be understood within them.

An Insurance Policy Against A Future You Can’t Yet See

Typically, we analyze a business by extrapolating current trends and making adjustments for things that we think will be different. So, for example, if we expect the market to pick up, we may invest in more capacity to profit from greater demand. On the other hand, if we expect a softer market, we’d probably start trimming costs to preserve margins.

The problem with this type of analysis is that the future tends to surprise us. Technology changes, customer preferences shift, and competitors make unexpected moves. Nobody, no matter how diligent or smart, gets every call right. That’s why every business model fails sooner or later; it’s just a matter of time.

It’s also what makes pursuing grand challenges so important. They are basically an insurance policy against a future we can’t yet see. By investing sustainably in solving fundamental problems, we can create new businesses to replace the ones that will inevitably falter. Google doesn’t invest in self-driving cars to improve its search business; it invests because it knows that the profits from search won’t last forever.

The problem is that there is a fundamental tradeoff between innovation and optimization, so few organizations have the discipline to invest in exploration today for an uncertain payoff tomorrow. That’s why so few businesses last.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Unsplash


Rise of the Atomic Consultant

Or the Making of a Superhero


by Braden Kelley

In today’s rapidly evolving world, the consulting landscape is undergoing a profound transformation. I was recently asked a series of questions to capture my thoughts on how the consulting industry and its employees will need to evolve to thrive in the coming years – including my thoughts on the creation of “superhero” consultants. The emergence of the “superhero” consultant is not merely a result of advanced tools and technologies, but rather the cultivation of essential skills and capabilities. As we navigate through this era of unprecedented change, it is imperative for consulting firms to foster a culture of flexibility, growth, and continuous learning. The future of consulting lies in the hands of those who can seamlessly integrate human expertise with artificial intelligence (AI), build meaningful connections in a hybrid work environment, and facilitate diverse perspectives to drive innovation. This article delves into the key attributes that will define the next generation of consultants and explores the obstacles that must be overcome to unlock their full potential.

Here are the questions:

1) What are the tools and technologies that a consultant should use to become a “superhero” consultant? Why are these specific tools/technologies important? How should these tools be used most effectively?

This is the wrong question. It is not tools and technologies that will enable “superhero” consultants, but instead the development of the right skills and capabilities. The future of consulting will require consulting firms to hire and develop employees who are:

  1. Flexible and growth minded – the world is changing at an accelerating rate and consultants more than ever before will need to be lifelong learners, comfortable with knowledge gaps and eager to become an expert in something on behalf of the client with each new project
  2. AI Taskmasters – the future of work is man and machine working together and consultants skilled at breaking down work to the right size (atomizing work) and assigning it to both human and AI workers
  3. Socially Savvy – remote and hybrid work is here to stay and even clients have soured on having consultants travel in every week, so “superhero” consultants must excel at building connections and relationships via internal, external and client social tools to both distribute/execute work and to source new work
  4. Skilled facilitators – as data and AI-generated work products become plentiful, sense-making rises in importance along with a diversity of perspectives – often in workshops facilitated by consultants
  5. Open Sourced – gone are the days of rinse and repeat projects powered by proprietary frameworks and IP, instead “superhero” consultants will excel at identifying the right tools and frameworks to bring to bear – from FutureHacking™ to Design Thinking to the Change Planning Toolkit™

The capabilities of tools and technologies will grow over time and new ones will emerge. The best consultants will constantly be scanning the horizon for new tools, technologies, and capabilities and leverage the above skills and capabilities to unlearn and then re-learn the best ways to create value for their clients.
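The “atomizing work” idea in point 2 can be sketched as a simple router. Task names, the routineness scale, and the threshold below are all invented for illustration: break an engagement into small tasks, score each by how routine it is, and assign it to an AI queue or a human queue accordingly.

```python
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    routine: float  # 0.0 = judgment-heavy, 1.0 = fully routine (invented scale)

def route(tasks, threshold=0.7):
    """Split atomized tasks between AI and human workers by how routine they are."""
    ai, human = [], []
    for t in tasks:
        (ai if t.routine >= threshold else human).append(t.name)
    return {"ai": ai, "human": human}

# A hypothetical engagement broken into atomized tasks.
engagement = [
    Task("summarize interview transcripts", 0.9),
    Task("draft slide skeletons", 0.8),
    Task("facilitate stakeholder workshop", 0.1),
    Task("recommend go/no-go decision", 0.2),
]
```

The real skill, of course, lies in the decomposition and the scoring, not the routing itself; the sketch only shows where the human/machine boundary gets drawn once that judgment is made.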

2) What are the biggest obstacles that prevent consultants from being able to access or learn the steps needed to become a “superhero” consultant? What should be done to remove these obstacles to help make this transformation easier for more consultants?

The biggest obstacles that prevent consultants from becoming “superheroes” are internal, both to the consultants themselves and to the firms they work for. Companies will need to examine their own policies, procedures, and training programs to right-size them for this emerging new reality. Firms will need to allow consultants to pick the right frameworks, tools, and technologies for addressing client challenges, instead of limiting them to those owned by the firm. Consultants will need to shift their mindset from being experts in a particular tool or technology toward being masters of the above skills and capabilities, and experts in achieving key client outcomes. Firms will need to invest in the training and the technology necessary to provide purpose-built AIs that accelerate consultants’ ability to solve client challenges more efficiently and effectively. Firms will also need to update their tools and methods for capturing and sharing knowledge to leverage AI capabilities at the same time.

3) What specific areas of consulting (e.g., IT, finance, marketing, etc.) have the greatest potential to produce this new brand of “superhero” consultants? Why?

This new brand of “superhero” consultants will excel across a number of different disciplines. They will find more efficient and effective ways to execute work traditionally performed by consultants (technology implementations, analytical work, etc.), and, as they help clients transform the ways they perform different types of work, they will also help clients identify new activities made possible by the transformation and the new technologies and ways of working it brings. The reason is their focus on building skills and capabilities into which tools and technologies plug in, somewhat interchangeably.

Conclusion

The journey to becoming a “superhero” consultant is not without its challenges, but the rewards are immense. By embracing a mindset of lifelong learning and adaptability, consultants can harness the power of emerging technologies to deliver unparalleled value to their clients. The future of consulting is not about rigid frameworks or proprietary tools, but about the ability to unlearn and relearn, to innovate and collaborate, and to drive meaningful change. As we look ahead, it is clear that the most successful consultants will be those who can navigate the complexities of a dynamic world with agility and foresight. Let us continue to push the boundaries of what is possible and strive to create a brighter future for the consulting industry. Keep innovating!

p.s. Be sure to follow both my personal account and the Human-Centered Change and Innovation community on LinkedIn.

Image credit: Bing Copilot (Microsoft Designer)


Artificial Intelligence is a No-Brainer

Why innovation management needs co-intelligence


GUEST POST from John Bessant

Long fuse, big bang. A great descriptor which Andrew Hargadon uses to describe the way some major innovations arrive and have impact. For a long time they exist but we hardly notice them: they are confined to limited application, there are constraints on what the technology can do, and so on. Then suddenly, almost as if by magic, they move center stage and seem to have impact everywhere we look.

Which is pretty much the story we now face with the wonderful world of AI. While there is plenty of debate about labels — artificial intelligence, machine learning, different models and approaches — the result is the same. Everywhere we look there is AI — and it’s already having an impact.

More than that; the pace of innovation within the world of AI is breath-taking, even by today’s rapid product cycle standards. We’ve become used to seeing major shifts in things like mobile phones, change happening on a cycle measured in months. But AI announcements of a breakthrough nature seem to happen with weekly frequency.

That’s also reflected in the extent of use — from the ‘early days’ (only last year!) of hearing about ChatGPT and other models we’ve now reached a situation where estimates suggest that millions of people are experimenting with them. ChatGPT has grown from a handful of people to over 200 million users in less than a year; it added its first million users within five days of launch! Similar figures show massive and rapid take-up of competing products like Anthropic’s Claude and Google’s Gemini. It’s pretty clear that there’s a high-paced ‘arms race’ going on and it’s drawing in all the big players.

This rapid rate of adoption is being led by an even faster proliferation on the supply side, with many new players entering the market, especially in niche fields. As with the apps market there’s a huge number of players jumping on the bandwagon, and significant growth in the open-source availability of models. And many models now allow users to create their own custom versions — ‘mini-GPTs’ and ‘co-pilots’ — which they can deploy for highly specific needs.

Not surprisingly, estimates suggest that the growth potential in the market for AI technologies is vast: around 200 billion U.S. dollars in 2023, expected to grow to over 1.8 trillion U.S. dollars by 2030.

Growth in Artificial Intelligence

There’s another important aspect to this growth. As Ethan Mollick suggests in his excellent book ‘Co-Intelligence’, everything that we see AI doing today is the product of a far-from-perfect version of the technology; in a very short time, given the rate of growth so far, we can expect much more power, integration and multi-modality.

The all-singing, all-dancing, do-pretty-much-anything version of AI we can imagine isn’t far off. Speculation about when AGI — artificial general intelligence — will arrive is still just that — speculation — but the direction of travel is clear.

Not that the impact is seen as entirely positive. Whilst there have been impressive breakthroughs, using AI to help understand and innovate in fields as diverse as healthcare, distribution and education, these are matched by growing concern about, for example, privacy and data security, deep-fake abuse and significant employment effects.

With its demonstrable potential for undertaking a wide range of tasks, AI certainly poses a threat to the quality and quantity of a wide range of jobs — and at the limit could eliminate them entirely. And where earlier generations of technological automation impacted simple manual operations or basic tasks, AI has the capacity to undertake many complex operations — often doing so faster and more effectively than humans.

AI models like ChatGPT can now routinely pass difficult exams for law or medical school, they can interpret complex data sets and spot patterns better than their human counterparts, and they can quickly combine and analyze complex data to arrive at decisions which may often be of better quality than those made by even experienced practitioners. Not surprisingly, the policy discussion around this potential impact has proliferated at a similarly fast rate, echoing growing public concern about the darker side of AI.

But is it inevitably going to be a case of replacement, with human beings shunted to the sidelines? No one is sure, and it is still early days. We’ve had technological revolutions before — think back fifty years to when we first felt the early shock waves of what was to become the ‘microelectronics revolution’. Newspaper headlines and media programs with provocative titles like ‘Now the chips are down’ prompted frenzied discussion and policy planning for a future world staffed by robots and automated to the point where most activity would be undertaken by automated systems, overseen by one man and a dog. The role of the dog being to act as security guard, the role of the man being confined to feeding the dog.

Automation Man and Dog

This didn’t materialize. As many commentators pointed out at the time, and as history has shown, there were shifts and job changes, but there was also compensating creation of new roles and tasks for which new skills were needed. Change, yes — but not always in the negative direction, and with growing potential for improving the content and quality of remaining and new jobs.

So if history is any guide then there are some grounds for optimism. Certainly we should be exploring and anticipating and particularly trying to match skills and capacity building to likely future needs.

Not least in the area of innovation management. What impact is AI having — and what might the future hold? It’s certainly implicated in a major shift right across the innovation space in terms of its application. If we take a simple ‘innovation compass’ to map these developments we can find plenty of examples:

Exploring Innovation Space

Innovation in terms of what we offer the world — our products and services. Here AI already has a strong presence in everything from toys, through intelligent and interactive services on our phones, to advanced weapon systems.

And it’s the same story if we look at process innovation — changes in the ways we create and deliver whatever it is we offer. AI is embedded in automated and self-optimizing control systems for a huge range of tasks from mining, through manufacturing and out to service delivery.

Position innovation is another dimension where we innovate in opening up new or under-served markets, and changing the stories we tell to existing ones. AI has been a key enabler here, helping spot emerging trends, providing detailed market analysis and underpinning so many of the platform businesses which effectively handle the connection between multi-sided markets. Think Amazon, Uber, Alibaba or AirBnB and imagine them without the support of AI.

And innovation is possible through rethinking the whole approach to what we do, coming up with new business models. Rethinking the underlying value and how it might be delivered — think of Spotify, Netflix and many others changing the way we consume and enjoy our entertainment. Once again AI steps forward as a key enabler.

AI is already a 360-degree solution looking for problems to attach itself to. Importantly, this isn’t just in the commercial world; the power of AI is also being harnessed to enable social innovation in many different ways.

But perhaps the real question is not about AI-enabled innovations but about how it affects innovators — and the organizations employing them. By now we know that innovation isn’t some magical force that strikes blindly in a light-bulb moment. It’s a process which can be organized and managed so that we are able to repeat the trick. And after over 100 years of research and documented hard-won experience we know the kinds of things we need to put in place — how to manage innovation. It’s reached the point where we can codify it into an international standard — ISO 56001 — and use this as a template to check the ways in which we build and operate our innovation management systems.

So how will AI affect this — and, more to the point, how is it already doing so? Let’s take our helicopter and look down on where and how AI is playing a role in the key areas of innovation management systems.

Typically the ‘front end’ of innovation involves various kinds of search activity, picking up strong and weak signals about needs and opportunities for change. And this kind of exploration and forecasting is something which AI has already shown itself to be very good at — whether in the search for new protein forms or the generation of ideas for consumer products.

Frank Piller’s research team published an excellent piece last year describing their exploration of this aspect of innovation. They looked at the potential which AI offered and tested their predictions by tasking ChatGPT with a number of prompts based on the needs of a fictitious outdoor activities company. They had it monitoring and picking up on trends, scraping online communities for early warning signals about new consumer themes and, crucially, actually doing idea generation to come up with new product concepts. Their results mimic many other studies which suggest that AI is very good at this — in fact, as Mollick reports, it often does the job better than humans.

Of course finding opportunities is only the start of the innovation process; a key next stage is some kind of strategic selection. Out of all the possibilities of what we could do, what are we going to do and why? Limited resources mean we have to make choices — and the evidence is that AI is pretty helpful here too. It can explore and compare alternatives, make better bets and build more viable business models to take emerging value propositions forward. (At least in the test case where it competed against MBA students…!)

Innovation Process John Bessant

And then we are in the world of implementation, the long and winding road to converting our value proposition into something which will actually work and be wanted. Today’s agile innovation involves a cycle of testing, trial and error learning, gradually pivoting and homing in on what works and building from that. And once again AI is good at this — not least because it’s at the heart of how it does what it does. There’s a clue in the label — machine learning is all about deploying different learning and improvement strategies. AI can carry out fast experiments and focus in, it can simulate markets and bring to bear many of the adoption influences as probabilistic variables which it can work with.

Of course launching a successful version of a value proposition converted to a viable solution is still only half the innovation journey. To have impact we need to scale — but here again AI is likely to change the game. Much of the scaling journey involves understanding and configuring your solution to match the high variability across populations and accelerate diffusion. We know a lot about what influences this (not least thanks to the extensive work of Everett Rogers), and AI has particular capabilities in making sense of the preferences and predilections of populations through studying big datasets. Its record in persuasion in fields like election campaigning suggests it has the capacity to enhance our ability to influence the innovation adoption decision process.
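The adoption dynamics Rogers described are often formalized as the Bass diffusion model, and a few lines of code make the mechanics concrete. This is an illustrative sketch only (the parameter values are invented, not drawn from any study mentioned here): early adoption is driven by an "innovation" coefficient p, later adoption by an "imitation" coefficient q that grows with the installed base.

```python
# Minimal Bass diffusion sketch: new adopters per period.
# p = coefficient of innovation, q = coefficient of imitation, m = market size.
def bass_adoption(p, q, m, periods):
    adopters = []       # new adopters in each period
    cumulative = 0.0
    for _ in range(periods):
        # Adoption pressure rises as more of the population has already adopted.
        new = (p + q * cumulative / m) * (m - cumulative)
        cumulative += new
        adopters.append(new)
    return adopters

curve = bass_adoption(p=0.03, q=0.38, m=100_000, periods=20)
peak = max(range(len(curve)), key=curve.__getitem__)
print(f"Adoption peaks in period {peak + 1}")
```

The familiar S-shaped diffusion curve falls out of just these two coefficients, which is why the model is a common starting point for the kind of population-level scaling analysis described above.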

Scaling also involves complementary assets — the ‘who else?’ and ‘what else?’ which we need to have impact at scale. We need to assemble value networks, ecosystems of co-operating stakeholders — but to do this we need to be able to make connections. Specifically finding potential partners, forming relationships and getting the whole system to perform with emergent properties, where the whole is greater than the sum of the parts.

And here too AI has a growing track record in enabling recombinant innovation, cross-linking, connecting and making sense of patterns, even if we humans can’t always see them.

So far, so disturbing — at least if you are a practicing innovation manager looking over your shoulder at the AI competition rapidly catching up. But what about the bigger picture, the idea of developing and executing an innovation strategy? Here our concern is with the long term: managing the process of accumulating competencies and capabilities to create lasting competitiveness in volatile and unpredictable markets.

It involves being able to imagine and explore different options and make decisions based on the best use of resources and the likely fit with a future world. Which is, once again, the kind of thing which AI has shown itself to be good at. It’s moved a long way from playing chess and winning by brute calculating force. Now it can beat world champions at complex games of strategy like Go and win poker tournaments, bluffing with the best of them to sweep the pot.

Artificial Intelligence Poker Player

So what are we left with? In many ways it takes us right back to basics. We’ve survived as a species on the back of our imaginations — we’re not big or fast, or able to fly, but we are able to think. And our creativity has helped us devise and share tools and techniques, to innovate our way out of trouble. Importantly we’ve learned to do this collectively — shared creativity is a key part of the puzzle.

We’ve seen this throughout history; the recent response to the Covid-19 pandemic provides yet another illustration. In the face of crisis we can work together and innovate radically. It’s something we see in the humanitarian innovation world and in many other crisis contexts. Innovation benefits from more minds on the job.

So one way forward is not to wring our hands and say that the game is over and we should step back and let the AI take over. Rather it points towards us finding ways of working with it — as Mollick’s book title suggests, learning to treat it as a ‘co-intelligence’. Different, certainly, but often in complementary ways. Diversity has always mattered in innovation teams — so maybe by recruiting AI to our team we amplify that effect. There’s enough to do in meeting the challenge of managing innovation against a background of uncertainty; it makes sense to take advantage of all the help we can get.

AI may seem to point to a direction in which our role becomes superfluous — the ‘no-brain needed’ option. But we’re also seeing real possibilities for it to become an effective partner in the process.

And subscribe to my (free) newsletter here

You can find my podcast here and my videos here

And if you’d like to learn with me take a look at my online course here

Image credits: Dall-E via Microsoft CoPilot, John Bessant


AI Can Help Attract, Retain and Grow Customer Relationships

GUEST POST from Shep Hyken

How do you know what your customers want if they don’t tell you? It’s more than sending surveys and interpreting data. Joe Tyrrell is the CEO of Medallia, a company that helps its customers tailor experiences through “intelligent personalization” and automation. I had a chance to interview him on Amazing Business Radio and he shared how smart companies are using AI to build and retain customer relationships. Below are some of his comments followed by my commentary:

  • The generative AI momentum is so widespread that 85% of executives say the technology will be interacting directly with customers in the next two years. AI has been around for longer than most people realize. When a customer is on a website that makes suggestions, when they interact with a chatbot or get the best answers to frequently asked questions, they are interacting with AI-infused technology, whether they know it or not.
  • While most executives want to use AI, they don’t know how they want to use it, the value it will bring and the problems it will solve. In other words, they know they want to use it, but don’t know how (yet). Tyrrell says, “Most organizations don’t know how they are going to use AI responsibly and ethically, and how they will use it in a way that doesn’t introduce unintended consequences, and even worse, unintended bias.” There needs to be quality control and oversight to ensure that AI is meeting the goals and intentions of the company or brand.
  • Generative AI is different from traditional AI. According to Tyrrell, the nature of generative AI is to, “Give me something in real time while I’m interacting with it.” In other words, it’s not just finding answers. It’s communicating with me, almost human-to-human. When you ask it to clarify a point, it knows exactly how to respond. This is quite different from a traditional search bar on a website—or even a Google search.
  • AI’s capability to personalize the customer experience will be the focus of the next two years. Based on the comment about how AI technology currently interacts with customers, I asked Tyrrell to be more specific about how AI will be used. His answer was focused on personalization. The data we extract from multiple sources will allow for personalization like never before. According to Tyrrell, 82% of consumers say a personalized experience will influence which brand they end up purchasing from in at least half of all shopping situations. The question isn’t whether a company should personalize the customer experience. It is what happens if they don’t.
  • Personalization isn’t about being seen as a consumer, but as a person. That’s the goal of personalization. Medallia’s North Star, which guides all its decisions and investments, is its mission to personalize every customer experience. What makes this a challenge is the word every. If customers experience this one time but the next time the brand acts as if they don’t recognize them, all the work from the previous visit along with the credibility built with the customer is eroded.
  • The next frontier of AI is interpreting social feedback. Tyrrell is excited about Medallia’s future focus. “Surveys may validate information,” says Tyrrell, “but it is often what’s not said that can be just as important, if not even more so.” Tyrrell talked about Medallia’s capability to look everywhere, outside of surveys and social media comments, reviews and ratings, where customers traditionally express themselves. There is behavioral feedback, which Tyrrell refers to as social feedback, not to be confused with social media feedback. Technology can track customer behavior on a website. What pages do they spend the most time on? How do they use the mouse to navigate the page? Tyrrell says, “Wherever people are expressing themselves, we capture the information, aggregate it, translate it, interpret it, correlate it and then deliver insights back to our customers.” This isn’t about communicating with customers about customer support issues. It’s mining data to understand customers and make products and experiences better.
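To make the capture-aggregate-interpret idea in the last point concrete, here is a deliberately tiny sketch. Everything in it is invented for illustration (the events, the page names, the 120-second threshold); it is not Medallia's pipeline, just the general shape of turning behavioral signals into an insight:

```python
from collections import defaultdict

# Hypothetical behavioral-feedback events: (page, seconds spent on page).
events = [
    ("/pricing", 45), ("/pricing", 60), ("/help/returns", 180),
    ("/help/returns", 150), ("/checkout", 20), ("/checkout", 15),
]

# Aggregate: total seconds and visit count per page.
totals = defaultdict(lambda: [0, 0])   # page -> [total_seconds, visits]
for page, seconds in events:
    totals[page][0] += seconds
    totals[page][1] += 1

avg_dwell = {page: total / visits for page, (total, visits) in totals.items()}

# "Insight": unusually long dwell on a help page may signal a confusing policy.
flagged = [page for page, avg in avg_dwell.items() if avg > 120]
print(flagged)   # ['/help/returns']
```

The real systems Tyrrell describes work across far richer signals, but the pattern is the same: capture events, aggregate them, and surface the anomalies worth a human's attention.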

Tyrrell’s insights emphasize the opportunities for AI to support the relationship a company or brand has with its customers. The future of customer engagement will be about an experience that creates customer connection. Even though technology is driving the experience, customers appreciate being known and recognized when they return. Tyrrell and I joked about the theme song from the TV sitcom Cheers, which debuted in 1982 and lasted 11 seasons. But it really isn’t a joke at all. It’s what customers want, and it’s so simple. As the song title suggests, customers want to go to a place Where Everybody Knows Your Name.

Image Credits: Unsplash


Creating Effective Digital Teams

GUEST POST from Howard Tiersky

Creating digital products is a multi-disciplinary process, blending creativity, engineering, strategy, customer support, legal regulations and more. A major challenge for large enterprises and global brands undergoing a digital transformation is how to structure their teams. Specifically, they need to answer the following three questions:

  1. What’s the optimal way to organize the necessary roles and responsibilities?
  2. Which part of the organization should own each capability?
  3. How do we get everyone working together?

The optimal structure for digital teams varies across different organizations. At FROM, we use a base framework that identifies fifteen key roles or competencies that are part of creating and operating most digital properties. Those roles are divided into three conceptual teams: the Digital Business Team, the Digital Technology Team, and the Extended Business Team.

The Digital Business Team

  1. Digital Business Vision Owner: The Business Vision Owner defines the key business measures and objectives for the digital property, including target market segments and their objectives. This “visioneer” makes final decisions on product direction.
  2. Product Management: Product Management owns the product on a day-to-day basis and liaises with other areas to make sure the digital value proposition is realized. They’re responsible for commissioning and reviewing customer research, developing and maintaining the product roadmap in line with the business vision, and prioritizing the backlog of changes and improvements.
  3. Program Management: Distinct from the Product Manager, the Program Manager is responsible for owning the long-term plan to achieve the product roadmap, including budgets and resource allocations, and for maintaining the release schedule.
  4. User Interface/User Experience: UI/UX is responsible for the overall look and feel of the digital product. They develop and maintain UI standards to be used as the product is developed, are involved in user testing, and QA new releases.
  5. Content Development: Content Development creates the non-campaign, non-marketing editorial content for the site, including articles, instructions, and FAQ or help content. Their job is to create content that’s easy to understand and consistent with the brand and voice of the product or site.

The Digital Technology Team

  1. Front End Development: Front End Development selects frameworks and defines front-end coding standards for any technologies that will be used. They’re also responsible for writing code that will execute in the browser, such as HTML, HTML5, JavaScript, and mobile code (e.g., Objective-C). Front End Development drives requirements for back-end development teams to ensure the full user experience can be implemented.
  2. Back End Development: Back End Development manages core enterprise systems, including inventory, financial, and CRM. They’re responsible for exposing, as web services, the capabilities that are needed for front-end development. They’re responsible for developing and enforcing standards to protect the integrity of those enterprise systems, as well as reviewing requests for and implementing new capabilities.
  3. Data: Data develops and maintains enterprise and digital specific data models, managing data, and creating and maintaining plans for data management and warehousing. They monitor the health of databases, expose services for data access, and manage data architecture.
  4. Infrastructure: Infrastructure maintains the physical hardware used for applications and data. They maintain disaster and business continuity programs and monitor the scalability and reliability of the physical infrastructure. They also monitor and proactively manage the security of the infrastructure environment.
  5. Quality Assurance: Quality Assurance creates and maintains QA standards for code in production, develops automated and manual test scripts, and executes any integration, browser, or performance testing scenarios. They also monitor site metrics to identify problems proactively. (It should be noted that, though you want dedicated QA professionals on your team, QA is everyone’s responsibility!)

The Extended Business Team

  1. Marketing: Marketing is responsible for some key digital operations. They develop offers and campaigns to drive traffic. They manage email lists and execution and manage and maintain the CRM system.
  2. Product and Pricing: Product and Pricing responsibility can vary, depending on industry and type of digital property. When appropriate, they develop, license or merchandise anything sold on the site. They set pricing and drive requirements for aligning digital features with any new products, based on those products’ parameters.
  3. Operations: Operations is responsible for fulfillment of the value proposition. For commerce sites, for example, this includes picking, packing and shipping orders. For something like a digital video aggregation site, responsibilities include finding, vetting and uploading new video content.
  4. Business Development: Business Development is focused on creating partnerships that increase traffic and sales, or find new streams of revenue.
  5. Customer Support: Customer support is responsible for maintaining knowledge of digital platforms, policies, and known issues and solutions. They assist customers with problems and questions and track customer interactions to report on trends and satisfaction levels.

How these teams and the roles within them fit together varies from company to company. However, it’s good practice to review this model to see, first, if you have these key roles represented in your organization. Then, make sure to create well-defined responsibilities and processes, and finally, look at how they function together, to see if they’re organized in the most effective manner. If your Digital Business, Digital Technology, and Extended Business teams are in sync, all your projects will benefit.

This article originally appeared on the Howard Tiersky blog
Image Credits: Pixabay


Is Disruption About to Claim a New Victim?

Kodak. Blockbuster. Google?

GUEST POST from Robyn Bolton

You know the stories.  Kodak developed a digital camera in the 1970s, but its images weren’t as good as film images, so it ended the project.  Decades later, that decision ended Kodak.  Blockbuster was given the chance to buy Netflix but declined due to its paltry library of titles (and the absence of late fees).  A few years later, that decision led to Blockbuster’s decline and demise.  Now, in the age of AI, disruption may be about to claim another victim – Google.

A very brief history of Google’s AI efforts

In 2017, Google Research invented the Transformer, a neural network architecture that could be trained to read sentences and paragraphs, pay attention to how the words relate to each other, and predict the words that would come next.
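The "pay attention" step at the heart of that architecture can be sketched in a few lines. This is a toy version of scaled dot-product attention, purely illustrative and nothing like Google's production code: each token scores its relevance to every other token, then its output is a weighted mix of all of them.

```python
import math

def softmax(xs):
    """Turn raw scores into weights that sum to 1."""
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    total = sum(exps)
    return [e / total for e in exps]

def attention(queries, keys, values):
    """Toy scaled dot-product attention over lists of plain-Python vectors."""
    d = len(keys[0])
    out = []
    for q in queries:
        # Score each key against the query, scaled by sqrt(dimension).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        weights = softmax(scores)
        # Output = weighted mix of the value vectors.
        out.append([sum(w * v[i] for w, v in zip(weights, values))
                    for i in range(len(values[0]))])
    return out

# Three 2-d token vectors attending to each other (self-attention).
tokens = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
mixed = attention(tokens, tokens, tokens)
print(mixed[0])
```

Real Transformers add learned projections, many attention heads and deep stacking, but the relate-then-mix mechanism above is the core idea the 2017 paper introduced.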

In 2020, Google developed LaMDA, or Language Model for Dialogue Applications, using Transformer-based models trained on dialogue and able to chat. 

Three years later, Google began developing its own conversational AI using its LaMDA system. The only wrinkle is that OpenAI launched ChatGPT in November 2022. 

Now to The Financial Times for the current state of things

“In early 2023, months after the launch of OpenAI’s groundbreaking ChatGPT, Google was gearing up to launch its competitor to the model that underpinned the chatbot.

…

The search company had been testing generative AI software internally for several months by then.  But as the company rallied its resources, multiple competing models emerged from different divisions within Google, vying for internal attention.”

That last sentence is worrying.  Competition in the early days of innovation can be great because it pushes people to think differently, ask tough questions, and take risks. But, eventually, one solution should emerge as superior to the others so you can focus your scarce resources on refining, launching, and scaling it. Multiple models “vying for internal attention” so close to launch indicate that something isn’t right and about to go very wrong.

“None was considered good enough to launch as the singular competitor to OpenAI’s model, known as ChatGPT-4.  The company was forced to postpone its plans while it tried to sort through the scramble of research projects.  Meanwhile, it pushed out a chatbot, Bard, that was widely viewed to be far less sophisticated than ChatGPT.”

Nothing signals the threat of disruption more than “good enough.”  If Google, like most incumbent companies, defined “good enough” as “better than the best thing out there,” then it’s no surprise that they wouldn’t want to launch anything. 

What’s weird is that instead of launching one of the “not good enough” models, they launched Bard, an obviously inferior product. Either the other models were terrible (or non-functional), or different people were making different decisions to achieve different definitions of success.  Neither is a good sign.

“When Google’s finished product, Gemini, was finally ready nearly a year later, it came with flaws in image generation that CEO Sundar Pichai called ‘completely unacceptable’ – a let-down for what was meant to be a demonstration of Google’s lead in a key new technology.”

“A let-down” is an understatement.  You don’t have to be first.  You don’t have to be the best.  But you also shouldn’t embarrass yourself.  And you definitely shouldn’t launch things that are “completely unacceptable.”

What happens next?

Disruption takes a long time and doesn’t always mean death.  BlackBerry still exists, and integrated steel mills, one of Clayton Christensen’s original examples of disruption, still operate.

AI, LLMs, and LaMDAs are still in their infancy, so it’s too early to declare a winner.  Market creation and consumer behavior change take time, and Google certainly has the knowledge and resources to stage a comeback.

Except that that knowledge may be their undoing.  Companies aren’t disrupted because their executives are idiots. They’re disrupted because their executives focus on extending existing technologies and business models to better serve their best customers with higher-profit offerings.  In fact, Professor Christensen often warned that one of the first signs of disruption was a year of record profits.

In 2021, Google posted a profit of $76.033 billion, an 88.81% increase from the previous year.

2022 and 2023 profits have both been lower.

Image credit: Unsplash


Humans Wanted for the Decade’s Biggest Innovation Challenges

GUEST POST from Greg Satell

Every era is defined by the problems it tackles. At the beginning of the 20th century, harnessing the power of internal combustion and electricity shaped society. In the 1960s there was the space race. Since the turn of this century, we’ve learned how to decode the human genome and make machines intelligent.

None of these were achieved by one person or even one organization. In the case of electricity, Faraday and Maxwell established key principles in the early and mid 1800s. Edison, Westinghouse and Tesla came up with the first applications later in that century. Scores of people made contributions for decades after that.

The challenges we face today will be fundamentally different because they won’t be solved by humans alone, but through complex human-machine interactions. That will require a new division of labor in which the highest-level skills won’t be things like the ability to retain information or manipulate numbers, but the ability to connect and collaborate with other humans.

Making New Computing Architectures Useful

Technology over the past century has been driven by a long succession of digital devices. First vacuum tubes, then transistors and finally microchips transformed electrical power into something approaching an intelligent control system for machines. That has been the key to the electronic and digital eras.

Yet today that smooth procession is coming to an end. Microchips are hitting their theoretical limits and will need to be replaced by new computing paradigms such as quantum computing and neuromorphic chips. The new technologies will not be digital, but will work fundamentally differently from what we’re used to.

They will also have fundamentally different capabilities and will be applied in very different ways. Quantum computing, for example, will be able to simulate physical systems, which may revolutionize sciences like chemistry, materials research and biology. Neuromorphic chips may be thousands of times more energy efficient than conventional chips, opening up new possibilities for edge computing and intelligent materials.

There is still a lot of work to be done to make these technologies useful. To be commercially viable, not only do important applications need to be identified, but, much as with classical computers, an entire generation of professionals will need to learn how to use them. That, in truth, may be the most significant hurdle.

Ethics For AI And Genomics

Artificial intelligence, once the stuff of science fiction, has become an everyday technology. We speak into our devices as a matter of course and expect to get back coherent answers. In the near future, we will see autonomous cars and other vehicles regularly deliver products and eventually become an integral part of our transportation system.

This opens up a significant number of ethical dilemmas. If given a choice to protect a passenger or a pedestrian, which should be encoded into the software of an autonomous car? Who gets to decide which factors are encoded into systems that make decisions about our education, whether we get hired or whether we go to jail? How will these systems be trained? We all worry about who’s educating our kids, but who’s teaching our algorithms?

Powerful genomics techniques like CRISPR open up further ethical dilemmas. What are the guidelines for editing human genes? What are the risks of a mutation inserted in one species jumping to another? Should we revive extinct species, Jurassic Park style? What are the potential consequences?

What’s striking about the moral and ethical issues of both artificial intelligence and genomics is that they have no precedent, save for science fiction. We are in totally uncharted territory. Nevertheless, it is imperative that we develop a consensus about what principles should be applied, in what contexts and for what purpose.

Closing A Perpetual Skills Gap

Education used to be something that you underwent in preparation for your “real life.” Afterwards, you put away the schoolbooks and got down to work, raised a family and never really looked back. Even today, Pew Research reports that nearly one in four adults in the US did not read a single book last year.

Today technology is making many things we learned obsolete. In fact, a study at Oxford estimated that nearly half of the jobs that exist today will be automated in the next 20 years. That doesn’t mean that there won’t be jobs for humans to do; in fact, we are in the midst of an acute labor shortage, especially in manufacturing, where automation is most pervasive.

Yet just as advanced technologies are eliminating the need for some skills, they are also increasingly able to help us learn new ones. A number of companies are using virtual reality to train workers and finding that it can boost learning efficiency by as much as 40%. IBM, with the Rensselaer Polytechnic Institute, has recently unveiled a system that helps you learn a new language like Mandarin. This video shows how it works.

Perhaps the most important challenge is a shift in mindset. We need to treat education as a lifelong need that extends long past childhood. If we only retrain workers once their industry has become obsolete and they’ve lost their jobs, then we are needlessly squandering human potential, not to mention courting an abundance of misery.

Shifting Value To Humans

The industrial revolution replaced the physical labor of humans with that of machines. The result was often mind-numbing labor in factories. Yet further automation opened up new opportunities for knowledge workers who could design ways to boost the productivity of both humans and machines.

Today, we’re seeing a similar shift from cognitive to social skills. Go into a highly automated Apple Store, to take just one example, and you don’t see a futuristic robot dystopia, but a small army of smiling attendants on hand to help you. The future of technology always seems to be more human.

In much the same way, when I talk to companies implementing advanced technologies like artificial intelligence or cloud computing, the one thing I constantly hear is that the human element is often the most important. Unless you can shift your employees to higher-level tasks, you miss out on many of the most important benefits.

What’s important to consider is that when a task is automated, it is also democratized and value shifts to another place. So, for example, e-commerce devalues the processing of transactions, but increases the value of things like customer service, expertise and resolving problems with orders, which is why we see all those smiling faces when we walk into an Apple Store.

That’s what we often forget about innovation. It’s essentially a very human endeavor and, for it to count as true progress, humans always need to be at the center.

— Article courtesy of the Digital Tonto blog and previously appeared on Inc.com
— Image credits: Pixabay
