Digital transformation is hardly new. Advances in computing create more powerful infrastructure which in turn enables more productive operating models which in turn can enable wholly new business models. From mainframes to minicomputers to PCs to the Internet to the World Wide Web to cloud computing to mobile apps to social media to generative AI, the hits just keep on coming, and every IT organization is asked both to keep the current systems running and to enable the enterprise to catch the next wave. And that’s a problem.
The dynamics of productivity involve a yin and yang exchange between systems that improve efficiency and programs that improve effectiveness. Systems, in this model, are intended to maintain state, with as little friction as possible. Programs, in this model, are intended to change state, with maximum impact within minimal time. Each has its own governance model, and the two must not be blended.
It is a rare IT organization that does not know how to maintain its own systems. That’s Job 1, and the decision rights belong to the org itself. But many IT organizations lose their way when it comes to programs—specifically, the digital transformation initiatives that are re-engineering business processes across every sector of the global economy. They do not lose their way with respect to the technology of the systems. They are missing the boat on the management of the programs.
Specifically, when the CEO champions the next big thing, and IT gets a big chunk of funding, the IT leader commits to making it all happen. This is a mistake. Digital transformation entails re-engineering one or more operating models. These models are executed by organizations outside of IT. For the transformation to occur, the people in these organizations need to change their behavior, often drastically. IT cannot—indeed, must not—commit to this outcome. Change management is the responsibility of the consuming organization, not the delivery organization. In other words, programs must be pulled. They cannot be pushed. IT in its enthusiasm may believe it can evangelize the new operating model because people will just love it. Let me assure you—they won’t. Everybody endorses change as long as other people have to be the ones to do it. No one likes to move their own cheese.
Given all that, here’s the playbook to follow:
If it is a program, the head of the operating unit that must change its behavior has to sponsor the change and pull the program in. Absent this commitment, the program simply must not be initiated.
To govern the program, the Program Management Office needs a team of four, consisting of the consuming executive, the IT executive, the IT project manager, and the consuming organization’s program manager. The program manager, not the IT manager, is responsible for change management.
The program is defined by a performance contract that uses a current state/future state contrast to establish the criteria for program completion. Until the future state is achieved, the program is not completed.
Once the future state is achieved, then the IT manager is responsible for securing the system that will maintain state going forward.
Delivering programs that do not change state is the biggest source of waste in the Productivity Zone. There is an easy fix for this. Just say No.
That’s what I think. What do you think?
Image Credit: Unsplash
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.
My publisher is having a 24-hour flash sale that will allow you to get the hardcover or the digital version (eBook) of my latest best-selling book Charting Change for 40% off!
I created the Human-Centered Change methodology to help organizations get everyone literally all on the same page for change. The 70+ visual, collaborative tools are introduced in my book Charting Change, including the powerful Change Planning Canvas™. The toolkit has been created to help organizations:
Beat the 70% failure rate for change programs
Quickly visualize, plan and execute change efforts
Quick reminder: Everyone can download ten free tools from the Human-Centered Change methodology by going to its page on this site via the link in this sentence, and book buyers can get 26 of the 70+ tools from the Change Planning Toolkit (including the Change Planning Canvas™) by contacting me with proof of purchase.
When doing customer experience work, it is better to create a range of personas based on where potential customer journeys are likely to diverge and on the behaviors and psychology behind them.
To create more impactful personas, leave out the demographics and instead choose a collection of representative photos (one per persona), name each persona, and create a descriptive statement for each persona. This is enough. And it will leave you more room (and focus) for the kinds of information that will better help you step not just into the shoes of the customer, but into their mindset as well. This includes information like:
THEIR business goals
What they need from the company
How they behave
Pain points
One or two key characteristics important for your situation (how they buy, technology they use, etc.)
What shapes their expectations of the company
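The persona structure described above can be sketched as a simple record. This is only an illustrative sketch, not a standard schema; every field name and the example persona below are hypothetical.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """One customer persona: a name, a photo, a one-line descriptive
    statement, and the mindset information from the list above.
    All field names are illustrative, not a standard CX schema."""
    name: str
    photo_url: str
    descriptive_statement: str
    business_goals: list[str] = field(default_factory=list)       # THEIR goals
    needs_from_company: list[str] = field(default_factory=list)
    behaviors: list[str] = field(default_factory=list)
    pain_points: list[str] = field(default_factory=list)
    key_characteristics: list[str] = field(default_factory=list)  # how they buy, tech they use
    expectation_shapers: list[str] = field(default_factory=list)

# A hypothetical persona for one branch of a customer journey
dana = Persona(
    name="Dana the Time-Pressed Buyer",
    photo_url="https://example.com/dana.jpg",
    descriptive_statement="Researches online, buys fast, avoids phone support.",
    business_goals=["Cut procurement time in half"],
    pain_points=["Opaque pricing", "Slow support responses"],
)
```

Note that demographics are deliberately absent: everything captured here is about what the persona thinks, feels, and does.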
Focusing more on what the customers think, feel and do will enable your customer experience improvement team to better understand and connect with the needs and motivations of the customers, their journey and what will represent meaningful improvements for them.
In 1996 the U.S. hosted the Summer Olympics. I’ll never forget reading about this story. Wade Miller, a Santa Fe, New Mexico, resident, tried to buy tickets to the volleyball match from the Summer Olympics ticket office in Atlanta. When the agent found out he lived in New Mexico, she refused to sell him a ticket, claiming she couldn’t sell tickets to anyone outside the United States. He appealed to the agent’s supervisor, who also believed that New Mexico was not part of the United States, even though New Mexico became the 47th state in 1912.
There is a happy ending to the story. Miller eventually bought tickets, and Scott Anderson, managing director of the games, promised it wouldn’t happen again. He said, “Obviously, we made a mistake, and we want to apologize to everybody out in New Mexico. The good news is that of all the mistakes we could make, this one is at least easily fixable.”
And there is a similar story that happened just a few weeks ago. A Puerto Rican family traveling from the United States to Puerto Rico was denied boarding a plane because their infant child did not have a U.S. passport. Despite the family pleading their case, the most the agent offered to do was refund the ticket or reschedule them to a later flight after they could acquire a passport for their child. The family eventually walked over to the JetBlue ticket counter, where they were told what they already knew: passports are not required to travel between the U.S. mainland and U.S. territories, such as Puerto Rico.
From these stories – and there are plenty more just like them – here are three lessons we can take away:
1. Customer Service Training: Many problems can be avoided with good customer service training. There is the soft-skill side of customer service, being friendly and empathetic. Then there is the technical side that covers anything specific to what the company does, which can include basic geography. That makes me wonder, how can someone in the airline industry not understand the requirements for different countries – or at least know where to go to get the correct information?
2. It’s Okay to Get Help: If a customer and agent are at an impasse that doesn’t look like it can be resolved, the agent needs to know when to say, “I’ll be right back,” and find someone who can help. It’s okay to get help!
3. Recovery is Key: While not part of these two stories, it’s still important to recognize that how someone apologizes and the actions they take matter in two ways. First, they show empathy and care for the customer and the situation. Second, when the problem is resolved to the customer’s complete satisfaction, it can renew the customer’s confidence in the company and bring them back next time.
There are more lessons and examples like these. I wanted to share these two for two reasons: one, they are entertaining examples that not only make you smile but also make you think. And two, they prove a point that I often make: common sense isn’t always so common!
Image Credits: Shep Hyken, Unsplash
I recently finished reading Stephen Wolfram’s very approachable introduction to ChatGPT, What is ChatGPT Doing . . . And Why Does It Work?, and I encourage you to do the same. It has sparked a number of thoughts that I want to share in this post.
First, if I have understood Wolfram correctly, what ChatGPT does can be summarized as follows:
Ingest an enormous corpus of text from every available digitized source.
While so doing, assign to each unique word a unique identifier, a number that will serve as a token to represent that word.
Within the confines of each text, record the location of every token relative to every other token.
Using just these two elements—token and location—determine for every word in the entire corpus the probability of it being adjacent to, or in the vicinity of, every other word.
Feed these probabilities into a neural network to cluster words and build a map of relationships.
Leveraging this map, given any string of words as a prompt, use the neural network to predict the next word (just like AutoCorrect).
Based on feedback from so doing, adjust the internal parameters of the neural network to improve its performance.
As performance improves, extend the reach of prediction from the next word to the next phrase, then to the next clause, the next sentence, the next paragraph, and so on, improving performance at each stage by using feedback to further adjust its internal parameters.
Based on all of the above, generate text responses to user questions and prompts that reviewers agree are appropriate and useful.
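The steps above can be sketched at toy scale: tokenize a corpus, assign each unique word an identifier, count which word follows which, and turn those counts into next-word probabilities. This bare bigram model is orders of magnitude simpler than ChatGPT's neural network, but it illustrates the token-and-location idea, and how prediction works purely on form with no semantics anywhere.

```python
from collections import Counter, defaultdict

# A toy corpus standing in for "an enormous corpus of text"
corpus = "the cat sat on the mat the cat ate the fish the dog sat on the rug"
tokens = corpus.split()

# Step 2: assign each unique word a numeric token id (first-seen order)
vocab = {word: i for i, word in enumerate(dict.fromkeys(tokens))}

# Steps 3-4: record each token's immediate neighbor and count adjacencies
following = defaultdict(Counter)
for current, nxt in zip(tokens, tokens[1:]):
    following[current][nxt] += 1

def next_word(prompt_word):
    """Predict the most probable next word, AutoCorrect-style,
    returning the word and its probability given the prompt word."""
    counts = following[prompt_word]
    total = sum(counts.values())
    word, count = counts.most_common(1)[0]
    return word, count / total

print(next_word("the"))  # prints ('cat', 0.3333333333333333)
```

"cat" follows "the" twice out of six occurrences, so it wins purely on frequency. The model has no idea what a cat is; it only knows tokens and locations, which is precisely the point of the paragraph that follows.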
OK, I concede this is a radical oversimplification, but for the purposes of this post, I do not think I am misrepresenting what is going on, at least with respect to the most important point to register about ChatGPT. That point is a simple one. ChatGPT has no idea what it is talking about.
Indeed, ChatGPT has no ideas of any kind—no knowledge or expertise—because it has no semantic information. It is all math. Math has been used to strip words of their meaning, and that meaning is not restored until a reader or user engages with the output to do so, using their own brain, not ChatGPT’s. ChatGPT is operating entirely on form and not a whit on content. By processing the entirety of its corpus, it can generate the most probable sequence of words that correlates with the input prompt it has been fed. Additionally, it can modify that sequence based on subsequent interactions with an end user. As human beings participating in that interaction, we process these interactions as a natural language conversation with an intelligent agent, but that is not what is happening at all. ChatGPT is using our prompts to initiate a mathematical exercise using tokens and locations as its sole variables.
OK, so what? I mean, if it works, isn’t that all that matters? Not really. Here are some key concerns.
First, and most importantly, ChatGPT cannot be expected to be self-governing when it comes to content. It has no knowledge of content. So, whatever guardrails one has in mind would have to be put in place either before the data gets into ChatGPT or afterward to intercept its answers prior to passing them along to users. The latter approach, however, would defeat the whole purpose of using it in the first place by undermining one of ChatGPT’s most attractive attributes—namely, its extraordinary scalability. So, if guardrails are required, they need to be put in place at the input end of the funnel, not the output end. That is, by restricting the datasets to trustworthy sources, one can ensure that the output will be trustworthy, or at least not malicious. Fortunately, this is a practical solution for a reasonably large set of use cases. To be fair, reducing the size of the input dataset diminishes the number of examples ChatGPT can draw upon, so its output is likely to be a little less polished from a rhetorical point of view. Still, for many use cases, this is a small price to pay.
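Guardrails at the input end of the funnel amount to curating the corpus before any model sees it. A minimal sketch of that idea follows; the source names, the allow-list, and the document format are all illustrative, not a real training pipeline.

```python
# Restrict the training corpus to an allow-list of trustworthy sources,
# so the output inherits the trustworthiness of its inputs.
# All source names here are hypothetical examples.
TRUSTED_SOURCES = {"product-docs", "support-kb", "regulatory-handbook"}

documents = [
    {"source": "product-docs", "text": "To reset the device, hold the button for 10 seconds."},
    {"source": "random-forum", "text": "Just hit it with a hammer lol"},
    {"source": "support-kb", "text": "Error 42 means the battery is low."},
]

def curate(docs, trusted=TRUSTED_SOURCES):
    """Keep only documents from allow-listed sources; everything else
    is excluded before training ever begins."""
    return [d for d in docs if d["source"] in trusted]

corpus = curate(documents)
print(len(corpus))  # prints 2 -- the forum post never enters the training set
```

A smaller, curated corpus means fewer examples to draw on, which is exactly the rhetorical-polish trade-off described above.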
Second, we need to stop thinking of ChatGPT as artificial intelligence. It creates the illusion of intelligence, but it has no semantic component. It is all form and no content. It is like a spider that can spin an amazing web without any knowledge of what it is doing. As a consequence, while its artifacts have authority, based on their roots in authoritative texts in the data corpus validated by an extraordinary amount of cross-checking computing, the engine itself has none. ChatGPT is a vehicle for transmitting the wisdom of crowds, but it has no wisdom itself.
Third, we need to fully appreciate why interacting with ChatGPT is so seductive. To do so, understand that because it constructs its replies based solely on formal properties, it is selecting for rhetoric, not logic. It is delivering the optimal rhetorical answer to your prompt, not the most expert one. It is the one that is the most popular, not the one that is the most profound. In short, it has a great bedside manner, and that is why we feel so comfortable engaging with it.
Now, given all of the above, it is clear that for any form of user support services, ChatGPT is nothing less than a godsend, especially where people need help learning how to do something. It is the most patient of teachers, and it is incredibly well-informed. As such, it can revolutionize technical support, patient care, claims processing, social services, language learning, and a host of other disciplines where users are engaging with a technical corpus of information or a system of regulated procedures. In all such domains, enterprises should pursue its deployment as fast as possible.
Conversely, wherever ambiguity is paramount, wherever judgment is required, or wherever moral values are at stake, one must not expect ChatGPT to be the final arbiter. That is simply not what it is designed to do. It can be an input, but it cannot be trusted to be the final output.
That’s what I think. What do you think?
Image Credit: Pixabay
Innovation is something different that creates value. Sometimes it’s big, new to the world, world-changing things. Sometimes it’s a slight tweak to make things easier, faster, cheaper or better.
Sometimes, it’s both.
It’s no secret that the military and NASA are birthplaces of incredible inventions (something new) and innovations (something different that creates value). Most people know that Velcro, nylon, and powdered drinks (Tang!) are popularly credited to NASA, and that the Jeep, GPS, and the internet come to us from the military.
But did you know that these 10 everyday innovations have their origin in the military?
1. Duct Tape
Invented in 1942 to seal ammo boxes with something that could resist water and dirt while also being fast and easy to remove, so soldiers could quickly access ammunition when they needed it. Originally it was made by applying a rubber-based adhesive to duck cloth, a plain and tightly woven cotton fabric, and it has evolved over the years to be used for everything from repairing equipment on the moon to making purses.
2. Synthetic Rubber Tires
Speaking of rubber: prior to WWII, most rubber was harvested from trees in South America and shipped to southern Asia, where the majority of rubber products were produced. When the Axis powers cut off access to Asia, the US military turned to Firestone, Goodyear, and Standard Oil to create a replacement substance. The recipe they created is still used today.
3. Silly Putty
Image Credit: thestrong.org
Like most inventions, there were a lot of failed experiments before the right synthetic rubber recipe was found. Silly Putty is the result of one of those experiments. A scientist at GE developed the strange substance but quickly shelved it after it became clear that it had no useful military application. Years later, GE execs started showing off the novelty item at cocktail parties; an advertising exec in attendance saw its commercial potential, bought the manufacturing rights, packaged it into eggs, and sold it as a toy. 350 million eggs later, we’re still playing with it.
4. Superglue
The result of another failed experiment, Superglue came onto the market in 1958 and has stuck around ever since (sorry, that pun was intended). Military scientists were testing materials to use as clear plastic rifle sights and created an incredibly durable but impossibly sticky substance called cyanoacrylate. Nine years later it was being sold commercially as Superglue and eventually did make its way into military use during the Vietnam War as a way to immediately stop bleeding from wounds.
5. Feminine Hygiene Pads
Image Credit: Museum of American History
Before Superglue was used to stop bleeding, bandages woven with cellulose were used on the battlefields and hospitals. Seeing how effective the bandages were at holding blood and the convenience of having so many on hand, US and British WW1 nurses began using them as sanitary napkins and bandage makers adapted and expanded their post-War product lines to accommodate.
6. Undershirts
Image Credit: Foto-ianniello/Getty Images
While people have been wearing undergarments for centuries, the undershirt as we know it — a t-shaped, cotton, crewneck — didn’t come into being until the early twentieth century. Manufactured and sold by the Cooper Underwear Co., it caught the Navy’s eye as a more convenient and practical option than the current button-up shirts. In 1905, it became part of the official Navy uniform and the origin of the term “crewneck.”
7. Aerosol Bug Spray
Image Credit: National WWII Museum
Soldiers fighting in the Pacific theater of WWII had a lot to worry about, so they were eager to cross mosquitos and malaria off that list. In response, the Department of Defense teamed up with the Department of Agriculture to find a way to deliver insecticide as a fine mist. The first aerosol “bug bomb” was patented in 1941 and, thanks to the development of a cheaper plastic aerosol valve, became commercially available to civilians in 1949.
8. Canned Food
Image Credit: Pacific Paratrooper — WordPress.com
While it’s not surprising that canned foods were originally created for the military, it may surprise you to learn that it was Napoleon’s armies that first used the concept. In response to the French government’s offer of a large cash reward for anyone who could find a way to preserve large quantities of food, an inventor discovered that food cooked inside a jar wouldn’t spoil unless the seal leaked or the container was broken. But glass jars are heavy and fragile, so innovation continued until metal cans replaced them.
9. Microwave
RadaRange on the Nuclear Ship NS Savannah
This is another one that you probably would have guessed has its origins in the military but may be surprised by its actual origin story. The term “microwave” refers to an adaptation of radar technology that creates electromagnetic waves on a tiny scale and passes those micro-waves through food, vibrating its water molecules and heating it quickly. The original microwaves made their debut in 1946 on ships, but it took another 20 years for them to become small and affordable enough to be commercially viable.
10. Wristwatches
Image Credit: Hodinkee
Watches first appeared on the scene in the 15th century, but they didn’t become reliable or accurate until the late 1700s. Even so, up until the early 20th century wristwatches were primarily worn as jewelry by women, while men used pocket watches. During its military campaigns of the late 1880s, the British Army began using wristwatches as a way to synchronize maneuvers without alerting the enemy to its plans. And the rest, as they say, is history.
So, there you have it: 10 everyday innovations brought to us civilians by the military. Some, like synthetic rubber, started as intentional inventions (something new) and quickly became innovations (something different that creates value). Some, like Superglue and Silly Putty, are “failed” experiments that became innovations. And some, like undershirts and feminine hygiene products, are pure innovations (value-creating adaptations of pre-existing products to serve different users and uses).
There is a line of thinking that says that the world is built on ideas. It was an idea that launched the American Revolution and created a nation. It was an idea that led Albert Einstein to pursue relativity, Jonas Salk to create a vaccine and Steve Jobs to create the iPhone and build the most valuable company in the world.
It is because of the power of ideas that we hold them so dear. We want to protect those we believe are valuable and sometimes become jealous when others think them up first. There’s nothing so rapturous as the moment of epiphany in which an idea forms in our mind and begins to take shape.
Clearly, ideas are important, but not in the way many believe. America is what it is today, for better or worse, not just because of the principles of its founding, but because of the actions that came after it. We revere people like Einstein, Salk and Jobs not because of their ideas, but because of what they did with them. The truth is that although possibilities are infinite, ideas are limited.
The Winklevoss Affair
The muddled story of Facebook’s origin is now well known. Mark Zuckerberg met with the Winklevoss twins and another Harvard classmate to discuss building a social network together. Zuckerberg agreed, but then sandbagged his partners while he built and launched a competing site. He would later pay out a multimillion dollar settlement for his misdeeds.
Zuckerberg and the Winklevoss twins were paired in the news together again recently when Facebook announced that it’s developing a new cryptocurrency called Libra. As it happens, the Winklevoss twins have been high profile investors in Bitcoin for a while now. The irony was too delicious for many in the media to ignore. First he stole their idea for Facebook and now he’s doing the same with cryptocurrencies!
Of course this is ridiculous. Social networks like Friendster and Myspace existed before Facebook and many others came after. Most failed. In much the same way, many people today have ideas about starting cryptocurrency businesses. Most of them will fail too. The value of an initial idea is highly questionable.
Different people have similar ideas all the time. In fact, a landmark study published in 1922 identified 148 major inventions or discoveries made independently, at about the same time, by at least two different people. So the fact that both the Winklevoss twins and Zuckerberg wanted to launch a social network was meaningless.
The truth is that Zuckerberg didn’t have to pay the Winklevoss twins because he stole their idea, but because he used their trust to actively undermine their business to benefit his. His crime wasn’t creation, but destruction.
The Semmelweis Myth
In 1847, a young doctor named Ignaz Semmelweis had a major breakthrough. Working in a maternity ward, he discovered that a regime of hand washing could dramatically lower the incidence of childbed fever. Unfortunately, the medical establishment rejected his idea and the germ theory of disease didn’t take hold until decades later.
The phenomenon is now known as the Semmelweis effect, the tendency for people to reject new knowledge that contradicts established beliefs. We tend to think that a great idea will be immediately obvious to everyone, but the opposite usually happens. Ideas that have the power to change the world always arrive out of context for the simple reason that the world hasn’t changed yet.
However, the Semmelweis effect is misleading. As Sherwin Nuland explains in The Doctor’s Plague, there’s more to the story than resistance to a new idea. Semmelweis didn’t see the value in communicating his work effectively, formatting his publications clearly or even collecting data in a manner that would gain his ideas greater acceptance.
Here again, we see the limits of ideas. Like a newborn infant, they can’t survive alone. They need to be nurtured to grow. They need to make friends, interact with other ideas and mature. The tragedy of Semmelweis is not that the medical establishment did not immediately accept his idea, but that he failed to steward it in such a way that it could spread and make an impact.
Why Blockbuster Video Really Failed
One of the most popular business myths today is that of Blockbuster Video. As the story is usually told, the industry giant failed to recognize the disruptive threat that Netflix represented. The truth is that the company’s leadership not only recognized the problem, but developed a smart strategy and executed it well.
The failure, in fact, had less to do with strategy and tactics than it did with managing stakeholder networks. Blockbuster moved quickly to launch an online business, cut late fees and innovated its business model. However, resistance from franchisees, who were concerned that the changes would kill their business, and from investors and analysts, who balked at the cost of the initiatives, sent the stock price reeling.
From there things spiraled downward. The low stock price attracted the corporate raider Carl Icahn, who got control of the board. His overbearing style led to a compensation dispute with Blockbuster’s CEO, John Antioco. Frustrated, Antioco negotiated his exit and left the company in July of 2007.
His successor, Jim Keyes, was determined to reverse Antioco’s strategy: he cut investment in the subscription model, reinstated late fees and shifted focus back to the retail stores in a failed attempt to “leapfrog” the online subscription model. Three years later, in 2010, Blockbuster filed for bankruptcy.
The Fundamental Fallacy Of Ideas
One of the things that amazed me while I was researching my book Cascades was how often movements behind powerful ideas failed. The ones that succeeded weren’t those with different ideas or those of higher quality, but those that were able to align small groups, loosely connected, but united by a shared purpose.
The stories of the Winklevoss twins, Ignaz Semmelweis and Blockbuster Video are all different versions of the same fundamental fallacy, that ideas, if they are powerful enough, can stand on their own. Clearly, that’s not the case. Ideas need to be adopted and then combined with other ideas to make an impact on the world.
The truth is that ideas need ecosystems to support them and that doesn’t happen overnight. To make an idea viable in the real world it needs to continually connect outward, gaining adherents and widening its original context. That takes more than an initial epiphany. It takes the will to make the idea subservient to its purpose.
What we have to learn to accept is that what makes an idea powerful is its ability to solve problems. The ideas embedded in the American Constitution were not new at the time of the country’s founding, but gained power by their application in the real world. In much the same way, we revere Einstein’s relativity, Salk’s vaccine and Jobs’s iPhone because of their impact on the world.
As G.H. Hardy once put it, “For any serious purpose, intelligence is a very minor gift.” The same can be said about ideas. They do not and cannot stand alone, but need the actions of people to bring them to life.
Why keeping an eye on the clock matters in the world of bright ideas
Image: Dall-E via Bing
GUEST POST from John Bessant
On 29th September 1707 a fleet of 21 ships under the command of Admiral Sir Cloudesley Shovell left Gibraltar, where it had been supporting action during the long-running war with the French. Crossing the Bay of Biscay the weather grew worse and they struggled to make their home port of Plymouth in the south-west of England. On the evening of 22nd October they believed they were in safe waters but in fact were heading onto the rocks off St Agnes in the Scilly Isles — fifty miles west of where they thought they were. Four ships were lost in the disaster, including HMS Association, the flagship of the fleet; Sir Cloudesley and some 1,400 sailors went to their deaths. It was one of the worst naval disasters ever to befall the British Navy.
And an avoidable one. The problem — which urgent follow-up enquiries highlighted — was familiar. The ships were lost because the experienced seamen steering them didn’t know where they were. Navigating the rocky coastline with hidden shelves and shallows depended on accurate awareness of position — but the methods available at the time weren’t up to it. Depth soundings could help but the key missing ingredient was an accurate measurement of longitude. For which they needed a reliable timepiece on board; despite an array of clocks and pocket watches the technology wasn’t good enough to maintain an accurate sense of the time relative to the Greenwich clock on which all naval longitude is based. Time slipped away — and with it any clear sense of where they were.
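The arithmetic the navigators were missing is simple once you have a reliable clock: the Earth turns 360° in 24 hours, so every hour of difference between local time and Greenwich time corresponds to 15° of longitude. A sketch of the calculation (the example reading is illustrative):

```python
# Earth rotates 360 degrees in 24 hours -> 15 degrees of longitude per hour
DEGREES_PER_HOUR = 360 / 24

def longitude_from_time(hours_ahead_of_greenwich):
    """Longitude in degrees from the difference between local noon and
    Greenwich noon; positive = east of Greenwich, negative = west."""
    return hours_ahead_of_greenwich * DEGREES_PER_HOUR

# The Scilly Isles lie near 6.3 degrees W, so local noon arrives roughly
# 25 minutes after Greenwich noon (example value for illustration)
print(longitude_from_time(-25 / 60))  # prints -6.25 (i.e. 6.25 degrees west)

# The half-degree accuracy later demanded by the Longitude Prize translates
# into keeping a clock within two minutes of Greenwich time
print(0.5 / DEGREES_PER_HOUR * 60)  # prints 2.0 (minutes of time)
```

With no clock accurate enough to hold Greenwich time over a long voyage, that subtraction was impossible, and with it any clear sense of where the fleet was.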
Cue one of many early innovation contests — attempts at crowdsourcing a good solution to the problem as fast as possible. The British government offered a huge prize of £20,000 in 1714 (equivalent in today’s money to around £3m) to anyone who could construct a clock that would enable sailors to calculate their longitude at sea to within half a degree. It was famously won by John Harrison, a carpenter by trade who spent over twenty years working on the problem, producing four models of chronometer, each improving on the previous one. His H4 model finally achieved the accuracy and reliability required and he duly won the prize. More importantly, he — and the many others working on the problem — changed the face of seaborne navigation forever.
Image: Dall-E via Bing
We’re used to thinking about the Industrial Revolution in terms of Britain as the ‘workshop of the world’, driven by a steady stream of manufacturing technology innovations coming from men like Arkwright, Wedgwood, Boulton and Watt. But we should add the clockmakers to that list; without them the workshop of the world would not have been able to get its wares reliably to anywhere but the closest markets. Their innovation heralded the first wave of globalisation in world trade and we’re still building on that legacy.
Nor was their impact confined to seaborne navigation; with reliable clocks it became possible to standardise time itself. Prior to 1880 different cities in the UK kept different versions of time, each geared to a local standard timepiece. But the introduction of standardised time, all linked to Greenwich Mean Time, allowed for important shifts like the expansion of railways, with trains running on a clear and predictable timetable.
Time is in many ways a key part of the enabling infrastructure, a foundation on which so much innovation can be built. Like today’s internet it enables things to happen which would not have been possible before — and in similar fashion the early days spent innovating towards a reliable infrastructure represent an important but often neglected innovation history.
So time deserves credit as a macro-level innovation enabler — but it also has an impressive history at the micro-level in terms of its innovation impact. In 1911 Frederick Taylor published his book ‘The Principles of Scientific Management’, which became (in the view of the members of the Academy of Management, polled in 2001) the most influential book on management ever. His principles laid the foundations for the ways in which factories, and later many service businesses, were constructed and operated, and paved the way for Henry Ford’s mass production model. At heart the approach involved applying rigorous engineering principles to the flow and execution of activities throughout a process, and those principles still underpin much of the industrial engineering curriculum today.
Its impact was huge — for example in Ford’s Highland Park factory, where he began experimenting towards the model using Taylor’s ideas, the productivity gains were stunning. In the first assembly line, installed in 1913 for flywheel production for the Model T, the assembly time fell from 20 man-minutes to 5. By 1914 three lines were being used in the chassis department, reducing assembly time from around 12 hours to less than 2.
Key contributors to enabling this to happen were an American couple, Frank and Lillian Gilbreth. They worked on what became known as ‘time and motion study’, analysing and breaking down work processes into individual motions, and then eliminating unnecessary motions to improve efficiency. (They also followed in the above illustrious tradition of creating reliable timepieces, in their case developing the micro-chronometer, a clock that could record time to 1/2000th of a second.)
The image of stop watches and clipboards goes back to their influence — and while (like Taylor) their work often receives a negative press (think Charlie Chaplin in the film ‘Modern Times’, in which he is literally caught up in the machine and under enormous time pressure), the reality is that the Gilbreths enabled major improvements not just in productivity but in working conditions and employee satisfaction.
They were early but key figures in what later became ‘lean thinking’ — essentially reducing unnecessary waste, especially in movement. Ergonomics owes a lot to their measurement approach, which charmingly gave us a unit of measurement — the ‘therblig’ (‘Gilbreth’ roughly spelled backwards) — which they applied to analyse a set of 18 elemental motions involved in performing a task in the workplace. These elements include effective movements such as reach, move, grasp, release, load, use, assemble, and disassemble, as well as ineffective ones like hold, rest, position, search, select, plan, unavoidable delay, avoidable delay, and inspect.
Paying attention to detail, especially around the time taken to carry out a task, and then redesigning it to reduce wasteful effort, movement, queuing, temporary storage, etc. lies at the heart of another revolutionary process innovation — lean thinking. Pioneered in Japanese factories during the post-war years, lean is essentially a focus on waste elimination through the application of core principles and key tools. Amongst the ‘seven deadly wastes’ on which lean focuses is time — and not surprisingly the toolkit which emerges from that places a premium on reducing unnecessary expenditure of that precious commodity.
For example, one of the early challenges to the emerging car industry was set-up time. With giant machines capable of pressing a piece of steel into the required shape, the ability to make different models depends on how quickly those presses can be set up for a new job. In the early days resetting typically took up to a day; now (using the widely-applied techniques originally pioneered by Shigeo Shingo and captured in his excitingly titled book ‘Single Minute Exchange of Die — the SMED System’) that time is routinely counted in single minutes.
The implications of this reach far beyond the car factory. Anywhere that rapid changeover of a key resource is needed can benefit from the approach — and has done. Formula 1 pitstop teams can ‘reset’ an entire car with new tyres, fuel and many other changes within seconds. Hospital operating theatres can maximise the productive (and life-saving) time for operations by applying the principles to changeovers. And the revolution in short-haul flying which we have seen in recent decades owes a great deal to the simple performance metric of turnaround time — how fast can a plane land, empty, be cleaned, refuelled, refilled with passengers and take off again? Southwest Airlines have held the crown for years with turnaround times typically around 15 minutes.
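The leverage of turnaround time on asset utilisation is easy to see with some back-of-the-envelope arithmetic. A quick sketch (the operating-day and sector figures below are illustrative assumptions, not airline data):

```python
# Illustrative: how many short-haul flights a single aircraft can fly
# per day as a function of turnaround time. All figures are assumed
# for the sake of the sketch, not real airline numbers.

def flights_per_day(operating_hours, flight_minutes, turnaround_minutes):
    """Complete cycles (one flight plus one turnaround) that fit into
    an operating day."""
    cycle_minutes = flight_minutes + turnaround_minutes
    return int(operating_hours * 60 // cycle_minutes)

# A 14-hour operating day flying 70-minute sectors:
print(flights_per_day(14, 70, 60))  # 6 flights at a 60-minute turnaround
print(flights_per_day(14, 70, 15))  # 9 flights at a 15-minute turnaround
```

Cutting the turnaround from an hour to fifteen minutes buys the airline half as many flights again from the same aircraft and crew, which is why the metric mattered so much to the short-haul business model.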
Saving time is at one end of an innovation spectrum — it’s worth looking at because wasted time adds no value and saving it enhances productivity. But there’s another end of this spectrum, one which William McKnight discovered in his work at 3M in their early years. It’s all about spending time.
Innovation is about ideas, and sometimes coming up with the good ones — the ones which may offer a whole new angle on a problem — needs time for the innovation to incubate. He observed that giving people a sense of having a little extra time in which to play around paid off for the company. His 15% policy did just that, giving people the sense that they had time to think and explore without needing to show productivity — the opposite of the tightly-controlled time of the Gilbreths.
(In reality this didn’t cost much in the way of lost productivity, since McKnight observed that 15% of a working day is taken up with coffee and tea breaks, lunch and other downtime. Plus people don’t religiously take their 15% and then stop thinking about their innovation; most give much more of their own thinking time for free!)
Breakthroughs (of which 3M has many to be proud) come more frequently if people have time to think — which is why the approach has been successfully adopted by many other organizations. Google, for example, links many major innovations like Gmail to allowing their engineers to spend 20% of their time on their own projects.
There’s another reason why time pressure shouldn’t always be too strong in the innovation arena. By its nature innovation is uncertain — which means that we need to experiment, and things will go wrong which need time to explore and fix. But sometimes the project-level pressure is too strong — prestige, racing the competition, the need to meet performance targets — there are plenty of culprits turning the temporal screws. Think about the fateful Challenger space shuttle explosion back in 1986, which was eventually blamed on a faulty O-ring seal. But importantly — as the Rogers Commission of enquiry commented later — it wasn’t the component that was the problem but the system that put so much pressure on the engineers to push past it and press ahead.
That’s sadly not a new tale; the novelist Nevil Shute spent much of his early life working in the aircraft industry and had first-hand experience of the race to design an airship. In response to the German dominance with their Zeppelin designs in the 1920s, the British government pushed for a challenger and backed two projects: the R100, built on a shoestring by Shute’s company, and the R101, built with government resources. The latter had all the advantages of unlimited resources and budget, but that came with enormous political pressure to get the job done — and fast.
Sadly, on a test flight in 1930 the R101 ploughed into a French hillside, killing all on board. Once again the enquiry found that the engineers had been pushed to cut corners and ignore safety concerns; Shute and his fellow engineers had enormous sympathy for the difficult situation in which their R101 colleagues had found themselves. As he describes in his book ‘Slide Rule’:
‘The R101 team was working under impossible conditions; they had to design and build an airship that was larger and more complex than anything ever attempted before. They had to meet unrealistic deadlines and specifications imposed by the government. They had to cope with constant changes and revisions to their plans. They had to deal with political interference and public scrutiny… they were doomed to fail’.
It would be good to think we’ve finally learned this lesson — but the well-researched 2022 Netflix documentary ‘Downfall’, which charts the disastrous history of the Boeing 737 Max, points once again at the same kind of time pressure as being responsible for pushing too far too fast.
There are many more places where we can see time playing a role as a key innovation enabler or shaper. For example, there is the challenge to board-level patience in the face of the long slow haul towards bringing innovation impact at scale. Evidence shows it takes a long time to move from pilot success to widespread impact. As Ray Kroc (the architect behind the scaling of McDonald’s) pointed out, ‘I was an overnight success all right, but 30 years is a long, long night’.
So providing support and commitment over the long haul is going to be as important as having an innovation team with a clear vision and strategy to undertake the expedition. Not for nothing does the term ‘patient money’ first appear in the findings of the famous Project SAPPHO study back in the 1970s, which looked at factors affecting success and failure in innovation.
Or the challenge of innovation timing — we hear a lot about ‘first mover advantage’ and it would be easy to think that speed is always the key factor. But being too early is often as risky as being too late; pushing untried innovations into the market too soon can sometimes mean being cut by the bleeding edge of technology. And sometimes the innovation is so far advanced it has to wait for the wider infrastructure or for the social or political climate to catch up. Shai Agassi’s vision for making the world a better place through his electromobility solutions is a good example. The collapse of what had been one of the world’s biggest start-ups came ten years before the underlying idea (of battery-swap technology for electric vehicles) found widespread acceptance in niche markets like city taxi networks in China.
Time is a precious commodity which, used wisely, is a key part of the innovation story. So when you glance at your watch or the little clock running in the corner of your computer screen spare a thought for the innovators, thousands of them over the centuries, who solved the problem of measuring it reliably and accurately.
You can find a podcast version of this here and a video version here