How Leaders Make Employees Feel Respected

GUEST POST from David Burkus

Leadership is about relationships. And the cornerstone of just about every relationship is respect.

When employees feel respected, they are more engaged, motivated, and productive. But many managers struggle to convey respect to their team members. As a result, employees often end up feeling undervalued, disconnected, or even inferior. It is difficult to imagine people who feel that way doing their best work ever.

In this article, we will outline how to make employees feel respected through five actions leaders can take to build a respectful work environment.

Check In Often

The first action leaders can take to make employees feel respected is to check in often. By staying in contact with your team members on a regular basis, you show that you value their work and progress, which is why regular one-on-one meetings are essential for making them feel respected. During these check-ins, ask about their work progress and whether they need any resources or support. But this is also a time to check in with them on a deeper level. Show genuine interest in their personal lives and let them choose how much they want to share. They may not answer right away, as it takes time to grow comfortable with sharing personal information at work. But asking still demonstrates that you care about them as individuals and not just as employees.

By maintaining open lines of communication and regularly checking in, you create a supportive and respectful environment where employees feel heard and valued.

Ask for Input

The second action leaders can take to make employees feel respected is to ask for input. Employees who are involved in the decision-making process feel that their perspective and knowledge are respected. You don't need to follow every suggestion they offer, and you don't even need to let them make the decision. But you should absolutely seek out their input before you decide. By asking for their input, you show respect for their expertise, signal that you value their opinions, and recognize that others may have perspectives and information that you do not. Even if their input is not ultimately followed, it is crucial to explain the reasoning behind the decision. In fact, asking for input beforehand helps you explain why you made a decision employees may disagree with, while still helping them understand that their input was considered and respected.

By actively seeking input from your team members, you foster a culture of collaboration, trust, and respect.

Demonstrate Trust

The third action leaders can take to make employees feel respected is to demonstrate trust. Trust is a fundamental aspect of creating a respectful work environment. And the research on how trust develops suggests that trust isn't given or earned, it's built over time through a reciprocal process. When people feel trusted, they're more likely to respond with trustworthy behavior. And in a work context, this means leaders ought to go first by demonstrating that they trust their employees. This often takes the form of giving employees more autonomy. Set clear standards and expectations, but allow them to find the best way to meet them. By giving autonomy, you show that you trust your employees' abilities and judgment.

However, it is important to balance autonomy with accountability. While giving employees the freedom to work in their own way, ensure that they are still accountable to the team and the organization’s goals. This balance between trust and accountability creates a respectful and empowering work environment.

Referee Conflicts

The fourth action leaders can take to make employees feel respected is to referee conflicts. Conflicts within a team can be detrimental to a respectful work environment, but they can also be hugely beneficial. It just depends on the type of conflict and how it is handled. Personal conflicts need to be resolved and eliminated quickly. But task-focused conflicts can benefit the team by making ideas stronger and final decisions better. As a leader, this means refereeing task-focused conflicts to ensure they stay productive. Establish ground rules for conflicts, such as starting with positive feedback before addressing disagreements. This helps create a safe space for open and productive discussions. Additionally, teach your team members how to have productive conflicts that lead to better ideas and solutions.

By encouraging task-focused conflict and working to find productive resolutions, you foster a culture of respect and continuous improvement.

Give Fair Feedback

The final action leaders can take to make employees feel respected is to give fair feedback. Providing direct and fair feedback is essential for helping employees improve and grow. When giving feedback, focus on both the positive aspects and the areas for improvement. By acknowledging strengths and offering constructive criticism, you show that you value their efforts and are invested in their professional development. Where many leaders go wrong is in spending too much time on constructive criticism and not enough on the positive elements of an employee's performance. That's not fair. Fair feedback ensures that the conversation is proportionate to the employee's overall performance. If their work is 90 percent positive and 10 percent in need of improvement, then the conversation should be 90 percent positive. This not only helps the constructive criticism be better received, it also helps employees know their contribution is valued.

When you give fair feedback, employees not only grow faster, they also grow in their feeling of being respected.

Creating a respectful work environment requires consistent effort and commitment from leaders. By regularly checking in with team members, involving them in decision-making processes, demonstrating trust, refereeing conflicts, and giving fair feedback, you can make employees feel respected and valued. Remember, a respectful work environment leads to higher employee satisfaction, engagement, and productivity—in other words, employees who feel respected are employees able to do their best work ever.

Image credit: Pexels

Originally published on DavidBurkus.com on August 14, 2023

False Choice – Founder versus Manager

GUEST POST from Robyn Bolton

Paul Graham, cofounder of Y Combinator, was so inspired by a speech by Airbnb cofounder and CEO Brian Chesky that he wrote an essay challenging the well-intentioned advice that, to scale a business, founders must shift modes and become managers.

It went viral. 

In the essay, he argued that:

In effect there are two different ways to run a company: founder mode and manager mode. Till now most people even in Silicon Valley have implicitly assumed that scaling a startup meant switching to manager mode. But we can infer the existence of another mode from the dismay of founders who’ve tried it, and the success of their attempts to escape from it.

With curiosity and an open mind, I read on.

I finished with a deep sigh and an eye roll. 

This is why.

Manager Mode: The realm of liars and professional fakers

On the off chance that you thought Graham’s essay would be a balanced and reflective examination of management styles in different corporate contexts, his description of Manager Mode should relieve you of that thought:

The way managers are taught to run companies seems to be like modular design in the sense that you treat subtrees of the org chart as black boxes. You tell your direct reports what to do, and it’s up to them to figure out how. But you don’t get involved in the details of what they do. That would be micromanaging them, which is bad.

Hire good people and give them room to do their jobs. Sounds great when it’s described that way, doesn’t it? Except in practice, judging from the report of founder after founder, what this often turns out to mean is: hire professional fakers and let them drive the company into the ground.

Later, he writes about how founders are gaslit into adopting Manager Mode from every angle, including by “VCs who haven’t been founders themselves don’t know how founders should run companies, and C-level execs, as a class, include some of the most skillful liars in the world.”

Founder Mode: A meritocracy of lifelong learners

For Graham, Founder Mode boils down to two things:

  1. Sweating the details
  2. Engaging with employees throughout the organization beyond just direct reports.  He cites Steve Jobs’ practice of holding “an annual retreat for what he considered the 100 most important people at Apple, and these were not the 100 people highest on the org chart.”

To his credit, Graham acknowledges that getting involved in the details is micromanaging, “which is bad,” and that delegation is required because “founders can’t keep running a 2000 person company the way they ran it when it had 20.” A week later, he acknowledged that female founders “don’t have permission to run their companies in Founder Mode the same way men can.”

Yet he persists in believing that Founder Mode, not Manager Mode, is critical to success:

“Look at what founders have achieved already, and yet they’ve achieved this against a headwind of bad advice. Imagine what they’ll do once we can tell them how to run their companies like Steve Jobs instead of John Sculley.”

Leader Mode: Manager Mode + Founder Mode

The essay is interesting, but I have real issues with two of his key points:

  • Professional managers are disconnected from the people and businesses they manage, and as a result, their practices and behaviors are inconsistent with startup success.
  • Founders should ignore conventional wisdom and micromanage to their heart’s content.

Most “professional managers” I’ve met are deeply connected to the people they manage, committed to the businesses they operate, and act with integrity and authenticity. They are a far cry from the “professional fakers” and “skillful liars” Graham describes.

Most founders I’ve met should not be allowed near the details once they have a team in place. Their meddling, need for control, and soul-crushing FOMO (Fear of Missing Out) lead to chaos, burnout, and failure.

The truth is, it’s contextual.  The leaders I know switch between Founder and Manager mode based on the context.  They work with the passion of founders, trust with the confidence of managers, and are smart and humble enough to accept feedback when they go too far in one direction or the other.

Being both manager and founder isn't just the essence of being a leader. It's the essence of being a successful corporate innovator. You are a founder, investing in, advocating for, and sweating the details of ambiguous and risky work. And you are a manager, navigating the economic, operational, and political minefields that govern the core business and fund your paycheck and your team.

Image credit: Pexels

Innovation is Combination

Silicon Valley’s Innovator’s Dilemma – The Atom, the Bit and the Gene

GUEST POST from Greg Satell

Over the past several decades, innovation has become largely synonymous with digital technology. When the topic of innovation comes up, somebody points to a company like Apple, Google or Meta rather than, say, a car company, a hotel or a restaurant. Management gurus wax poetic about the "Silicon Valley way."

Of course, that doesn’t mean that other industries haven’t been innovative. In fact, there are no shortage of excellent examples of innovation in cars, hotels, restaurants and many other things. Still, the fact remains that for most of recent memory digital technology has moved further and faster than anything else.

This has been largely due to Moore’s Law, our ability to consistently double the number of transistors we’re able to cram onto a silicon wafer. Now, however, Moore’s Law is ending and we’re entering a new era of innovation. Our future will not be written in ones and zeros, but will be determined by our ability to use information to shape the physical world.

The Atom

The concept of the atom has been around at least since the time of the ancient Greek philosopher Democritus. Yet it didn’t take on any real significance until the early 20th century. In fact, the paper Albert Einstein used for his dissertation helped to establish the existence of atoms through a statistical analysis of Brownian motion.

Yet it was the other papers from Einstein’s miracle year of 1905 that transformed the atom from an abstract concept to a transformative force, maybe even the most transformative force in the 20th century. His theory of mass-energy equivalence would usher in the atomic age, while his work on black-body radiation would give rise to quantum mechanics and ideas so radical that even he would refuse to accept them.

Ironically, despite Einstein’s reluctance, quantum theory would lead to the development of the transistor and the rise of computers. These, in turn, would usher in the digital economy, which provided an alternative to the physical economy of goods and services based on things made from atoms and molecules.

Still, the vast majority of what we buy is made up of what we live in, ride in, eat and wear. In fact, information and communication technologies only make up about 6% of GDP in advanced countries, which is what makes the recent revolution in materials science so exciting. We're beginning to exponentially improve the efficiency with which we design the materials that make up everything from solar panels to building materials.

The Bit

While the concept of the atom evolved slowly over millennia, the bit is one of the rare instances in which an idea seems to have arisen in the mind of a single person with little or no real precursor. Introduced by Claude Shannon in a paper in 1948—incidentally, the same year the transistor was invented—the bit has shaped how we see and interact with the world ever since.

The basic idea was that information isn't a function of content, but of the absence of ambiguity, and that it can be broken down to a single unit: a choice between two alternatives. Much like a coin toss, which lacks information while in the air but takes on certainty when it lands, information arises when ambiguity disappears.
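
For readers who want the math behind this idea, it corresponds to Shannon's entropy formula, which measures information in bits. The formula itself is standard information theory rather than anything quoted in this article:

```latex
% Shannon entropy: the average information, in bits, of a source
% whose outcomes occur with probabilities p_i.
H = -\sum_i p_i \log_2 p_i
% A fair coin toss has p = 1/2 for each side, so it resolves exactly
% one bit: H = -(1/2)\log_2(1/2) - (1/2)\log_2(1/2) = 1.
```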

He called this unit a "binary digit," or "bit," and much like the pound, quart, meter or liter, it has become such a basic unit of measurement that it's hard to imagine our modern world without it. Shannon's work would soon combine with Alan Turing's concept of a universal computer to create the digital computer.

Now the digital revolution is ending and we will soon be entering a heterogeneous computing environment that will include things like quantum, neuromorphic and biological computing. Still, Claude Shannon’s simple idea will remain central to how we understand how information interacts with the world it describes.

The Gene

The concept of the gene was first proposed by an obscure Austrian monk named Gregor Mendel, but in one of those strange peculiarities of history, his work went almost totally unnoticed until the turn of the 20th century. Even then, no one really knew what genes were or how they functioned. The term was, for the most part, just an abstract concept.

That changed abruptly in 1953, when James Watson and Francis Crick published their article in the scientific journal Nature. In a single stroke, the pair were able to show that genes were, in fact, made up of a molecule called DNA and that they operated through a surprisingly simple code made up of A, T, C and G.

Things really began to kick into high gear when the Human Genome Project was completed in 2003. Since then the cost to sequence a genome has been falling faster than the rate of Moore’s Law, which has unleashed a flurry of innovation. Jennifer Doudna’s discovery of CRISPR in 2012 revolutionized our ability to edit genes. More recently, mRNA technology has helped develop COVID-19 vaccines in record time.

Today, we have entered a new era of synthetic biology in which we can manipulate the genetic code of A, T, C and G almost as easily as we can the bits in the machines that Turing imagined all those years ago. Researchers are also exploring how we can use genes to create advanced materials and maybe even create better computers.

Innovation Is Combination

The similarity of the atom, the bit and the gene as elemental concepts is hard to miss and they’ve allowed us to understand our universe in a visceral, substantial way. Still, they arose in vastly different domains and have been largely applied to separate and distinct fields. In the future, however, we can expect vastly greater convergence between the three.

We’ve already seen glimpses of this. For example, as a graduate student Charlie Bennett was a teaching assistant for James Watson. Yet in between his sessions instructing undergraduates in Watson’s work on genes, he took an elective course on the theory of computing in which he learned about the work of Shannon and Turing. That led him to go work for IBM and become a pioneer in quantum computing.

In much the same way, scientists are applying powerful computers to develop new materials and design genetic sequences. Some of these new materials will be used to create more powerful computers. In the future, we can expect the concepts of the atom, the bit and the gene to combine and recombine in exciting ways that we can only begin to imagine today.

The truth is that innovation is combination, and it always has been. The past few decades, in which one technology so thoroughly dominated that it was able to function largely in isolation from other fields, were an anomaly. What we are beginning to see now is, in large part, a reversion to the mean, in which the most exciting work will be interdisciplinary.

This is Silicon Valley’s innovator’s dilemma. Nerdy young geeks will no longer be able to prosper coding blithely away in blissful isolation. It is no longer sufficient to work in bits alone. Increasingly we need to combine those bits with atoms and genes to create significant value. If you want to get a glimpse of the future, that’s where to look.

— Article courtesy of the Digital Tonto blog
— Image credits: Pixabay

The Runaway Innovation Train

GUEST POST from Pete Foley

In this blog, I return to and expand on a paradox that has concerned me for some time: are we getting too good at innovation, and is it in danger of getting out of control? That may seem like a strange question for an innovator to ask. But innovation has always been a two-edged sword. It brings huge benefits, but also commensurate risks.

Ostensibly, change is good. Because of technology, today we mostly live more comfortable lives, and enjoy better health, greater longevity, and increased leisure and abundance compared to our ancestors.

Exponential Innovation Growth: The pace of innovation is accelerating. It may not exactly mirror Moore's Law, and of course, innovation is much harder to quantify than transistors. But the general trend in innovation and change approximates exponential growth. The human Stone Age lasted about 300,000 years before ending around 3,000 BC with the advent of metalworking. The culture of the Egyptian Pharaohs lasted 30 centuries. It was certainly not without innovations, but by modern standards, things changed very slowly. My mum recently turned 98 years young, and the pace of change she has seen in her lifetime is staggering by comparison to the past: literally from horse-drawn carts delivering milk when she was a child in poor SE London to today's world of self-driving cars and the exploration of our solar system and beyond. And with AI, quantum computing, fusion, gene manipulation, manned interplanetary spaceflight, and even advanced behavior manipulation all jockeying for position in the current innovation race, it seems highly likely that those living today will see even more dramatic change than my mum has.

The Dark Side of Innovation: While accelerated innovation is probably beneficial overall, it is not without its costs. For starters, while humans are natural innovators, we are also paradoxically change averse. Our brains are configured to manage more of our daily lives around habits and familiar behaviors than new experiences. It simply takes more mental effort to manage new stuff than familiar stuff. As a result we like some change, but not too much, or we become stressed. At least some of the burgeoning mental health crisis we face today is probably attributable to the difficulty we have adapting to so much rapid change and new technology on multiple fronts.

Nefarious Innovation: And of course, new technology can be used for nefarious as well as noble purposes. We can now kill our fellow humans far more efficiently, and from far greater distances, than our ancestors dreamed of. The internet gives us unprecedented access to both information and connectivity, but it is also a source of misinformation and manipulation.

The Abundance Dichotomy: Innovation increases abundance, but it's arguable whether that actually makes us happier. It gives us more, but paradoxically brings greater inequalities in the distribution of the "wealth" it creates. Behavior science has shown us consistently that humans make far more relative than absolute judgments. Being better off than our ancestors actually doesn't do much for us. Instead we are far more interested in being better off than our peers, neighbors or the people we compare ourselves to on Instagram. And therein lies yet another challenge: social media means we now compare ourselves to far more people than past generations did, meaning that the standards we judge ourselves against are higher than ever before.

Side Effects and Unintended Consequences: Side effects and unintended consequences are perhaps the most difficult challenge we face with innovation. As the pace of innovation accelerates, so does the build-up of side effects, and problematically, these often lag our initial innovations. All too often, we only become aware of them when they have already become a significant problem. Climate change is of course the poster child for this, as a huge unanticipated consequence of the industrial revolution. The same applies to pollution. But as innovation accelerates, the unintended consequences it brings are also stacking up. The first generations of "digital natives" are facing unprecedented mental health challenges. Diseases are becoming resistant to antibiotics, while population density is leading to an increased rate of new disease emergence. Agricultural efficiency has created monocultures that are inherently more fragile than the more diverse supply chains of the past. Longevity is putting enormous pressure on healthcare.

The More We Innovate, the Less We Understand: And last, but not least, as innovation accelerates, we understand less about what we are creating. Technology becomes unfathomably complex and requires increasing specialization, which means few if any of us really understand the holistic picture. Today we are largely going full speed ahead with AI, quantum computing, genetic engineering, and more subtle, but equally perilous, experiments in behavioral and social manipulation. But we are doing so with an ever less complete understanding of the direct, let alone the unintended, consequences of these complex changes!

The Runaway Innovation Train: So should we back off and slow down? Is it time to pump the brakes? It's an odd question for an innovator, but it's likely a moot point anyway. The reality is that we probably cannot slow down, even if we want to. Innovation is largely a self-propagating chain reaction. All innovators stand on the shoulders of giants. Every generation builds on past discoveries, and this growing knowledge base inevitably leads to multiple further innovations. The connectivity and information access of the internet alone are driving today's unprecedented innovation, and AI and quantum computing will only accelerate this further. History is compelling on this point. Stone Age innovation was slow not because our ancestors lacked intelligence. To the best of our knowledge, they were neurologically the same as us. But they lacked the cumulative knowledge, and the network to access it, that we now enjoy. Even the smartest of us cannot go from inventing flint-knapping to quantum mechanics in a single generation. But, back to standing on the shoulders of giants: we can build on the cumulative knowledge assembled by those who went before us to continuously improve. And as that cumulative knowledge grows, more and more tools and resources become available, multiple insights emerge, and we create what amounts to a chain reaction of innovations. The trouble with chain reactions is that they can be very hard to control.

Simultaneous Innovation: Perhaps the most compelling support for this inevitability of innovation lies in the pervasiveness of simultaneous innovation. How does human culture exist for 50,000 years or more, and then "suddenly" two people, Darwin and Wallace, come up with the theory of evolution independently and simultaneously? The same question applies to calculus (Newton and Leibniz), or to the precarious proliferation of nuclear weapons and other assorted weapons of mass destruction. It's not coincidence; it simply reflects that once all of the pieces of a puzzle are in place, somebody, and more likely multiple people, will inevitably make the connections and see the next step in the innovation chain.

But as innovation expands like a conquering army on multiple fronts, more and more puzzle pieces become available, and more puzzles are solved. Unfortunately, the associated side effects and unanticipated consequences also build up, and my concern is that they can potentially overwhelm us. This is compounded because often, as in the case of climate change, dealing with side effects can be more demanding than the original innovation. And because they can be slow to emerge, they are often deeply rooted before we become aware of them. As we look forward, just taking AI as an example, we can already somewhat anticipate some worrying possibilities. But what about the surprises analogous to climate change that we haven't even thought of yet? I find it a sobering thought that we are attempting to create consciousness when, despite the efforts of numerous Nobel laureates over decades, we still have no idea what consciousness is. It's called the "hard problem" for good reason.

Stop the World, I Want to Get Off: So why not slow down? There are precedents, in the form of nuclear arms treaties and a variety of ethically based constraints on scientific exploration. But regulations require everybody to agree and comply. Very big, expensive and expansive innovations are relatively easy to police. North Korea and Iran notwithstanding, there are fortunately not too many countries building nuclear capability, at least not yet. But a lot of emerging technology has the potential to require far less physical and financial infrastructure. Cybercrime, gene manipulation, crypto and many others can be carried out with smaller, more distributed resources, which are far more difficult to police. Even AI, which takes considerable resources to initially create, opens numerous doors for misuse that require far less resource.

The Atomic Weapons Conundrum: The challenge of getting bad actors to agree on regulation and constraint is painfully illustrated by the atomic bomb. The discovery of fission by Strassmann and Hahn in the late 1930s made the bomb inevitable. This set the stage for a race between the Allies and Nazi Germany to turn theory into practice. The Nazis were bad actors, so realistically our only option was to win the race. We did, but at enormous cost. Once the cat was out of the bag, we faced a terrible choice: create nuclear weapons, and the horror they represent, or choose to legislate against them and, in so doing, cede that terrible power to the Nazis? Not an enviable choice.

Cumulative Knowledge: Today we face similar conundrums on multiple fronts. Cumulative knowledge will make it extremely difficult not to advance multiple, potentially perilous technologies. Countries that legislate against them risk either pushing the work underground or falling behind and deferring to others. The recent open letter from Meta to the EU (https://euneedsai.com/), chastising it for the potential economic impacts of its AI regulations, may have dripped with self-interest. But that didn't make it wrong. Even if the EU slows down AI development, the pieces of the puzzle are already in place. Big corporations, and less conservative countries, will still pursue the upside, and risk the downside. The cat is very much out of the bag.

Muddling Through:  The good news is that when faced with potentially perilous change in the past, we’ve muddled through.  Hopefully we will do so again.   We’ve avoided a nuclear holocaust, at least for now.  Social media has destabilized our social order, but hasn’t destroyed it, yet.  We’ve been through a pandemic, and come out of it, not unscathed, but still functioning.  We are making progress in dealing with climate change, and have made enormous strides in managing pollution.

Chain Reactions: But the innovation chain reaction, and the impact of cumulative knowledge, mean that the rate of change will, in the absence of catastrophe, inevitably continue to accelerate. And as it does, so will the side effects, nefarious uses, mistakes and unintended consequences that derive from it. The key factors that have helped us in the past are time and resources, but as waves of innovation increase in both frequency and intensity, both are likely to be increasingly squeezed.

What can, or should, we do? I certainly don't have simple answers. We're all pretty good, although by definition far from perfect, at scenario planning and troubleshooting for our individual innovations. But the size and complexity of massive waves of innovation, such as AI, are obviously far more challenging. No individual or group can realistically understand or own all of the implications. But perhaps we as an innovation community should put more collective resources against trying? We'll never anticipate everything, and we'll still get blindsided. And putting resources against "what if" scenarios is always a hard sell. But maybe we need to go into sales mode.

Can the Problem Become the Solution? Encouragingly, the same emerging technology that creates potential issues could also help us. AI and quantum computing will give us almost infinite capacity for computation and modeling. Could we collectively assign more of that emerging resource to predicting and managing its own risks?

With many emerging technologies, we are now where we were in the early 1900s with climate change. We are implementing massive, unpredictable change, and by definition have no idea what its unanticipated consequences will be. I personally think we'll deal with climate change. It's difficult to slow a leviathan that's been building for over a hundred years. But we've taken the important first steps in acknowledging the problem, and we are beginning to implement corrective action.

But big issues require big solutions. Long-term, I personally believe the most important thing is for humanity to escape the gravity well. Given the scale of our ability to create global change, interplanetary colonization is not a luxury, but an essential. Climate change is a shot across the bow with respect to how fragile our planet is, and how big our (unintended) influence can be. We will hopefully manage that, and avoid nuclear war or synthetic pandemics, for long enough to achieve it. But ultimately, humanity needs the insurance that dispersed planetary colonization will provide.

Image credits: Microsoft Copilot

Acting on Strategy and Tactics

GUEST POST from Mike Shipulski

When it comes to strategy and tactics, there are a lot of definitions, a lot of disagreement, and a whole lot of confusion. When is it strategy? When is it tactics? Which is more important? How do they inform each other?

Instead of definitions and disagreement, I want to start with agreement. Everyone agrees that both strategy AND tactics are required. If you have one without the other, it’s just not the same. It’s like with shoes and socks: Without shoes, your feet get wet; without socks, you get blisters; and when you have both, things go a lot better. Strategy and tactics work best when they’re done together.

The objective of strategy and tactics is to help everyone take the right action. Done well, everyone from the board room to the trenches knows how to take action. In that way, here are some questions to ask to help decide if your strategy and tactics are actionable.

What will we do? This gets to the heart of it. You've got to be able to make a list of things that will get done. Real things. Real actions. Don't be fooled by babble like "We will provide customer value" and "We will grow the company by X%." Providing customer value may be a good idea, but it's not actionable. And growing the company by an arbitrary percentage is aspirational, but not actionable.

Why will we do it? This one helps people know what’s powering the work and helps them judge whether their actions are in line with that forcing function. Here’s a powerful answer: Competitors now have products and services that are better than ours, and we can’t have that. This answer conveys the importance of the work and helps everyone put the right amount of energy into their actions. [Note: this question can be asked before the first one.]

Who will do it? Here's a rule: if no one is freed up to do the new work, the new work won't get done. Make a list of the teams that will stop their existing projects before they can take action on the new work. Make a list of the new positions that are in the budget to support the strategy and tactics. Make a list of the new companies you'll partner with. Make a list of all the incremental funding that has been put in the budget to help all the new people complete all these new actions. If your lists are short or you can't make any, you don't have what it takes to get the work done. You don't have a strategy and you don't have tactics. You have an unfunded mandate. Run away.

When will it be done? All actions must have completion dates. The dates will be set without consideration of the work content, so they'll be wrong. Even still, you should have them. And once you have the dates, double all the task durations and push out the dates in your mind. No need to change the schedule now (you can't change it anyway) because it will get updated when the work doesn't get done on time. Now, using your lists of incremental headcount and budget, assign the incremental resources to all the actions with completion dates. Look for actions without resources or budgets, as those are objective evidence of the unfunded-mandate character of your strategy and tactics. And for actions without completion dates, disregard them because they can never be late.

How will we know it’s done? All actions must call out a definition of success (DOS) that defines when the action has been accomplished. Without a measurable DOS, no one is sure when they’re done so they’ll keep working until you stop them. And you don’t want that. You want them to know when they’re done so they can quickly move on to the next action without oversight. If there’s no time to create a DOS, the action isn’t all that important and neither is the completion date.

When the wheels fall off, and they will, how will we update the strategy and tactics? Strategy and tactics are forward-looking and looking forward is rife with uncertainty. You’ll be wrong. What actions will you take to see if everything is going as planned? What actions will you take when progress doesn’t meet the plan? What actions will you take when you learn your tactics aren’t working and your strategy needs a band-aid?

When you do, ask the same questions all over again:

  • What will you do?
  • Who will do it?
  • When will it be done?
  • And how will you know it's done?

Image credit: Eric Minbiole

Revolutionizing Customer Service

Brian Higgins On Driving Verizon’s Customer Experience Vision

GUEST POST from Shep Hyken

If you have the best product in the world, that’s nice, but it’s not enough. You need a strong customer experience to go with it.

If you have the best service in the world, that’s nice, but it’s not enough. You need a strong product to go with it.

And one other thing. You also need customers! Without them, it doesn’t matter if you have the best product and the best service; you will eventually go out of business.

That’s why I’m excited about this week’s article. I had the opportunity to have an Amazing Business Radio interview with Brian Higgins, the chief customer experience officer at Verizon Consumer. After a career of 20-plus years working for one of the most recognized brands in the world, he has a lot to share about what it takes to get customers to say, “I’ll be back.”

Verizon is one of the most recognizable brands on the planet. A Fortune 50 company, it has more than 100,000 employees, a global presence serving more than 150 countries, more than $130 billion in annual revenue and a market cap of more than $168 billion.

Higgins made it clear that in addition to a premium network and product offerings, there needs to be a focus on customer experience with three primary objectives: addressing pain points, enhancing digital experiences and highlighting signature experiences exclusive to Verizon customers/members. They want to be easy to do business with and to use Customer Experience (CX) to capture market share and retain customers. What follows is a summary of Higgins’ most important points in our interview, followed by my commentary:

  1. Who Reports to Whom?: With Verizon’s emphasis on CX, one of the first questions I asked Higgins was about the company’s structure. Does CX report to marketing? Is CX over sales and marketing? Different companies put an emphasis on marketing, sales or experience. Often, one reports to the other. At Verizon, sales, revenue and experience work together. Higgins says, “We work in partnership with each other. You can’t build an experience if you don’t have the sales, revenue and customer care teams all on board.” The chief sales officer, chief revenue officer and chief experience officer “sit next to each other.”
  2. Membership: In our conversation, Higgins referred to Verizon's customers as customers, members and subscribers. I asked which he preferred, and he quickly responded, "I would refer to them as members." The membership is diverse, but the goal is to create a consistent and positive experience regardless of how individuals interact with the company. He sees the relationship with members as a partnership that is an integral part of their lives. Most people check their phone the moment they wake up, throughout the day, and often, it's one of the last things they check before going to bed. Verizon is a part of its members' lives, and that's an opportunity that cannot be mismanaged or abused.
  3. Employees Must Be Happy Too: More companies are recognizing that their CX must also include EX (employee experience). Employees must have the tools they need. This is an emphasis in his organization. Simplifying the employee experience with better tools and policies is the key to elevating the customer’s experience. Higgins shared the perfect description of why employee experience is paramount to the success of a business: “If employees aren’t happy and don’t feel they have the policies and tools they need that are right to engage with customers, you’re not going to get the experience right.”
  4. Focus on Little Pain Points: One of the priorities Higgins focuses on is what he refers to as “small cracks in the experience.” Seventy-five percent of the calls coming in to customer care are for small problems or questions, such as a promo code that didn’t work or an issue with a bill. His team continuously analyzes all customer journeys and works to fix them when needed. This helps to minimize recurring issues, thereby reducing customer support calls and the time employees spend fixing the same issue.
  5. The Digital Experience: Customers are starting to get comfortable with—and sometimes prefer—digital experiences. Making these experiences seamless and user-friendly increases overall customer satisfaction. More and more, they are using digital platforms to help with the “small cracks in the experience.” Employees also get an AI-infused digital experience. Higgins said Verizon uses AI to analyze customer conversations and provide real-time answers and solutions to employees, demonstrating how AI can support both employees and customers.
  6. Amplifying the Power of One Interaction: The final piece of wisdom Higgins shared was about recognizing how important a single interaction can be. Most customers don’t call very often. They may call once every three years, so each interaction needs to be treated like it’s a special moment—a unique opportunity to leave a lasting positive impression, one that leaves no doubt the customer made the right decision to do business with Verizon. Higgins believes in treating the customer like a relative visiting your home for a holiday. He closed by saying, “You’d be amazed how getting that one interaction with a customer right versus anything less than right can have a huge impact on the brand.”

Higgins’ vision for Verizon is not just about maintaining a superior network. It’s about creating an unparalleled customer experience that resonates with every interaction. As Verizon continues integrating advanced AI technologies and streamlining its processes, the focus continues to be on personalizing and enhancing every customer touchpoint, creating an experience that fosters high customer satisfaction and loyalty.

Image Credits: Pexels

This article originally appeared on Forbes.com

Push versus Pull in the Productivity Zone

GUEST POST from Geoffrey A. Moore

Digital transformation is hardly new. Advances in computing create more powerful infrastructure, which in turn enables more productive operating models, which in turn can enable wholly new business models. From mainframes to minicomputers to PCs to the Internet to the World Wide Web to cloud computing to mobile apps to social media to generative AI, the hits just keep on coming, and every IT organization is asked both to keep the current systems running and to enable the enterprise to catch the next wave. And that's a problem.

The dynamics of productivity involve a yin and yang exchange between systems that improve efficiency and programs that improve effectiveness. Systems, in this model, are intended to maintain state, with as little friction as possible. Programs, in this model, are intended to change state, with maximum impact within minimal time. Each has its own governance model, and the two must not be blended.

It is a rare IT organization that does not know how to maintain its own systems. That’s Job One, and the decision rights belong to the org itself. But many IT organizations lose their way when it comes to programs — specifically, the digital transformation initiatives that are re-engineering business processes across every sector of the global economy. They do not lose their way with respect to the technology of the systems. They are missing the boat on the management of the programs.

Specifically, when the CEO champions the next big thing, and IT gets a big chunk of funding, the IT leader commits to making it all happen. This is a mistake. Digital transformation entails re-engineering one or more operating models. These models are executed by organizations outside of IT. For the transformation to occur, the people in these organizations need to change their behavior, often drastically. IT cannot — indeed, must not — commit to this outcome. Change management is the responsibility of the consuming organization, not the delivery organization. In other words, programs must be pulled. They cannot be pushed. IT in its enthusiasm may believe it can evangelize the new operating model because people will just love it. Let me assure you — they won’t. Everybody endorses change as long as other people have to be the ones to do it. No one likes to move their own cheese.

Given all that, here’s the playbook to follow:

  1. If it is a program, the head of the operating unit that must change its behavior has to sponsor the change and pull the program in. Absent this commitment, the program simply must not be initiated.
  2. To govern the program, the Program Management Office needs a team of four, consisting of the consuming executive, the IT executive, the IT project manager, and the consuming organization’s program manager. The program manager, not the IT manager, is responsible for change management.
  3. The program is defined by a performance contract that uses a current state/future state contrast to establish the criteria for program completion. Until the future state is achieved, the program is not completed.
  4. Once the future state is achieved, then the IT manager is responsible for securing the system that will maintain state going forward.

Delivering programs that do not change state is the biggest source of waste in the Productivity Zone. There is an easy fix for this. Just say No.

That’s what I think. What do you think?

Image Credit: Unsplash

Why Modifying This One Question Changes Everything

GUEST POST from Robyn Bolton

You know that asking questions is essential.  After all, when you’re innovating, you’re doing something new, which means you’re learning, and the best way to learn is by asking questions.  You also know that asking genuine questions, rather than rhetorical or weaponized ones, is critical to building a culture of curiosity, exploration, and smart risk-taking.  But did you know that making a small change to a single question can radically change everything for your innovation strategy, process, and portfolio?

What is your hypothesis?

Before Lean Startup, there was Discovery-Driven Planning. This approach, first proposed by Columbia Business School professor Rita McGrath and Wharton School professor Ian MacMillan in their 1995 HBR article, outlines a way of planning that acknowledges and embraces assumptions (instead of pretending that they're facts) and relentlessly tests them to uncover new data that inform and update the plan.

It’s the scientific method applied to business.

How confident are you?

However, not all assumptions or hypotheses are created equal.  This was the assertion in the 2010 HBR article “Beating the Odds When You Launch a New Venture.”  Using examples from Netflix, Johnson & Johnson, and a host of other large enterprises and scrappy startups, the authors encourage innovators to ask two questions about their assumptions:

  1. How confident am I that this assumption is true?
  2. What is the (negative) impact on the idea if the assumption is false?

By asking these two questions of every assumption, the innovator sorts assumptions into three categories (sketched in code after the list below):

  1. Deal Killers: Assumptions that, if left untested, threaten the idea’s entire existence
  2. Path-dependent risks: Assumptions that impact the strategic underpinnings of the idea and cost significant time and money to resolve
  3. High ROI risks: Assumptions that can be quickly and easily tested but don’t have a significant impact on the idea’s strategy or viability
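
To make the triage concrete, here is a minimal sketch in Python. The Assumption structure, the 0-to-1 scales, and the threshold values are illustrative assumptions of mine rather than anything prescribed by the articles; the point is simply that confidence and impact are scored separately and then combined into one of the three categories.

```python
from dataclasses import dataclass

@dataclass
class Assumption:
    description: str
    confidence: float  # 0.0 (pure guess) to 1.0 (near certainty) that it holds
    impact: float      # 0.0 (trivial) to 1.0 (idea-ending) if it turns out false

def categorize(a: Assumption) -> str:
    """Sort an assumption into one of the three risk categories."""
    if a.confidence < 0.5 and a.impact >= 0.7:
        return "Deal Killer"          # existential if wrong; test first
    if a.impact >= 0.4:
        return "Path-dependent risk"  # shapes strategy; costly to resolve late
    return "High ROI risk"            # cheap to test, low strategic impact

# Example: a low-confidence, high-impact assumption surfaces as a Deal Killer.
pricing = Assumption("Customers will pay a monthly subscription", 0.3, 0.9)
print(categorize(pricing))  # -> Deal Killer
```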

However, human beings have a long and inglorious history of overconfidence. This well-established bias, in which our confidence in our judgments exceeds the objective (data-based) accuracy of those judgments, resulted in disasters like Chernobyl, the sinking of the Titanic, the losses of the Space Shuttles Challenger and Columbia, and the implosion of the Titan submersible.

Let’s not add your innovation to that list.

How much of your money are you willing to bet?

For years, I’ve worked with executives and their teams to adopt Discovery-Driven Planning and focus their earliest efforts on testing Deal Killer assumptions. I was always struck by how confident everyone was and rather dubious when they reported that they had no Deal Killer assumptions.

So, I changed the question.

Instead of asking how confident they were, I asked how much they would bet. Then I made it personal—high confidence meant you were willing to bet your annual income, medium confidence meant dinner for the team at a Michelin-starred restaurant, and low confidence meant a cup of coffee.

Suddenly, people weren’t quite so confident, and there were A LOT of Deal Killers to test.

Make it Personal

It’s easy to become complacent in companies.  You don’t get paid more if you come in under budget, and you don’t get fired if you overspend.  Your budget is a rounding error in the context of all the money available to the company.  And your signing authority is probably a rounding error on the rounding error that is your budget.  So why worry about ten grand here and a hundred grand there?

Because neither you, your team, nor your innovation efforts have the luxury of complacency.

Innovation is always under scrutiny.  People expect you to generate results with a fraction of the resources in record time.  If you don’t, you, your team, and your budget are the first to be cut.

The business of innovation is personal.  Treat it that way. 

How much of your time, money, and reputation are you willing to risk?  What do you need your team to risk in terms of their time, money, and professional aspirations?  How much time, money, and reputation are your stakeholders willing to risk?

The answers change everything.

Image credit: Pixabay

Triggering Radical Transformational Change

GUEST POST from Greg Satell

There’s an old adage that says we should never let a crisis go to waste. The point is that during a crisis there is a visceral sense of urgency and resistance often falls by the wayside. We certainly saw that during the COVID-19 pandemic. Digital technologies such as video conferencing, online grocery and tele-health have gone from fringe to mainstream in record time.

Seasoned leaders learn how to make good use of a crisis. Consider Bill Gates and his ‘Internet Tidal Wave‘ memo, which leveraged what could have been a mortal threat to Microsoft into a springboard to even greater dominance. Or how Steve Jobs used Apple’s near-death experience to reshape the ailing company into a powerhouse.

But what if we could prepare for a trigger before it happens? The truth is that indications of trouble are often clear long before the crisis arrives. Clearly, there were a number of warning signs that a pandemic was possible, if not likely. As every good leader knows, there’s never a shortage of looming threats. If we learn to plan ahead, we can make a crisis work for us.

The Plan Hatched in a Belgrade Cafe

In the fall of 1998, five young activists met in a coffee shop in Belgrade, Serbia. Although still in their twenties, they were already grizzled veterans. In 1992, they took part in student protests against the war in Bosnia. In 1996, they helped organize a series of rallies in response to Slobodan Milošević’s attempt to steal local elections.

Up to that point, their results had been decidedly mixed. The student protests were fun, but when the semester ended, everyone went home for the summer and that was the end of that. The 1996 protests were more successful, overturning the fraudulent results, but the opposition coalition, called "Zajedno," soon devolved into infighting.

So they met in the coffee shop to discuss their options for the upcoming presidential election to be held in 2000. They knew from experience that they could organize rallies effectively and get people to the polls. They also knew that when they got people to the polls and won, Milošević would use his power and position to steal the election.

That would be their trigger.

The next day, six friends joined them and they called their new organization Otpor. Things began slowly, with mostly street theatre and pranks, but within 2 years their ranks had swelled to more than 70,000. When Milošević tried to steal the election they were ready and what is now known as the Bulldozer Revolution erupted.

The Serbian strongman was forced to concede. The next year, Milošević would be arrested and sent to The Hague for his crimes against humanity. He would die in his prison cell in 2006, awaiting trial.

Opportunity From the Ashes

In 2014, in the wake of the Euromaidan protests that swept the thoroughly corrupt autocrat Viktor Yanukovych from power, Ukraine was in shambles. The country had been looted of roughly $100 billion (about the amount of its entire GDP) and invaded by Russia, and things looked bleak. Without western aid, the proud nation's very survival was in doubt.

Yet for Vitaliy Shabunin and the Anti-Corruption Action Center, it was a moment he had been waiting for. He established the organization with his friend Dasha Kaleniuk a few years earlier. Since then they, along with a small staff, had been working with international NGOs to document corruption and develop effective legislation to fight it.

With Ukraine’s history of endemic graft, which had greatly worsened under Yanukovych, progress had been negligible. Yet now, with the IMF and other international institutions demanding reform, Shabunin and Kaleniuk were instantly in demand to advise the government on instituting a comprehensive anti-corruption program, which passed in record time.

Yet they didn’t stop there either. “Our long-term strategy is to create a situation in which it will be impossible not to do anti-corruption reforms,” Shabunin would later tell me. “We are working to ensure that these reforms will be done, either by these politicians or by another, because they will lose their office if they don’t do these reforms.”

Vitaliy, Dasha and the Anti-Corruption Action Center continue to prepare for future triggers.

The Genius of Xerox PARC

One story that Silicon Valley folks love to tell involves Steve Jobs and Xerox. After the copier giant made an investment in Apple, which was then a fledgling company, it gave Jobs access to its Palo Alto Research Center (PARC). He then used the technology he saw there to create the Macintosh. Jobs built an empire based on Xerox’s oversight.

Yet the story misses the point. By the late 1960s, Xerox CEO Peter McColough knew that the copier business, while still incredibly profitable, was bound to be disrupted eventually. At the same time, it was becoming clear that computer technology was advancing quickly and would, someday, revolutionize how we worked. PARC was created to prepare for that trigger.

The number of groundbreaking technologies created at PARC is astounding. The graphical user interface, networked computing, object-oriented programming, the list goes on. Virtually everything that we came to know as “personal computing” had its roots in the work done at PARC in the 1970s.

Most of all, PARC saved Xerox. The laser printer invented there would bring in billions and, eventually, largely replace the copier business. Some technologies were spun off into new companies, such as Adobe and 3Com, with an equity stake going to Xerox. And, of course, the company even made a tidy profit off the Macintosh, thanks to the equity stake it received from the investment that gave Jobs access to the technology in the first place.

Transforming an Obstacle Into a Design Constraint

The hardest thing about change is that, typically, most people don’t want it. If they did, it would already have been accepted as the normal state of affairs. That can make transformation a lonely business. The status quo has inertia on its side and never yields its power gracefully. The path for an aspiring changemaker can be heartbreaking and soul-crushing.

Many would see the near-certainty that Milošević would try to steal the election as an excuse to do nothing. Most people would look at the almost impossibly corrupt Yanukovych regime and see the idea of devoting your life to anti-corruption reforms as quixotic folly. It is extremely rare for a CEO whose firm dominates an industry to ask, “What comes after?”

Yet anything can happen and often does. Circumstances conspire. Events converge. Round-hole businesses meet their square-peg world. We can’t predict exactly when or where or how or what will happen, but we know that everybody and everything gets disrupted eventually. It’s all just a matter of time.

When that happens, resistance to change temporarily abates. So there’s lots to do and no time to waste. We need to empower our allies, as well as listen to our adversaries. We need to build out a network to connect with others who are sympathetic to our cause. Transformational change is always driven by small groups, loosely connected, but united by a common purpose.

Most of all, we need to prepare. A trigger always comes and, when it does, it brings great opportunity with it.

— Article courtesy of the Digital Tonto blog
— Image credits: Unsplash

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Innovation or Not – The Microdosing Revolution

GUEST POST from Art Inteligencia

In recent years, the concept of microdosing has moved from the fringes of alternative therapy into the mainstream as a potential tool for enhancing mental performance and wellness. But is microdosing truly an innovation, or is it a passing trend destined for the annals of speculative practices? Through examining its revolutionary potential and analyzing its impact in real-world scenarios, we can better understand the role microdosing plays in our continuous pursuit of human-centered innovation.

Understanding Microdosing

Microdosing typically involves taking sub-perceptual doses of psychedelics, like LSD or psilocybin, approximately one-tenth of a recreational dose, to experience the potential therapeutic benefits without hallucinogenic effects. Advocates claim it can boost creativity, alleviate anxiety, and improve focus, leading to its rising popularity among entrepreneurs, artists, and the tech-savvy.

Case Study 1: Microdosing in Silicon Valley

In the competitive landscape of Silicon Valley, professionals are constantly seeking a competitive edge to enhance productivity and creativity. The tech hub has notably become a breeding ground for experimentation with microdosing. Tech workers claim the practice helps them to sustain high levels of innovation and problem-solving abilities in an environment where mental agility is highly prized.

For instance, a significant number of software developers and startup founders have reported that microdosing improved their cognitive function and reduced stress, leading to better workplace performance and job satisfaction. Some companies have begun embracing wellness practices that subtly endorse microdosing as part of a broader strategy to cultivate employee well-being and foster an innovative work culture.

Case Study 2: Microdosing in Mental Health Treatment

Beyond corporate environments, microdosing has gained attention as a potentially revolutionary approach in mental health treatment. Psychedelic-assisted therapy research has opened up dialogue about microdosing’s efficacy as a treatment for mood disorders and PTSD. Leading institutions are exploring the controlled use of microdoses as an adjunct to traditional therapies.

A pilot study conducted at a renowned university evaluated the impact of psilocybin microdosing on patients with treatment-resistant depression. Preliminary findings suggest a marked improvement in mood stabilization and cognitive flexibility among participants, renewing hope for alternative approaches in mental health treatment. This study has prompted further research and dialogue within the medical community, transforming discussions around treatment paradigms.

Case Study 3: Brez Beverages – Microdosing in the Consumer Market

Brez Beverages, a pioneering player in the beverage industry, has embraced the microdosing revolution by developing a line of drinks infused with adaptogenic and nootropic compounds. Their products aim to provide consumers with the benefits of microdosing in a more accessible and socially acceptable format.

The innovative approach of Brez Beverages lies in their ability to tap into the growing desire for wellness-centric consumer products. By integrating microdosed elements into beverages, they offer a unique alternative for individuals seeking mental clarity and stress reduction without committing to psychedelic substances. Brez Beverages represents a shift in how microdosing concepts can be commercialized and introduced to mainstream consumers.

Market feedback indicates a burgeoning interest among health-conscious customers who are drawn to the idea of enhancing their daily lives with subtle botanical blends, thus carving a new niche in the health and wellness sector. Brez continues to capitalize on the demand for unconventional health solutions, reflecting both the challenge and potential of integrating microdosing into consumer products.

The Verdict: Innovation or Not?

Whether microdosing is labeled as an innovation largely depends on one’s perspective. On one hand, it presents a novel application of existing compounds, showcasing unconventional problem-solving in enhancing human potential, an experimental departure from typical wellness and therapeutic practices. On the other hand, its lack of universal acceptance and scientific consensus makes it a contentious example of modern self-experimentation rather than an unmistakable innovation.

In conclusion, microdosing embodies the dynamic nature of innovation—provocative yet promising. As we push the boundaries of what’s possible in the human experience, microdosing remains an emblem of the desire to enhance and evolve our capabilities. Whether it stands the test of time will depend on ongoing research, legal structures, and societal acceptance, but it undoubtedly shapes the current discourse on potential pathways for human-centered transformation.

Image credit: DrinkBrez.com, Pexels

Subscribe to Human-Centered Change & Innovation Weekly
Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.