Tag Archives: behavioral science

Change Behavior to Change Culture

GUEST POST from Mike Shipulski

There’s always lots of talk about culture and how to change it. But there is no culture dial to turn or culture lever to pull. Culture isn’t a thing in itself; it’s a sentiment generated by behavioral themes. Culture is what we use to describe our worn paths of behavior. If you want to change culture, change behavior.

At the highest level, you can make the biggest cultural change by changing how you spend your resources. Want to change culture? Say yes to projects that are different from last year’s and say no to the ones that rehash old themes. And to provide guidance on how to choose those new projects, create and formalize new ways you want to deliver new value to new customers. When you change the criteria people use to choose projects, you change the projects. And when you change the projects, people’s behaviors change. And when behavior changes, culture changes.

The other important class of resources is people. When you change who runs a project, you change what work is done. And when the new leaders prioritize different tasks, they prioritize different behavior from their teams. They ask for new work and get new behavior. And when those project leaders get to choose new people to do the work, they choose in a way that changes how the work is done. New project leaders change the high-level behaviors of the project, and the people doing the work change the day-to-day behavior within the projects.

Change how projects are chosen and culture changes. Change who runs the projects and culture changes. Change who does the project work and culture changes.

Image credits: 1 of 850+ FREE quote slides available for download at http://misterinnovation.com

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Is It Bad Behavior or Unskilled Behavior?

GUEST POST from Mike Shipulski

What if you could see everyone as doing their best?

When they are ineffective, what if you think they are using all the skills they have to the best of their abilities?

What changes when you see people as having a surplus of good intentions and a shortfall of skills?

If someone cannot recognize social cues and behave accordingly, what does that say about them?

What does it say about you if you judge them as if they recognize those social cues?

Even if their best isn’t all skillful, what if you saw them as doing their best?

When someone treats you unskillfully, maybe they never learned how to behave skillfully.

When someone yells at you, maybe yelling is the only skill they were taught.

When someone treats you unskillfully, maybe that’s the only skill they have at their disposal.

And what if you saw them as doing their best?

Unskillful behavior cannot be stopped with punishment.

Unskillful behavior changes only when new skills are learned.

New skills are learned only when they are taught.

New skills are taught only when a teacher notices a yet-to-be-developed skillset.

And a teacher only notices a yet-to-be-developed skillset when they understand that the unskillful behavior is not about them.

And when a teacher knows the unskillful behavior is not about them, the teacher can teach.

And when teachers teach, new skills develop.

And as new skills develop, behavior becomes skillful.

It’s difficult to acknowledge unskillful behavior when it’s seen as mean, selfish, uncaring, and hurtful.

It’s easier to acknowledge unskillful behavior when it’s seen as a lack of skills set on a foundation of good intentions.

When you see unskillful behavior, what if you see that behavior as someone doing their best?

Unskillful behavior cannot change unless it is called by its name.

And once called by name, skillful behavior must be clearly described within the context that makes it skillful.

If you think someone “should” know their behavior is unskillful, you won’t teach them.

And when you don’t teach them, that’s about you.

If no one teaches you to hit a baseball, you never learn the skill of hitting a baseball.

When their bat always misses the ball, would you think less of them? If you did, what does that say about you?

What if no one taught you how to crochet and you were asked to crochet a scarf? Even if you tried your best, you couldn’t do it. How could you possibly crochet a scarf without developing the skill? How would you want people to see you? Wouldn’t you like to be seen as someone with good intentions who wants to be taught how to crochet?

If you were never taught how to speak French, should I see your inability to speak French as a character defect or as a lack of skill?

We are not born with skills. We learn them.

And we cannot learn skillful behavior unless we’re taught.

When we think they “should” know better, we assume they had good teachers.

When we think their unskillful behavior is about us, that’s about us.

Rather than punish unskillful behavior, it would be more skillful to teach new skills.

Rather than use prizes and rewards to change behavior, it would be more skillful to teach new skills.

When in doubt, it’s skillful to think the better of people.

Image credit: Pexels

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Just Because We Can, Doesn’t Mean That We Should!

GUEST POST from Pete Foley

An article on innovation from the BBC caught my eye this week. https://www.bbc.com/news/science-environment-64814781. After extensive research and experimentation, a group in Spain has worked out how to farm octopus. It’s clever innovation, but also comes with some ethical questions. The solution involves forcing highly intelligent, sentient animals together in unnatural environments, and then killing them in a slow, likely highly stressful way. And that triggers something that I believe we need to always keep front and center in innovation: Just Because We Can, Doesn’t Mean That We Should!

Pandora’s Box

It’s a conundrum for many innovations. Change opens Pandora’s Box, and with new possibilities come unknowns, new questions, new risks and sometimes, new moral dilemmas. And because our modern world is so complex, interdependent, and evolves so quickly, we can rarely fully anticipate all of these consequences at conception.

Scenario Planning

In most fields we routinely try and anticipate technical challenges, and run all sorts of stress, stability and consumer tests in an effort to anticipate potential problems. We often still miss stuff, especially when it’s difficult to place prototypes into realistic situations. Phones still catch fire, Hyundais can be surprisingly easy to steal, and airbags sometimes do more harm than good. But experienced innovators, while not perfect, tend to be pretty good at catching many of the worst technical issues.

Another Innovator’s Dilemma

Octopus farming doesn’t, as far as I know, have technical issues, but it does raise serious ethical questions. And these can sometimes be hard to spot, especially if we are very focused on technical challenges. I doubt that the innovators involved in octopus farming are intrinsically bad people intent on imposing suffering on innocent animals. But innovation requires passion, focus and ownership. Love is Blind, and innovators who’ve invested themselves into a project are inevitably biased, and often struggle to objectively view the downsides of their invention.

And this of course has far broader implications than octopus farming. The moral dilemma of innovation and unintended consequences has been brought into sharp focus by recent advances in AI. In this case the stakes are much higher. Stephen Hawking and many others expressed concerns that while AI has the potential to provide incalculable benefits, it also has the potential to end the human race. While I personally don’t see ChatGPT as Armageddon, it is certainly evidence that Pandora’s Box is open, and none of us really knows how it will evolve, for better or worse.

What Are Our Solutions?

So what can we do to try and avoid doing more harm than good? Do we need an innovator’s equivalent of the Hippocratic Oath? Should we as a community commit to do no harm, and somehow hold ourselves accountable? Not a bad idea in theory, but how could we practically do that? Innovation and risk go hand in hand, and in reality we often don’t know how an innovation will operate in the real world, and often don’t fully recognize the killer application associated with a new technology. And if we were to eliminate most risk from innovation, we’d also eliminate most progress. This said, I do believe how we balance progress and risk is something we need to discuss more, especially in light of the extraordinary rate of technological innovation we are experiencing, the potential size of its impact, and the increasing challenges associated with predicting outcomes as the pace of change accelerates.

Can We Ever Go Back?

Another issue is that often the choice is not simply ‘do we do it or not’, but instead ‘who does it first’? Frequently it’s not so much our ‘brilliance’ that creates innovation. Instead, it’s simply that all the pieces have just fallen into place and are waiting for someone to see the pattern. From calculus onwards, the history of innovation is replete with examples of parallel discovery, where independent groups draw the same conclusions from emerging data at about the same time.

So parallel to the question of ‘should we do it’ is ‘can we afford not to?’ Perhaps the most dramatic example of this was the nuclear bomb. For the team working on the Manhattan Project it must have been ethically agonizing to create something that could cause so much human suffering. But context matters, and the Allies at the time were in a tight race with the Nazis to create the first nuclear bomb, the path to which was already sketched out by discoveries in physics earlier that century. The potential consequences of not succeeding were even more horrific than those of winning the race. An ethical dilemma of brutal proportions.

Today, as the pace of change accelerates, we face a raft of rapidly evolving technologies with potential for enormous good or catastrophic damage, and where Pandora’s Box is already cracked open. Of course AI is one, but there are so many others. On the technical side we have bio-engineering, gene manipulation, ecological manipulation, blockchain and even space innovation. All of these have potential to do both great good and great harm. And to add to the conundrum, even if we were to decide to shut down risky avenues of innovation, there is zero guarantee that others would not pursue them. On the contrary, bad players are more likely to pursue ethically dubious avenues of research.

Behavioral Science

And this conundrum is not limited to technical innovations. We are also making huge strides in understanding how people think and make decisions. This is superficially more subtle than AI or bio-manipulation, but as a field I’m close to, it’s also deeply concerning, and carries similar potential to do great good or cause great harm. Public opinion is one of the few tools we have to help curb misuse of technology, especially in democracies. But behavioral science gives us increasingly effective ways to influence and nudge human choices, often without people being aware they are being nudged. In parallel, technology has given us unprecedented capability to leverage that knowledge, via the internet and social media.

There has always been a potential moral dilemma associated with manipulating human behavior, especially below the threshold of consciousness. It’s been a concern since the idea of subliminal advertising emerged in the 1950s. But technical innovation has created a potentially far more influential infrastructure than the 1950s movie theater. We now spend a significant portion of our lives online, and techniques such as memes, framing, managed choice architecture and leveraging mere exposure provide the potential to manipulate opinions and emotional engagement more profoundly than ever before. And the stakes have gotten higher, with political advertising, at least in the USA, often eclipsing more traditional consumer goods marketing in sheer volume. It’s one thing to nudge someone between Coke and Pepsi, but quite another to use unconscious manipulation to drive preference in narrowly contested political races that have significant socio-political implications.

There is no doubt we can use behavioral science for good, whether it’s helping people eat better, save better for retirement, drive more carefully, or many other situations where the benefit/paternalism equation is pretty clear. But especially in socio-political contexts, where do we draw the line, and who decides where that line is? In our increasingly polarized society, without some oversight, it’s all too easy for well-intentioned and passionate people to go too far, and in the worst case flirt with propaganda, and thus potentially enable damaging or even dangerous policy.

What Can or Should We Do?

We spend a great deal of energy and money trying to find better ways to research and anticipate both the effectiveness and potential unintended consequences of new technology. But with a few exceptions, we tend to spend less time discussing the moral implications of what we do. As the pace of innovation accelerates, does the innovation community need to adopt some form of ‘do no harm’ Hippocratic Oath? Or do we need to think more about educating, training, and putting processes in place to try and anticipate the ethical downsides of technology?

Of course, we’ll never anticipate everything. We didn’t have the background knowledge to anticipate that the invention of the internal combustion engine would seriously impact the world’s climate. Instead we were mostly just relieved that projections of cities buried under horse poop would no longer come to fruition.

But other innovations brought issues we might have seen coming with a bit more scenario planning. Air bags initially increased deaths of children in automobile accidents, while Prohibition in the US increased both crime and alcoholism. Hindsight is of course very clear, but could a little more foresight have anticipated these? Perhaps my favorite example of unintended consequences is the ‘Cobra Effect’. The British in India were worried about the number of venomous cobras, and so introduced a bounty for every dead cobra. Initially successful, this ultimately led to the breeding of cobras for bounty payments. On learning this, the Brits scrapped the reward. Cobra breeders then set the now-worthless snakes free. The result was more cobras than the original starting point. It’s amusing now, but it also illustrates the often significant gap between foresight and hindsight.

I certainly don’t have the answers. But as we start to stack up world changing technologies in increasingly complex, dynamic and unpredictable contexts, and as financial rewards often favor speed over caution, do we as an innovation community need to start thinking more about societal and moral risk? And if so, how could, or should we go about it?

I’d love to hear the opinions of the innovation community!

Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to join 17,000+ leaders getting Human-Centered Change & Innovation Weekly delivered to their inbox every week.

Unlearning to Learn

Shedding Old Habits for New Possibilities

GUEST POST from Art Inteligencia

In a world characterized by exponential change — where AI capabilities evolve every quarter and market demands shift before the quarterly report is filed — learning is often cited as the key to survival. Yet, leaders consistently overlook the prerequisite for true innovative learning: Unlearning. As a human-centered change and innovation thought leader, I contend that the greatest obstacle to embracing new possibilities isn’t a lack of knowledge or resources; it’s the weight of what we already know. Our past successes, entrenched processes, and deeply held technical expertise act as cognitive anchors, preventing us from navigating uncharted waters.

Unlearning is the deliberate process of discarding obsolete information, mindsets, and behavioral routines that are no longer relevant to the current reality. It is not forgetting, but rather making room for new knowledge by consciously retiring outdated, suboptimal habits. This is a profound human and organizational challenge. We are biologically wired to favor efficiency and certainty, meaning our brains prefer to use existing cognitive pathways. For organizations, this manifests as organizational memory bias, where past triumphs dictate future strategy, causing us to learn a new tool but insist on applying it using the old, linear process. The key is shedding the old process.

The Three Strategic Imperatives of Unlearning

For organizations to transform unlearning from an abstract concept into a strategic advantage, they must focus on three core imperatives:

  1. De-Crystallizing Core Assumptions (The ‘Why’): Challenge the sacred cows—the beliefs about customers, competitors, or processes that have been true for a decade but may be failing now. This includes unlearning technical assumptions, such as the belief that data must remain siloed, which prevents modern AI integration.
  2. Creating Friction for Automation (The ‘How’): Old habits are dangerous when they become automated and unquestioned. We must introduce controlled friction points—such as mandatory cross-functional rotation or requiring new-hire perspectives in legacy project reviews—to force teams to pause, reflect, and consciously choose a new path over the default path. This is a deliberate intervention against autopilot thinking.
  3. Decoupling Identity from Expertise (The ‘Who’): The most senior and successful employees often have the most to unlearn, as their identity is intrinsically linked to their obsolete expertise. Leaders must establish psychological safety where unlearning is framed not as an admission of individual failure, but as a continuous commitment to organizational relevance.

“Your past success is your organization’s greatest vulnerability. Don’t let yesterday’s win anchor you to tomorrow’s failure.” — Braden Kelley


Case Study 1: Netflix – Unlearning the Physical Asset Model

The Challenge:

In the early 2000s, Netflix achieved remarkable success by disrupting video rental with a superior mail-order, DVD-based model. Their core organizational competency was logistics — managing physical inventory, shipping, and returns. This success became a massive cognitive anchor when high-speed internet made streaming possible. Their deeply ingrained knowledge of the physical world actively worked against their digital future.

The Unlearning Solution:

Netflix’s leadership, led by Reed Hastings, made a conscious, painful decision to unlearn their core asset. They had to shed the identity of a logistics company and embrace the identity of a technology and content company. This meant separating the DVD business and the streaming business, forcing the streaming unit to build entirely new competencies and metrics focused on digital delivery and latency, rather than physical inventory and postal service efficiency. They had to unlearn the “perfect” physical delivery process.

The Innovation Impact:

This deliberate act of self-disruption and unlearning allowed Netflix to build the foundation for its streaming dominance. By voluntarily creating friction and letting go of the habits that made them successful, they freed capital, talent, and attention to master the new competencies required for the digital era, ultimately redefining an entire industry.


Case Study 2: Haier – Unlearning the Traditional Management Hierarchy

The Challenge:

Haier, a massive Chinese appliance manufacturer, faced the global challenge of becoming truly customer-centric in a bureaucratic, centrally managed corporate structure. Their organizational muscle was built on command-and-control and mass production efficiency—a model that stifled local innovation and responsiveness.

The Unlearning Solution:

Haier’s CEO, Zhang Ruimin, initiated the RenDanHeYi model, a radical exercise in organizational unlearning. They abolished nearly all traditional middle management and restructured the company into thousands of small, autonomous business units called Microenterprises (MEs). These MEs were forced to become self-governing, find their own customers, and manage their own P&L (profit and loss) against the market. They had to unlearn the comfort of guaranteed corporate security and centralized decision-making.

The Innovation Impact:

This massive organizational unlearning forced responsiveness at the edge. By shedding the old habits of central planning and top-down control, Haier enabled its MEs to rapidly innovate and localize products (e.g., specialized washing machines for specific niche markets). The shift created an internal entrepreneurial ecosystem, proving that organizational structure itself is an outdated habit that must be unlearned to achieve true agility and customer-centricity.


Conclusion: The L&D Imperative and the Courage to Be Obsolete

Unlearning is the highest-leverage activity in a change-driven environment. It requires leaders to demonstrate courage to be obsolete — to admit that the ways that brought them success yesterday will likely be the source of their failure tomorrow.

The L&D function must pivot its focus from teaching new skills to facilitating the shedding of old, limiting beliefs and processes. This is done by actively building the three strategic imperatives—challenging core assumptions, creating friction for automated habits, and decoupling identity from expertise. Stop asking only, “What must we learn next?” and start by asking the harder, more critical question: “What must we willingly let go of first?” Only by creating empty cognitive and structural space can you truly plant the seeds of new, emerging possibilities.

Extra Extra: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: 1 of 950+ FREE quote slides available at http://misterinnovation.com

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

The Data-Driven Innovator

Using Analytics to Understand Human Behavior

GUEST POST from Art Inteligencia

In the world of change and innovation, there is a false dichotomy that has persisted for too long: the perceived conflict between human-centered design and data science. We often hear that the most profound insights come from intuition, empathy, and listening to the customer’s story. While true, that view misses a critical reality: the most powerful innovation emerges when intuition is fueled by rigorous data. As a human-centered change and innovation thought leader, I argue that the future belongs to the Data-Driven Innovator—the one who uses analytics not just to measure performance, but to deeply understand, predict, and ultimately serve complex human behavior. Data is not the enemy of empathy; it is the most sophisticated tool we have to quantify human needs and de-risk the innovation process.

The problem with relying solely on traditional methods—surveys, focus groups, and simple intuition—is that they are often limited by what people say they do, which rarely aligns with what they actually do. Behavioral data, gathered from digital footprints, transactional records, and usage patterns, provides an unbiased, unfiltered window into genuine human motivation. It tells us where customers get stuck, which features they ignore, and the specific sequence of actions that leads to delight or frustration. Innovation, therefore, must move beyond simply collecting Big Data to mastering Deep Data—the careful, ethical analysis of behavioral patterns to uncover the latent needs and unarticulated desires that lead to breakthrough products and experiences.

The Analytics-Driven Empathy Framework

To successfully fuse human-centered thinking with data rigor, innovators must adopt a framework that treats analytics as the starting point for empathy, not the endpoint for analysis:

  1. Behavioral Mapping (The ‘What’): Begin by mapping the customer journey using pure behavioral data. Which steps have the highest drop-off rate? What is the actual time between a pain point being identified and a solution being sought? This quantifies the problem space and directs attention to where human frustration is highest.
  2. Qualitative Triangulation (The ‘Why’): Once data identifies a “what” (e.g., 60% of users fail at this step), the innovator must deploy qualitative research (interviews, observation) to find the “why.” Data highlights the anomaly; human-centered methods explain the motivation, the fear, or the confusion behind it.
  3. Predictive Prototyping (The ‘How to Fix’): Use analytics to build predictive models that test new concepts. Instead of launching a full product, use A/B testing and multivariate analysis on small, targeted groups. Data allows you to quickly iterate on prototypes, measuring the direct impact on human behavior (e.g., effort reduction, time saved, emotional response captured via text analysis). A minimal sketch of steps 1 and 3 appears after this list.
  4. Ethical Guardrails (The ‘Should We?’): Data analysis carries immense responsibility. Innovators must establish clear ethical guidelines to ensure data is used to serve customers, not to manipulate them. Prioritize transparency, privacy-by-design, and actively audit algorithms to eliminate bias and ensure fairness.
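
To make the framework less abstract, here is a minimal Python sketch of steps 1 and 3. Everything in it is hypothetical: the funnel steps, user counts, and A/B conversion numbers are invented for illustration, and a real analysis would pull these values from your own event logs and lean on a proper statistics library.

```python
# A rough sketch of steps 1 and 3, using invented numbers: map funnel
# drop-off, then judge an A/B test with a two-proportion z-test.
from math import sqrt

# Step 1 - Behavioral Mapping: users remaining at each journey step
# (step names and counts are hypothetical).
funnel = [("landing", 10000), ("signup", 6200),
          ("first_task", 3100), ("repeat_use", 2600)]

for (step, n), (next_step, n_next) in zip(funnel, funnel[1:]):
    # The step with the highest drop-off is where frustration is highest.
    print(f"{step} -> {next_step}: {1 - n_next / n:.0%} drop-off")

# Step 3 - Predictive Prototyping: did prototype B move conversion
# versus control A? A simple two-proportion z-test as a first check.
def ab_z_score(conv_a, n_a, conv_b, n_b):
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p = (conv_a + conv_b) / (n_a + n_b)            # pooled conversion rate
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))   # standard error
    return (p_b - p_a) / se                        # |z| > 1.96 ~ significant at 5%

print(f"z = {ab_z_score(310, 1000, 370, 1000):.2f}")  # hypothetical counts
```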

“Empathy tells you how to talk to the customer. Data tells you when and where to listen.”


Case Study 1: Netflix – Quantifying the Appetite for Content

The Challenge:

In the crowded media landscape, the challenge for Netflix was twofold: how to reduce churn (customers leaving) and how to justify the massive, risky investment in original content. They couldn’t rely on simple focus groups for such high-stakes, long-term decisions.

The Data-Driven Innovation Solution:

Netflix became the master of deep data analysis to understand the human appetite for content. They didn’t just track viewing habits; they tracked every micro-interaction: when a user paused, rewound, what they searched for, the time of day they watched, and the precise moment they abandoned a show. This behavioral data revealed clear, quantitative unmet needs. For example, the data showed that a significant cohort of users watched British period dramas, starring a specific type of actor, and favored directors with a particular cinematic style. This insight was then used to greenlight shows like House of Cards and Orange Is the New Black, not just because they sounded good, but because the data demonstrated a latent, high-demand audience for that exact combination of themes, talent, and viewing format.

The Human-Centered Result:

By using analytics as an engine for creative decision-making, Netflix revolutionized media production. They proved that data can fuel, rather than stifle, creativity. The result was not just reduced churn and massive market dominance, but a fundamentally improved customer experience—a personalized library that feels tailor-made for each user, making them feel genuinely understood. This is innovation where the data-driven decision leads directly to human delight.


Case Study 2: Spotify – Using Behavioral Data to Define Identity

The Challenge:

For a music streaming service, the challenge is not just providing access to millions of songs, but helping users navigate that overwhelming volume and connecting them with the right song at the right emotional moment. The user’s relationship with music is deeply personal and often unarticulable—how do you quantify musical identity?

The Data-Driven Innovation Solution:

Spotify innovated by translating passive listening into actionable behavioral data. They moved beyond simple “most played” lists to create products like Discover Weekly and Wrapped. These features rely on deep analytics that track everything from the track’s tempo and key (acoustic data) to the time of day it was played, the device used, and the listener’s immediate skip rate (behavioral data). The key innovation was to use machine learning to identify the musical identity of the user not by asking them, but by observing their habits, and then to use that data to serve them content they didn’t even know they wanted. The company uses this data to quantify a person’s mood, context, and latent taste.
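
To illustrate the idea (not Spotify’s actual system), here is a deliberately simplified Python sketch of this kind of taste modeling: build a user “taste vector” from the acoustic features of tracks they finished, down-weight skipped tracks, and rank unheard candidates by cosine similarity. The track names, feature choices, and values are all invented for illustration.

```python
# A hypothetical, heavily simplified taste model: acoustic features
# weighted by listening behavior, then nearest-neighbor ranking.
import numpy as np

# Each track maps to (feature vector, skipped_early?).
# Features: [tempo (normalized), energy, acousticness] - all invented.
history = {
    "track_a": (np.array([0.60, 0.80, 0.10]), False),
    "track_b": (np.array([0.55, 0.75, 0.20]), False),
    "track_c": (np.array([0.20, 0.30, 0.90]), True),   # skipped early
}

# Behavioral weighting: a skip sharply reduces a track's influence.
weights = np.array([0.1 if skipped else 1.0 for _, skipped in history.values()])
feats = np.stack([f for f, _ in history.values()])
taste = (weights[:, None] * feats).sum(axis=0) / weights.sum()  # user taste vector

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Rank unheard candidates by similarity to the observed taste vector.
candidates = {
    "new_1": np.array([0.58, 0.78, 0.15]),
    "new_2": np.array([0.25, 0.20, 0.85]),
}
ranked = sorted(candidates, key=lambda t: cosine(taste, candidates[t]), reverse=True)
print(ranked)  # most taste-similar tracks first
```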

The Human-Centered Result:

Spotify transformed passive music consumption into an active, highly personalized journey. Products like ‘Wrapped’ don’t just give users data; they give them a narrative about themselves, which is profoundly human-centered. This innovation has led to unmatched user engagement and loyalty. It demonstrates that data analytics, when applied empathetically, can be used to reflect a user’s identity back to them, deepening their connection to the service and making the abstract concept of personal taste tangible and delightful.


Conclusion: The Future of Innovation is Quantified Empathy

The time for the intuitive innovator to stand apart from the data scientist is over. The next great wave of innovation will be led by those who understand that Deep Data is the greatest tool for Deep Empathy. Analytics does not dehumanize the innovation process; it refines it, allowing us to move from generalized guesses about human needs to precise, actionable insights. By fusing human-centered design principles with the rigor of behavioral analytics, we create a powerful feedback loop. Data points us toward the friction, empathy reveals the solution, and data again validates the fix. This is the quantified path to innovation, ensuring that we are not just building things that are technically possible, but things that people genuinely need, deeply want, and, most importantly, actually use.

The future belongs to the data-driven innovators who treat every behavioral click, every pause, and every purchase as a precious piece of the human story they are trying to tell.

Extra Extra: Futurology is not fortune telling. Futurists use a scientific approach to create their deliverables, but a methodology and tools like those in FutureHacking™ can empower anyone to engage in futurology themselves.

Image credit: Pixabay

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.

50 Cognitive Biases Reference – Free Download

by Braden Kelley

I came across this cognitive biases infographic from TitleMax and it has a lot of great information in it, but…

The problem with long, information-rich infographics like this is that they’re hard to consume on the screen in their entirety, you can’t print them in a legible way, and they’re hard to leverage in your work. The creators of this infographic did a nice job of capturing a wide range of cognitive biases, which makes this quite a useful tool for design thinking, but not in this format.

To help everyone out, I’ve taken the original infographic and reformatted it into a five page PDF for easy reading and printing on 8.5″ x 11″ letter size paper.

Click here to download the 50 Cognitive Biases PDF (8.5″x11″)

See the original infographic below (click to access the source image):

Cognitive Biases Infographic

Click here to download the 50 Cognitive Biases PDF (8.5″x11″)

Subscribe to Human-Centered Change & Innovation Weekly

Sign up here to get Human-Centered Change & Innovation Weekly delivered to your inbox every week.