J.J. Westfall

The Missing Metric in Responsible AI Implementation: Behavior

Responsible AI implementation is often framed as a matter of policy, governance, or technical guardrails. Those things matter. But in practice, many implementation efforts succeed or fail at the level of daily behavior: what people actually do, repeat, reinforce, and make visible over time.

Early in my career, I helped stand up a Customer Issue Resolution Tracker at WaMu. We built a cross-enterprise process for handling customer complaints in an organization that operated in deep silos. We created a clear, easy-to-use, thoroughly documented process. However, real uptake came only after we introduced a weekly executive leadership sync. That routine changed everything. Leaders began to see issues outside their own business lines and created new habits of shared visibility, accountability, and customer-first thinking. That lesson stayed with me: a well-designed system is not the same thing as a well-implemented change.

Christian Anibarro, who leads our Operational Effectiveness practice, has seen the same pattern across large-scale transformation efforts. Again and again, the initiatives that succeed are not only the ones with strong process design, clear standards, or effective training. They are the ones that create consistent experiences for employees. If you want consistent behavior, you need consistent practices. Rituals and routines are often what turn an aspiration into something people actually do.

We are seeing many organizations approach AI the same way they approached other enterprise changes: deploying tools, setting policies, and offering training, and then still struggling to translate those steps into broad, effective adoption. But AI raises a sharper version of the same challenge because it touches judgment, trust, risk, quality, disclosure, and accountability all at once.

That is where Key Behavior Indicators (KBIs) can play an important role. KBIs help organizations identify, measure, and track the vital few behaviors that drive responsible and effective AI use.

While research is still emerging, some of those behaviors are already becoming visible. They include:

  • Pausing to verify AI-generated content before sharing it.
  • Disclosing meaningful AI use in client or stakeholder work.
  • Escalating uncertainty or risk instead of pushing ahead quietly.
  • Experimenting within agreed guardrails.
  • Asking whether AI is solving a real problem before adopting it.
  • Sharing learnings openly with the team.
  • Protecting confidential information when prompting tools.

These insights draw on research and guidance from the OECD and NIST.

Why does this matter? Because AI implementation often breaks down in familiar ways.

  • Surface-level adoption: people use the tools, but unevenly or inconsistently.
  • Shadow AI behavior: people create workarounds, hide usage, or quietly avoid the new standards.
  • Misalignment: leaders say AI matters, but the incentives, routines, and expectations of daily work do not support responsible use.

These three breakdown patterns are our synthesis of several trends showing up in current AI adoption research.

KBIs surface that often-invisible behavioral layer.

Responsible AI implementation is not only about compliance; it’s about leadership and the systems leaders create. Leaders have to define the desired behaviors, then model them, create consistent practices, celebrate and reinforce them, and build feedback loops around them.

The organizations that use AI well will not simply be the ones with the best tools. They will be the ones that make responsible behavior part of the system itself. In the age of AI, that may be the most important implementation decision leaders make.