Bidirectional human–AI alignment conceptualizes alignment as a dynamic process of reciprocal adjustment between people and AI systems over time and across contexts of use. While this agenda often foregrounds broad societal values, we focus on a concrete value that helps realize them in practice: repairability, understood as minimizing struggle by making breakdowns detectable, correctable, and accountable across stakeholders. We argue that repair work cannot be fully specified or reliably evaluated in advance, because it is shaped by the setting (workflows, operational constraints, available evidence such as monitoring data and logs, and participants' roles and authority) and by the consequences of mistaken or prematurely taken risky actions.

We propose two complementary layers for alignment in use: configuration-time governance, which articulates AI requirements as a coherent set for stakeholder review, and setting-dependent runtime orchestration, which may operate in trace-only mode (recording repair-relevant events to support verification and learning) or in enforce mode (actively providing runtime repair support). We also propose a lightweight evaluation approach that (1) defines the setting and stakeholders, (2) extracts repair episodes from logs, and (3) evaluates detectability, correctability, and accountability from episode outcomes. We illustrate the approach in two settings: AI support for diagnosing service-robot failures from error logs, and a coffee-ordering chatbot handling misunderstandings through iterative clarification.
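As a minimal sketch of the runtime orchestration layer, the following Python code illustrates the distinction between trace-only and enforce modes. The Orchestrator class, its risk predicate, and its confirmation hook are hypothetical names of ours, not an existing API; the point is only that trace-only mode records repair-relevant events, while enforce mode additionally gates risky actions behind a setting-dependent confirmation step.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import Any, Callable
import time

class Mode(Enum):
    TRACE_ONLY = "trace_only"  # record repair-relevant events for later verification
    ENFORCE = "enforce"        # additionally gate risky actions behind confirmation

@dataclass
class Orchestrator:
    mode: Mode
    is_risky: Callable[[dict], bool]   # setting-dependent risk predicate
    confirm: Callable[[dict], bool]    # authority hook, e.g. ask a human operator
    log: list = field(default_factory=list)

    def _record(self, event: str, payload: dict) -> None:
        # Every mediation step leaves a trace, making breakdowns detectable later.
        self.log.append({"t": time.time(), "event": event, **payload})

    def execute(self, action: dict, run: Callable[[dict], Any]) -> Any:
        """Mediate one AI-proposed action according to the configured mode."""
        self._record("proposed", action)
        if self.mode is Mode.ENFORCE and self.is_risky(action):
            if not self.confirm(action):
                self._record("blocked", action)  # refusal is logged, hence accountable
                return None
            self._record("confirmed", action)
        result = run(action)
        self._record("executed", {**action, "result": repr(result)})
        return result
```

The same mediation code serves both layers: in trace-only mode it yields the evidence needed for verification and learning without constraining behavior, whereas enforce mode uses that evidence path to intervene before a risky action is executed.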

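The evaluation approach above can likewise be sketched in a few lines, assuming a hypothetical JSON-lines log schema in which breakdown and resolution event types, flagging parties, and responsible actors are recorded; the event vocabulary below is illustrative, and a real deployment would define its own.

```python
import json
from dataclasses import dataclass

# Hypothetical event vocabulary; a concrete setting defines its own schema.
BREAKDOWN = {"misunderstanding", "error_report"}
REPAIR_END = {"resolved", "abandoned"}

@dataclass
class Episode:
    events: list

    @property
    def detected(self) -> bool:
        # Detectability: some party explicitly flagged the breakdown.
        return any(e.get("flagged_by") for e in self.events)

    @property
    def corrected(self) -> bool:
        # Correctability: the episode closes with a successful resolution.
        return self.events[-1]["event"] == "resolved"

    @property
    def accountable(self) -> bool:
        # Accountability: every step in the episode names a responsible actor.
        return all("actor" in e for e in self.events)

def extract_episodes(lines):
    """Segment a JSON-lines log into repair episodes (breakdown .. resolution)."""
    episodes, current = [], None
    for line in lines:
        e = json.loads(line)
        if current is None and e["event"] in BREAKDOWN:
            current = [e]
        elif current is not None:
            current.append(e)
            if e["event"] in REPAIR_END:
                episodes.append(Episode(current))
                current = None
    return episodes

# Example: per-setting rates over all extracted episodes
# eps = extract_episodes(open("session.jsonl"))
# rate = lambda p: sum(p(ep) for ep in eps) / max(len(eps), 1)
# print(rate(lambda ep: ep.detected), rate(lambda ep: ep.corrected))
```

Scoring episodes rather than individual turns keeps the evaluation lightweight: each repair episode yields one outcome per property, and the per-setting rates can be compared across the two illustrative settings without any bespoke instrumentation beyond the log schema.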
