Healthcare AI Governance Frameworks: Why Speed Without Guardrails Is Exposure, Not Innovation
- Y. Olivia Erimsah

- Feb 25
- 9 min read
On February 23, 2026, Vantage Precision Health submitted a formal response to the U.S. Department of Health and Human Services' Request for Information on accelerating AI in clinical care. We support acceleration. But acceleration without governance infrastructure isn't progress; it's accumulated risk transferred onto the most vulnerable patients and the providers who serve them.
This post summarizes our position, explains the policy context, and argues for what we believe is the only viable path forward: governance that evolves as fast as the technology it oversees.

The HHS RFI: What Was Being Asked
In December 2025, HHS published a Request for Information asking a focused question: what actions could HHS take to accelerate AI adoption in clinical care? The RFI identified three levers (regulation, reimbursement, and research and development) and explicitly sought input from organizations building AI tools, implementing them, and facing barriers to adoption.
The comment deadline was February 23, 2026. Dozens of organizations, from the American Hospital Association to individual health systems, submitted responses. VPH submitted ours as an organization working directly at the intersection of AI adoption and healthcare governance, one that has seen, up close, what happens when technology acceleration outpaces the human and institutional systems required to deploy it responsibly.
Our position was not one of resistance. It was one of precision.
Speed Without Guardrails Is Not Innovation. It's Exposure.
The dominant narrative around healthcare AI in 2026 centers on removing barriers to adoption: streamlining regulation, reducing friction, clearing the path for faster deployment. We understand that impulse. Administrative burden consumes an estimated 30+ hours per week of clinician time that should be spent on patient care. AI tools capable of reducing that burden deserve to reach clinical practice efficiently.
But "removing barriers" and "removing guardrails" are not the same thing, and conflating them produces policy with predictable downstream consequences.
Healthcare AI deployment has already outpaced governance framework development. The FDA's recent flexibility on certain AI tool classifications has shifted responsibility for risk assessment onto individual healthcare organizations, which vary enormously in their capacity to manage that responsibility. In the absence of standardized governance infrastructure, the burden falls unevenly: large, well-resourced health systems can build internal AI governance capabilities; safety-net providers, behavioral health networks, and rural health organizations typically cannot.
This is not a hypothetical risk. It is a structural pattern with historical precedent.
We Have Seen This Before: The HITECH Lesson
The Health Information Technology for Economic and Clinical Health Act of 2009 was designed to accelerate health information technology adoption across the U.S. healthcare system. It largely succeeded for the organizations it included.
Behavioral health providers were largely excluded from HITECH's incentive structures. Aging services organizations were sidelined. The result was a technology adoption gap that persists today: physical health providers operate on sophisticated, interoperable EHR infrastructure while behavioral health providers frequently rely on legacy systems incompatible with mainstream data exchange. Care coordination between physical and behavioral health (precisely the integration that drives outcomes for patients with comorbid conditions) remains fragmented, in part because a 2009 policy decision treated behavioral health as peripheral rather than essential.
AI is now positioned to repeat this pattern at larger scale and higher velocity. If behavioral health, aging services, and safety-net providers are not explicitly included in AI governance frameworks, reimbursement pathways, and standards development, the result will not be a neutral gap; it will be an amplified inequity, embedded in algorithmic systems that will persist for years.
If we are not intentional now, AI will encode that inequity into clinical infrastructure.
Our Message to HHS: Three Non-Negotiables
VPH's response to the HHS RFI centered on three principles that we consider non-negotiable for responsible AI acceleration:
1. Evaluate AI on Outcomes, Not Downloads
The current dominant metric for AI adoption success is deployment breadth: how many organizations have purchased a tool, how many clinicians have accounts, how many sessions have been logged. These metrics measure access, not impact.
Healthcare AI must be evaluated on clinical and operational outcomes reduced diagnostic error rates, decreased documentation burden, improved care coordination efficiency, measurable patient outcomes in populations served by AI-assisted care. Reimbursement incentives and regulatory approval pathways should align with outcome demonstration, not feature deployment.
This matters especially for behavioral health AI, where the most important outcomes (therapeutic alliance quality, symptom trajectory, patient engagement in care) are harder to measure but more consequential than session counts or app downloads. Policy that rewards downloads over outcomes will systematically underinvest in the measurement infrastructure that makes AI accountability possible.
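To make the downloads-versus-outcomes distinction concrete, here is a minimal sketch contrasting the two kinds of metric. The site data, field names, and helper functions are invented for illustration; they are not real measurements or any standard reporting schema.

```python
def adoption_breadth(sites):
    """Deployment metric: how many clinicians are licensed to use the tool.
    This measures access, not impact."""
    return sum(site["licensed_clinicians"] for site in sites)

def outcome_delta(sites, measure):
    """Outcome metric: average pre/post change in a clinical measure,
    weighted by the number of patients each site actually serves."""
    total_patients = sum(site["patients"] for site in sites)
    weighted_change = sum(
        (site[f"{measure}_post"] - site[f"{measure}_pre"]) * site["patients"]
        for site in sites
    )
    return weighted_change / total_patients

# Illustrative (invented) data: the same tool tells two different stories
# depending on which metric policy rewards.
sites = [
    # Broad deployment, negligible clinical effect
    {"licensed_clinicians": 400, "patients": 1000,
     "doc_minutes_pre": 95, "doc_minutes_post": 93},
    # Narrow deployment, large reduction in documentation burden
    {"licensed_clinicians": 30, "patients": 3000,
     "doc_minutes_pre": 95, "doc_minutes_post": 70},
]

print(adoption_breadth(sites))              # 430 licensed clinicians
print(outcome_delta(sites, "doc_minutes"))  # -19.25 minutes per encounter
```

A policy that counts licenses ranks the first site higher; a policy that weights patient-level change ranks the second. The point is not this toy arithmetic but that the two metrics can be optimized independently.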
2. Protect Therapeutic Relationships, Don't Automate Them Away
AI tools designed to support clinical care must be designed around the clinical relationship, not as a substitute for it. This distinction has particular urgency in behavioral health, where therapeutic relationship quality is itself a documented predictor of treatment outcomes not merely a process variable but a clinical mechanism.
Automation that replaces therapeutic contact under the guise of expanding access doesn't expand access. It replaces a lower-volume, higher-quality resource with a higher-volume, lower-quality one and calls the substitution an improvement. Policy frameworks must establish clear standards for when AI augments clinical relationships versus when it displaces them, and require evidence that augmentation models produce equivalent or better outcomes before scaling.
This also has implications for informed consent, patient autonomy, and the right of patients to understand when AI is involved in their care. Governance frameworks must require transparent disclosure of AI involvement at the point of care, not buried in terms-of-service documentation.
3. Fund Safety-Net Providers, Don't Sideline Them
AI governance cannot be equity-neutral by design and expect equitable outcomes. Safety-net providers (Federally Qualified Health Centers, community behavioral health organizations, rural health clinics, aging services providers) serve the populations with the greatest need and operate with the fewest resources to navigate AI adoption independently.
If HHS acceleration policy focuses on reducing regulatory burden for well-resourced health systems while safety-net providers lack the infrastructure to evaluate, implement, and govern AI tools responsibly, the acceleration will produce a two-tier system: sophisticated AI-assisted care for commercially insured urban populations, and under-resourced legacy care for everyone else.
Intentional funding for safety-net providers (grants, implementation support, technical assistance) is not a peripheral equity add-on. It is a prerequisite for AI acceleration that serves the full population HHS is mandated to protect.
What Responsible Acceleration Actually Requires
Supporting AI acceleration in healthcare means building the infrastructure that makes acceleration sustainable. Four elements are foundational:
Standards and interoperability. AI tools deployed across health systems must be built on interoperable standards that enable data exchange, performance benchmarking, and comparative effectiveness research. Proprietary systems that cannot be evaluated independently or integrated with existing infrastructure create vendor lock-in and prevent the outcomes measurement that accountability requires. HHS should prioritize standardization as a prerequisite for scaled AI deployment, not an afterthought.
Reimbursement pathways that reward demonstrated value. Current reimbursement structures do not adequately account for AI-assisted clinical work. Without clear pathways for reimbursing AI-augmented care, particularly in behavioral health and aging services where reimbursement rates are already inadequate, providers cannot build the financial models required to invest in responsible AI adoption. Acceleration without reimbursement reform will concentrate AI adoption in high-margin service lines while underserved populations wait.
Independent outcomes research. AI vendors currently control most of the evidence about their own tools' effectiveness. This is not a sustainable governance model. HHS should fund independent comparative effectiveness research on AI clinical tools, with particular attention to performance variation across demographic groups, clinical contexts, and organizational resource levels. The research infrastructure required to answer questions about what works, for whom, and under what conditions does not exist at adequate scale and will not emerge from vendor-funded studies alone.
Governance that evolves at the speed of AI updates. AI systems in clinical care are not static deployments. They are updated, retrained, and fundamentally modified on timescales measured in months. Healthcare governance structures designed for annual policy reviews and quarterly committee meetings cannot detect or respond to the performance changes, drift patterns, and new failure modes introduced by continuous AI evolution. HHS must support the development of adaptive governance frameworks (continuous monitoring, threshold-triggered review, tiered update protocols) that can maintain oversight of AI systems as they evolve, not just at the moment of initial deployment.
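One of these mechanisms, threshold-triggered review, can be sketched in a few lines. This is a hypothetical illustration only: the class name, metric, baseline, window size, and drift threshold are all assumptions for the sketch, not a standard or a VPH product. The idea is to compare a rolling window of a post-deployment quality metric (say, weekly agreement with clinician judgment) against the baseline measured at deployment, and to flag a human governance review when drift exceeds a tolerance.

```python
from collections import deque

class ThresholdTriggeredReview:
    """Minimal sketch of continuous monitoring with a threshold-triggered
    governance review. Illustrative assumptions throughout."""

    def __init__(self, baseline, drift_threshold=0.05, window_size=4):
        self.baseline = baseline              # quality metric at deployment
        self.drift_threshold = drift_threshold
        self.window = deque(maxlen=window_size)

    def record(self, metric):
        """Record one monitoring observation; return True when the rolling
        mean has drifted below baseline by more than the threshold."""
        self.window.append(metric)
        if len(self.window) < self.window.maxlen:
            return False  # not enough observations yet
        rolling_mean = sum(self.window) / len(self.window)
        return (self.baseline - rolling_mean) > self.drift_threshold

monitor = ThresholdTriggeredReview(baseline=0.92)
for weekly_metric in [0.91, 0.90, 0.88, 0.84, 0.80]:
    if monitor.record(weekly_metric):
        print(f"Governance review triggered at metric {weekly_metric}")
```

The design choice worth noting is that the trigger runs on the model's monitoring cadence (here, weekly) rather than the committee's meeting cadence; the committee is convened by the data, not the calendar.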
The Governance Gap Is Not a Future Problem
Healthcare AI governance frameworks remain fragmented. A recent systematic review found that most healthcare organizations lack the governance maturity to manage AI systems at even basic levels of oversight rigor. The FDA's flexibility has shifted responsibility to organizations that are not yet equipped to carry it. The gap between where governance infrastructure currently stands and where it needs to be to manage responsible AI acceleration at scale is not a future planning problem; it is present and growing.
VPH's work exists at this intersection. The Continuous Implementation Framework™ addresses the operational dimensions of governance failure: the absence of continuous monitoring, the lack of feedback loops between users and decision-makers, the mismatch between AI evolution pace and organizational adaptation capacity. But operational frameworks alone cannot substitute for policy infrastructure. The governance gap requires both organizational capability development and federal policy action simultaneously.
What HHS does with the input it received from this RFI will shape the trajectory of healthcare AI for a decade. The organizations that submitted responses (clinical associations, health systems, advocacy organizations, consulting firms) share a common interest in getting this right. We don't all agree on the specific mechanisms. But the principle that undergirds VPH's position is one we believe most stakeholders share: AI can reduce burden, expand access, and improve quality of life. But only if governance evolves as fast as the technology.
This is a defining policy moment. The decisions made in 2026 will determine whether healthcare AI delivers on its potential equitably or whether it becomes one more chapter in the long history of health policy that left the most vulnerable behind.
Vantage Precision Health submitted formal comments to the HHS RFI on February 23, 2026. If you are a healthcare organization, AI vendor, or policy stakeholder navigating AI governance challenges, we welcome the conversation. The framework for responsible acceleration exists. The question is whether we build it together or learn from its absence after the fact.

Frequently Asked Questions: Healthcare AI Governance Frameworks
What was the HHS RFI on AI in clinical care?
In December 2025, HHS published a Request for Information asking what actions it could take to accelerate AI adoption in clinical settings, focusing on regulation, reimbursement, and research and development. The comment deadline was February 23, 2026.

Why were behavioral health providers excluded from HITECH?
The HITECH Act of 2009 provided incentive funding for EHR adoption but largely excluded behavioral health providers from eligibility. Peer-reviewed research documents that this exclusion created persistent care coordination gaps between physical and behavioral health systems that continue today.

What does "governance that evolves as fast as AI" mean in practice?
It means replacing static annual reviews with continuous monitoring systems, tiered update protocols, and threshold-triggered governance reviews that can detect and respond to AI performance changes on monthly timescales, matching the actual pace at which clinical AI systems are updated.

Why is behavioral health at particular risk in AI policy?
Therapeutic relationship quality in behavioral health is a documented predictor of treatment outcomes, not merely a process variable. AI that automates rather than augments these relationships can degrade clinical effectiveness even while appearing to expand access.

What should healthcare organizations demand from AI vendors on governance?
Organizations should require transparency on how and when AI is used in clinical processes, post-deployment performance monitoring, disclosure of training data demographics, and clear accountability pathways when AI outputs contribute to adverse outcomes.

What is VPH's position on AI acceleration in healthcare?
VPH supports AI acceleration but argues it requires four prerequisites: interoperability standards, reimbursement pathways that reward demonstrated outcomes rather than deployment, independent outcomes research, and adaptive governance frameworks that evolve alongside AI systems.
People Also Ask
Is HHS accelerating AI adoption in healthcare?
Yes. In December 2025, HHS issued a formal Request for Information asking stakeholders how it should accelerate AI adoption across regulation, reimbursement, and research channels, with a response deadline of February 23, 2026.

What are the biggest risks of accelerating healthcare AI without governance?
The primary risks include perpetuating and amplifying existing health disparities through biased training data, eroding therapeutic relationships through inappropriate automation, creating a two-tier system in which safety-net providers lack AI access, and deploying systems that drift or degrade without detection mechanisms in place.

What is the HITECH Act and why does it matter for AI policy?
The HITECH Act of 2009 accelerated EHR adoption in the U.S. but largely excluded behavioral health and aging services providers from its incentive structures. The resulting care coordination fragmentation is a documented cautionary precedent for how exclusionary AI policy could repeat the same inequity at larger scale.

What do healthcare organizations need before deploying AI?
Before deployment, organizations should conduct AI readiness assessments, establish governance structures capable of ongoing monitoring, secure training resources for continuous competency development, and verify that vendor contracts include post-deployment performance accountability and transparency requirements.

What is adaptive AI governance in healthcare?
Adaptive governance refers to oversight frameworks designed to monitor AI performance continuously rather than through periodic reviews, and to respond to model drift, algorithm updates, and emerging safety signals in near real time, maintaining alignment between AI capabilities and organizational policy.
References
U.S. Department of Health and Human Services. HHS Announces Request for Information to Harness Artificial Intelligence in Clinical Care. December 2025. hhs.gov/press-room/hhs-ai-rfi.html
HealthIT.gov. HHS Wants Your Ideas to Accelerate AI in Clinical Care. December 18, 2025. healthit.gov/buzz-blog/ai-ml/hhs-wants-your-ideas-to-accelerate-ai-in-clinical-care
American Hospital Association. AHA Response to HHS RFI on Accelerating AI in Health Care. February 23, 2026. aha.org/2026-02-23-aha-response-hhs-rfi-ai-health-care
Cohen, D. Effect of the Exclusion of Behavioral Health from Health Information Technology (HIT) Legislation on the Future of Integrated Health Care. Journal of Behavioral Health Services & Research, 42(4), 534–539. 2015. doi:10.1007/s11414-014-9407-x. PMID: 24807647. pubmed.ncbi.nlm.nih.gov/24807647
Budhu, J.A. et al. Health Equity Considerations in the Age of Artificial Intelligence. Neurology, 105(12), e214356. November 2025. doi:10.1212/WNL.0000000000214356. PMC12636768. pmc.ncbi.nlm.nih.gov/articles/PMC12636768
Advancing Healthcare AI Governance. npj Digital Medicine. February 2026. doi:10.1038/s41746-026-02418-7. nature.com/articles/s41746-026-02418-7

