This policy establishes MD Ally's framework for responsible AI deployment in support of patient safety, operational reliability, and data protection.
PURPOSE
MD Ally uses artificial intelligence technologies to support operational workflows that improve documentation, coordination of care, and service efficiency while maintaining physician oversight and human decision-making.
The purpose of this policy is to describe the principles and governance practices that guide MD Ally’s use of AI systems within healthcare and public safety telehealth environments. The organization recognizes that AI technologies must be implemented carefully in settings that involve patient care, emergency services, and sensitive health information.
AI tools are designed to support human teams rather than replace them. Clinical judgment, care coordination, and escalation decisions remain under the supervision of licensed clinicians and trained operational staff. This policy provides transparency to partners, patients, and stakeholders about how AI technologies may be used within MD Ally's services.
SCOPE
This policy applies to artificial intelligence systems used within MD Ally platforms, operational workflows, and supporting technologies.
It governs how AI capabilities are evaluated, introduced, monitored, and managed across the organization. This includes systems that assist with documentation, workflow automation, information organization, or operational support functions used in connection with MD Ally services.
The policy applies to all MD Ally employees, contractors, and authorized personnel who develop, configure, manage, or interact with AI-enabled systems.
It also applies to third-party AI technologies that may be integrated into MD Ally platforms or used to support internal workflows. When external technologies are used, they are evaluated to confirm that they operate within MD Ally’s privacy, security, and compliance requirements.
AI USE PRINCIPLES
MD Ally applies the following core principles when evaluating and deploying AI technologies.
Human Oversight
AI systems are designed to support the work of clinicians, care coordinators, and operational staff. Human professionals remain responsible for interpreting information, interacting with patients, and making operational or clinical decisions.
Patient Safety
AI capabilities are evaluated with patient safety as a primary consideration. Tools are introduced in ways intended to support safe workflows within healthcare and public safety environments.
Transparency
MD Ally aims to communicate clearly with partners and stakeholders about how AI systems support operational workflows. Documentation and governance practices help describe how these technologies function within the organization’s services.
Privacy Protection
AI systems operate within MD Ally’s existing privacy and security framework. This includes compliance with HIPAA requirements and internal policies governing the handling of protected health information.
Responsible Deployment
AI technologies are introduced through structured evaluation and testing processes. These processes are designed to assess operational impact, system reliability, and compatibility with existing clinical and operational workflows.
APPROVED AI USE CASES
AI technologies may be used within MD Ally services to support operational and administrative functions that assist care teams.
These tools help streamline routine tasks so clinicians and care coordinators can focus on patient care and service coordination.
Supported AI-assisted activities may include:
• assisting with intake and structured information collection
• organizing encounter information during telehealth interactions
• generating summaries of calls or clinical documentation
• assisting with documentation workflows and note preparation
• supporting care coordination and follow-up activities
• identifying patterns or trends that support quality assurance review
AI-generated outputs are intended to support staff workflows. Final decisions regarding documentation, escalation, and patient care remain under human supervision.
CLINICAL GOVERNANCE
Clinical decision-making remains under physician authority within MD Ally services.
AI technologies may assist with organizing information collected during encounters or summarizing documentation. However, the assessment of patient symptoms, determination of appropriate care pathways, and clinical recommendations remain the responsibility of licensed clinicians.
Clinicians review relevant information before documentation is finalized or clinical actions are taken. AI systems function as operational tools that help organize information rather than systems that independently determine clinical outcomes.
MD Ally maintains clinical governance practices that support oversight of patient encounters, escalation protocols, and quality review processes.
HUMAN-IN-THE-LOOP OPERATIONS
AI systems operate within MD Ally’s existing care team structure.
Care concierges, physicians, and operational staff supervise AI-supported workflows and remain responsible for patient engagement and service coordination.
Human staff may review, modify, or override AI-generated information when appropriate. Escalation decisions, patient communication, and coordination with EMS or healthcare providers remain under the direction of trained personnel.
This human-in-the-loop model ensures that AI functions as a support tool while preserving professional judgment and operational oversight.
DATA PRIVACY AND SECURITY
AI systems used within MD Ally services operate within the organization’s established privacy and security framework.
Patient information processed through MD Ally systems is protected through security controls designed to support HIPAA compliance and safeguard sensitive healthcare data.
Security protections may include:
• role-based access controls that restrict system access to authorized personnel
• authentication and identity verification mechanisms for system access
• encrypted transmission of data across networks where appropriate
• secure infrastructure environments for system hosting and data storage
• monitoring and logging of system activity
MD Ally systems operate within a HIPAA-compliant, SOC 2-audited infrastructure environment designed to support secure handling of healthcare and public safety data.
RISK MANAGEMENT AND REVIEW
AI capabilities undergo evaluation before they are introduced into operational environments.
These evaluations consider factors such as system reliability, workflow compatibility, privacy considerations, and potential operational impacts. Pilot testing or staged implementation may be used to assess performance before broader deployment.
Risk management activities may also include reviewing vendor practices, evaluating system performance, and assessing potential safety considerations related to AI-supported workflows.
The goal of this process is to introduce AI capabilities in a controlled manner that supports safe and reliable system operation.
QUALITY ASSURANCE AND MONITORING
AI-supported workflows are subject to ongoing operational monitoring and quality review processes.
These oversight activities help confirm that systems are functioning as intended and supporting clinical and operational teams effectively.
Monitoring activities may include:
• review of AI-assisted encounters and documentation
• assessment of documentation accuracy and completeness
• monitoring of staff overrides or escalation events
• evaluation of operational performance trends
Insights from quality assurance activities may inform workflow adjustments, system updates, or additional training for staff.
VENDOR AND THIRD-PARTY CONTROLS
When AI technologies are provided by external vendors, MD Ally evaluates vendor practices related to privacy, security, and compliance.
Third-party providers that may access or process protected health information are required to meet applicable HIPAA requirements. Where required, contractual agreements such as business associate agreements establish privacy and security responsibilities for vendors handling sensitive data.
MD Ally may also review vendor security practices, infrastructure protections, and compliance certifications when selecting technologies used within the platform.
POLICY GOVERNANCE
MD Ally maintains internal governance processes that support oversight of AI technologies used within its services.
Leadership responsible for compliance, information security, and clinical oversight may participate in reviewing AI capabilities and related operational practices. These governance activities help ensure that AI systems align with MD Ally’s safety, security, and operational standards.
Governance processes may include policy review, system monitoring, and coordination across operational, clinical, and security teams.
POLICY REVIEW
This policy may be reviewed periodically to reflect changes in technology, regulatory guidance, or operational practices.
Updates may occur when:
• laws or regulations related to AI or healthcare privacy evolve
• MD Ally introduces new AI capabilities or operational workflows
• operational reviews identify opportunities for improvement
Periodic review helps maintain alignment between technology deployment and the organization’s commitments to safety, privacy, and responsible system management.
ADDITIONAL INFORMATION
Questions regarding MD Ally’s use of artificial intelligence technologies may be directed to: