Virginia is poised to become the second state, after Colorado, to pass comprehensive legislation aimed at curbing discrimination linked to artificial intelligence (AI) decision-making, though the two states have taken distinct regulatory approaches to this evolving challenge.
On February 12, 2025, the Virginia State Senate approved the High-Risk Artificial Intelligence Developer and Deployer Act (H.B. 2094). If Governor Glenn Youngkin signs it into law, the measure will regulate AI usage in multiple sectors, including its role in determining "access to employment."
The bill now awaits the governor's signature. If enacted, it will take effect on July 1, 2026, introducing new compliance requirements for companies deploying high-risk AI systems that impact Virginia "consumers," including job applicants, according to National Law Review.
Defining High-Risk AI
Virginia's legislation imposes a duty of reasonable care on businesses that use automated decision-making systems in key regulated areas, such as employment, financial services, healthcare, and other high-stakes industries.
This framework applies only to "high-risk" AI systems that are "specifically intended to autonomously" make or significantly influence decisions. The law's language limits its scope compared to Colorado's broader approach.
A crucial distinction in Virginia's bill is that AI must be the "principal basis" of a decision for anti-discrimination provisions to apply. This creates a higher threshold than Colorado's "substantial factor" standard, meaning Virginia's law will apply only in cases where AI plays a decisive role in an outcome.
Who Counts as a "Consumer"?
A primary objective of H.B. 2094 is to protect consumers from algorithmic discrimination, especially when automated systems make significant decisions about individuals.
The law defines a "consumer" as a natural person residing in Virginia who acts in an individual or household context. However, similar to the Virginia Consumer Data Protection Act, H.B. 2094 excludes individuals operating in a commercial or employment context.
This exclusion creates a potential contradiction: how can "access to employment" be classified as a consequential decision, while employees are excluded from being considered consumers?
A logical interpretation is that job applicants do not act as employees but as private individuals seeking employment for personal reasons. Thus, if an AI-powered hiring tool is used to evaluate a Virginia resident's job application, a strict reading of the legislation suggests that the applicant still qualifies as a consumer under the law.
However, once hired, the individual's interactions with AI tools in the workplace, such as performance monitoring or productivity tracking, fall under the employment context. In such cases, the employee may no longer be considered a consumer under H.B. 2094.
High-Risk AI Systems & Key Decision-Making
The law strictly regulates AI systems classified as "high-risk," meaning those that make autonomous or heavily weighted decisions in critical areas such as:
- Educational admissions and opportunities
- Loan approvals and financial services
- Housing and insurance determinations
- Hiring and employment-related decisions
The intent behind these provisions is to combat algorithmic discrimination, which refers to illegal disparate treatment or disproportionate negative outcomes caused by AI decision-making based on protected characteristics such as race, sex, religion, or disability.
Even if an AI system is not intentionally designed to discriminate, its use alone may trigger liability if it produces biased results.
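As a concrete illustration of how a biased result can be surfaced, the sketch below applies the EEOC's four-fifths rule, a common disparate-impact heuristic, to hypothetical hiring numbers. H.B. 2094 does not mandate this particular test; it is shown only to make the idea of a disproportionate outcome tangible.

```python
# A minimal sketch of the "four-fifths rule," a common screening heuristic
# for disparate impact. Not required by H.B. 2094; illustration only.

def selection_rate(selected: int, applicants: int) -> float:
    """Fraction of a group's applicants who received a favorable outcome."""
    return selected / applicants

# Hypothetical hiring outcomes for two applicant groups.
rate_a = selection_rate(selected=60, applicants=100)  # 0.60
rate_b = selection_rate(selected=30, applicants=100)  # 0.30

# A group's selection rate below 80% of the highest group's rate is
# commonly treated as evidence of disparate impact.
ratio = rate_b / max(rate_a, rate_b)  # 0.50
if ratio < 0.8:
    print(f"Adverse impact ratio {ratio:.2f} is below the 0.80 threshold")
```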
Exemptions & Exclusions
H.B. 2094 specifically exempts 19 types of technologies from being classified as high-risk AI systems. One notable carve-out is "anti-fraud technology that does not use facial recognition." This is particularly relevant as fraudulent job applicants become more common in remote hiring, prompting companies to adopt enhanced security measures.
Other exclusions include:
- Cybersecurity tools, anti-malware, and antivirus software
- Administrative AI tools used for efficiency, security, or quality measurement
- Spreadsheet software and calculators (so no, pivot tables aren't going anywhere)
Compliance Obligations for AI Developers
Entities that create or substantially modify high-risk AI systems must adhere to a duty of reasonable care to protect consumers from known or foreseeable discriminatory harms.
Before making a high-risk AI system available to a deployer (an entity using AI in Virginia), developers must disclose key information, including:
- Intended uses of the system
- Known limitations and mitigation measures
- Steps taken to prevent algorithmic discrimination
- Guidance to assist deployers in monitoring AI behavior
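One way to keep these disclosures organized and auditable is a structured record. The minimal Python sketch below is hypothetical: the statute prescribes what must be disclosed, not a schema or field names.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical record of the developer disclosures listed above. Field names
# are illustrative; H.B. 2094 prescribes the content, not a format.
@dataclass
class DeveloperDisclosure:
    system_name: str
    intended_uses: list[str]
    known_limitations: list[str]          # including mitigation measures
    discrimination_safeguards: list[str]  # steps taken to prevent algorithmic bias
    deployer_monitoring_guidance: str     # how deployers should watch the system
    last_updated: date                    # refresh within 90 days of a substantial modification

disclosure = DeveloperDisclosure(
    system_name="ResumeRanker (hypothetical)",
    intended_uses=["initial screening of job applications"],
    known_limitations=["reduced accuracy on non-traditional career histories"],
    discrimination_safeguards=["pre-release disparate-impact audit"],
    deployer_monitoring_guidance="Review selection rates by group each quarter.",
    last_updated=date(2026, 7, 1),
)
```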
If substantial modifications affecting risk are made, developers must update these disclosures within 90 days.
Additionally, developers are required to maintain extensive documentation on their high-risk AI systems, including impact assessments and risk management policies.
Virginia's bill also appears to target deepfake technologies. If a developer uses generative AI to create synthetic content, such as AI-generated audio, video, or images, the law requires detectable markers to inform consumers that the content is AI-generated. However, the law makes exceptions for creative works and artistic expressions, ensuring that legitimate satire and fiction remain unaffected.
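The bill does not spell out what a detectable marker must look like. Purely as an illustration, the sketch below writes a JSON provenance sidecar alongside a generated media file; the format is an assumption, and real systems might instead embed a watermark or adopt a provenance standard such as C2PA.

```python
import hashlib
import json
from datetime import datetime, timezone

# One possible "detectable marker": a provenance sidecar file written next to
# a generated media file. This format is invented for illustration; the
# statute does not prescribe one.
def write_provenance_marker(media_path: str, generator_name: str) -> str:
    with open(media_path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    marker = {
        "ai_generated": True,
        "generator": generator_name,
        "content_sha256": digest,  # ties the marker to the exact file contents
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }
    sidecar_path = media_path + ".provenance.json"
    with open(sidecar_path, "w") as f:
        json.dump(marker, f, indent=2)
    return sidecar_path
```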
Compliance Obligations for AI Deployers
Companies using high-risk AI systems (deployers) must also uphold a duty of reasonable care to prevent algorithmic discrimination.
H.B. 2094 requires deployers to develop and maintain a risk management policy and program tailored to the high-risk AI system in use. These policies must align with established AI governance frameworks, such as:
- The National Institute of Standards and Technology AI Risk Management Framework (NIST AI RMF)
- ISO/IEC 42001 AI standards
Before deploying a high-risk AI system, companies must conduct impact assessments that examine:
- The systemās purpose
- Potential discriminatory risks
- Measures taken to mitigate bias
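As with the developer disclosure record above, keeping the assessment as a structured artifact makes it easier to produce on demand. The schema below is hypothetical, sketched under the assumption that the three items above are the minimum content.

```python
from dataclasses import dataclass
from datetime import date

# Hypothetical pre-deployment impact assessment covering the three items
# above; as with the developer disclosure sketch, the schema is invented.
@dataclass
class ImpactAssessment:
    system_purpose: str
    discrimination_risks: list[str]
    mitigations: list[str]
    governance_framework: str  # e.g., "NIST AI RMF" or "ISO/IEC 42001"
    assessed_on: date

assessment = ImpactAssessment(
    system_purpose="rank loan applications for human review",
    discrimination_risks=["proxy features correlated with protected characteristics"],
    mitigations=["feature audit", "quarterly selection-rate monitoring"],
    governance_framework="NIST AI RMF",
    assessed_on=date(2026, 6, 1),
)
```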
If an AI system makes an adverse decision (such as denying a loan or rejecting a job applicant), the company must disclose:
- The primary reasons for the decision
- Whether AI was the determining factor
- A process for individuals to appeal or correct errors
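A minimal sketch of such a notice appears below; the wording and the appeal URL are placeholders, since the law specifies what must be conveyed rather than how to phrase it.

```python
# Hypothetical adverse-decision notice covering the three disclosure items
# above. The wording, function name, and URL are illustrative, not statutory.
def adverse_decision_notice(reasons: list[str],
                            ai_was_principal_basis: bool,
                            appeal_url: str) -> str:
    ai_role = ("An AI system was the principal basis for this decision."
               if ai_was_principal_basis
               else "An AI system informed, but did not determine, this decision.")
    return "\n".join([
        "Your application was not successful.",
        "Primary reasons: " + "; ".join(reasons),
        ai_role,
        f"To appeal or correct an error in your data, visit: {appeal_url}",
    ])

print(adverse_decision_notice(
    reasons=["required certification not found in application"],
    ai_was_principal_basis=True,
    appeal_url="https://example.com/appeals",  # placeholder
))
```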
Additionally, deployers must notify consumers when AI is used to make consequential decisions about them.
Enforcement & Penalties
Only the Virginia Attorney General has the authority to enforce H.B. 2094. Companies found in violation could face:
- Civil investigative demands
- Injunctions to halt unlawful AI usage
- Financial penalties:
  - Non-willful violations: fines of up to $1,000 per infraction, plus legal fees
  - Willful violations: fines of up to $10,000 per infraction, plus legal fees
Each violation is counted separately, meaning penalties could accumulate quickly if AI-driven decisions affect multiple individuals.
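To see how fast that accumulation bites, here is a back-of-the-envelope estimate using the statutory maximums; the assumption that each affected individual counts as one violation is made here for illustration only.

```python
# Back-of-the-envelope exposure estimate using the statutory maximums above.
# Treating each affected individual as one violation is an assumption; how
# violations are actually counted would be settled in enforcement.
NON_WILLFUL_MAX = 1_000   # dollars per violation
WILLFUL_MAX = 10_000      # dollars per violation

affected_individuals = 500  # hypothetical number of impacted applicants
print(f"Non-willful ceiling: ${affected_individuals * NON_WILLFUL_MAX:,}")  # $500,000
print(f"Willful ceiling:     ${affected_individuals * WILLFUL_MAX:,}")      # $5,000,000
```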
What's Next?
If signed into law, H.B. 2094 will go into effect on July 1, 2026. AI developers and deployers should start preparing now by aligning their practices with recognized standards like the NIST AI RMF and ISO/IEC 42001.
Virginia's High-Risk Artificial Intelligence Developer and Deployer Act marks a significant step in state-level AI governance, setting a precedent for future legislation. Its emphasis on documentation, transparency, and fairness reflects growing demands for responsible AI practices.
Both AI developers and businesses using high-risk AI must stay proactive in risk management, compliance, and consumer disclosure to avoid legal pitfalls, and to help shape AI's future in an ethical and accountable manner.