by Henry Fina and Matthew P. Suzor

Left to right: Henry Fina and Matthew P. Suzor (photos courtesy of Miller Shah LLP)
The explosion of the Artificial Intelligence market has drawn capital investment from virtually every corner of the economy. The federal government is no exception. Between FY 2022 and 2023, the potential value of federal AI contracts increased from roughly $356 million to $4.6 billion. In July 2025, the Trump Administration released its AI Action Plan, outlining government initiatives to aggressively deploy AI in the health and defense sectors. Accordingly, the Department of Health and Human Services (HHS) and Department of Defense (DoD) have increased funding allocations toward AI contracts. As contractors compete for increasingly valuable awards with limited oversight, the potential for misrepresented capabilities and compliance gaps grows. While the industry's strong tailwinds may translate into lucrative opportunities for investors and entrepreneurs, for qui tam litigators, the expansion of publicly contracted AI services signals a new frontier for False Claims Act (FCA) enforcement. In turn, the FCA will be essential in ensuring accountability as federal agencies gradually adjust oversight mechanisms to address the inconsistent reliability and opacity of AI models.
Most FCA cases brought against AI contractors would likely stem from false or fraudulent representations of a model's capabilities. Misrepresentations may include inflated claims about the accuracy of a model's outputs, concealment of bias or synthetic training data, or insufficient data privacy and security standards. Whether an AI model is used for surveillance and intelligence by the DoD or by the Centers for Medicare and Medicaid Services (CMS) to review claims, the concerns extend beyond the technical effectiveness of AI outputs.
There are deeper concerns about the accuracy, accountability, and integrity of data-driven decision-making. For example, if an AI contractor for the DoD fails to maintain the integrity of its program and allows the model to use doctored or monitored data, the contractor could be liable under the FCA for false certifications of cybersecurity and risk management compliance. Similarly, an HHS contractor could be liable if it misrepresents the accuracy of its model or conceals avenues for error or bias that materially affect CMS payment decisions, such as AI recommending or justifying inappropriate diagnostic codes.
While both examples mirror prior FCA cases involving defense and healthcare fraud, they also demonstrate a growing tension in FCA litigation between technological complexity and legal accountability. AI models produce outputs through data analytics performed on inputs that government employees provide. Since no tangible goods are exchanged, the distinction between honest mistakes and actionable fraud begins to blur. In AI contracts, harm may manifest in subtle or delayed ways. Models might return biased predictions or provide unreliable analytics that misinform decision-making. The downstream consequences of a model's flaws may be harder to identify. Since human decision-makers use AI outputs to guide their actions, rather than dictate them, defendants might argue that their own judgment, rather than the AI model's flaws, caused the harm.
Courts will soon have to define falsity in AI contexts. In prior FCA cases, falsity involved factors like misrepresented deliverables, inflated billing, or inadequate compliance. AI complicates defining the falsity of a claim in FCA cases. Relators may also face new challenges satisfying the scienter requirement, as a contractor's knowledge, deliberate ignorance, or reckless disregard for the falsity of its claim becomes harder to determine given the autonomous nature of AI.
The autonomy of AI systems will make determining a defendant's intent in FCA cases more complex. The opacity of AI models further complicates the issue. Many AI models are “black box” systems, meaning users, and often creators, cannot fully observe the internal workings of the AI's data analysis or its reasoning for a given output. Where FCA cases have traditionally analyzed intent through a company's internal communications or its employees' actions, the layered corporate structures and technical teams responsible for developing and maintaining a model may not fully know how a deployed model evolves or produces results. Contractors could then reasonably argue that they were unaware of a model's bias or its false outputs because those flaws were emergent or the product of algorithmic drift rather than human influence.
Discovery in FCA cases involving AI will be exceptionally complex. To capture relevant information, AI contractors will need to supply the model architecture, training data, and records of inputs and outputs, along with all other relevant materials. Since AI models retrain and adjust to new data, the model may no longer exist, when litigation arises, in the form it took during the relevant period of the case. As a result, preservation becomes essential to the relator's ability to prove what was false at the time of contracting and payment. Disputes invoking trade secret and privacy protections for particular data sets will only further complicate the process. These disputes will affect the scienter analysis, as relators may have to rely on internal communications and requirements to determine whether a defendant “knew” about flaws in a model's performance.
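To make the preservation point concrete, the minimal sketch below shows one way a contractor might fingerprint model artifacts and log inference inputs and outputs so that a system's state during the contract period could later be reconstructed in discovery. It is a hypothetical Python illustration only; the file paths, version tags, and record format are our own assumptions, not requirements drawn from any contract, agency guidance, or case law.

```python
import datetime
import hashlib
import json
import pathlib

# Hypothetical archive location; in practice this would be immutable, access-controlled storage.
ARCHIVE_DIR = pathlib.Path("preservation_archive")
ARCHIVE_DIR.mkdir(exist_ok=True)


def sha256_of_file(path: pathlib.Path) -> str:
    """Fingerprint an artifact (model weights, training data) so later copies can be verified."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def snapshot_model_version(weights_path: str, training_data_path: str, version_tag: str) -> dict:
    """Record point-in-time metadata for a deployed model version."""
    record = {
        "version_tag": version_tag,
        "timestamp_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "weights_sha256": sha256_of_file(pathlib.Path(weights_path)),
        "training_data_sha256": sha256_of_file(pathlib.Path(training_data_path)),
    }
    (ARCHIVE_DIR / f"model_snapshot_{version_tag}.json").write_text(json.dumps(record, indent=2))
    return record


def log_inference(version_tag: str, inputs: dict, outputs: dict) -> None:
    """Append each input/output pair to a dated log, tied to the model version that produced it."""
    entry = {
        "version_tag": version_tag,
        "timestamp_utc": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "inputs": inputs,
        "outputs": outputs,
    }
    log_path = ARCHIVE_DIR / f"inference_log_{datetime.date.today().isoformat()}.jsonl"
    with open(log_path, "a") as f:
        f.write(json.dumps(entry) + "\n")
```

Recording cryptographic digests rather than just file names matters here: even after a model is retrained, the stored hashes would let the parties verify whether an artifact produced in discovery matches what was actually deployed during the contract period.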
Federal agencies accept a degree of uncertainty in AI performance while investing in the emergent technology. This uncertainty complicates the materiality element of FCA cases, since a claim is material only if the government would have refused payment had it known of the misrepresentation. Applying the precedent set by Universal Health Services v. Escobar, courts will struggle to determine whether flaws in an AI model meet the threshold for material misrepresentation of a good or service, as the government may knowingly accept such a risk. A contractor's false claims about an AI model's function alone may not satisfy the FCA's materiality requirement if the government implicitly consented to a measure of inaccuracy in the system's outputs.
Federal agencies will need to strengthen contractual oversight and establish clear mechanisms for monitoring the use of AI. As the government develops the relevant policies over time, FCA litigation appears poised to be the proving ground for how the legal system will handle algorithmic accountability.
Henry Fina is a Project Analyst and Matthew P. Suzor is an Associate at Miller Shah LLP.
The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regard to infringement of intellectual property rights remains with the author(s).