by Danny Tobey, Ashley Carr, and Michael Atleson

From left to right: Danny Tobey, Ashley Carr, and Mike Atleson (photos courtesy of DLA Piper)
A growing number of state laws require companies deploying chatbots for one-to-one consumer interactions to disclose that the users are not talking with humans.
The laws all differ, however, in terms of (1) the commercial context in which they apply and (2) the details of when and how a disclosure must be made.
In addition, a first-of-its-kind law in New York requires advertisers to disclose the use of “synthetic performers” in traditional or other static advertising.
With state legislatures considering a large number of AI-related bills, this variable landscape of AI disclosure laws is set to become even more crowded and harder to navigate over time.
Further, as these new state laws come into effect, the Federal Trade Commission (FTC) Act remains constant and will generally require companies to disclose AI use in order to avoid consumer deception.
The federal standard is limited to the commercial context (e.g., the sale of goods or services) and would apply only if the chatbot's presence is unexpected and material – that is, likely to affect a consumer's choice of or conduct regarding a product. State consumer protection laws barring deceptive commercial practices would apply in the same way. This baseline standard for disclosure will remain, regardless of whether AI-specific state laws apply or remain on the books in the wake of debates over federal preemption.
That said, current AI-specific state disclosure laws include requirements that are both narrower and broader than the general deception standard. Some apply in more limited circumstances and/or apply without regard to consumer expectation or materiality.
A Venn diagram of these state laws and the general deception standard would be a mess of circles that are far from concentric. Aside from the New York law, though, the state laws all appear to apply only to one-to-one interactions between chatbots and consumers, rather than static advertisements distributed through traditional channels.
Below we set out some of the basics and variations of current state AI disclosure laws involving consumer interactions, roughly from broader to narrower coverage.
Maine
Any person using a bot or other computer technology to engage in trade or commerce with a consumer must make a clear and conspicuous disclosure that the consumer is not engaging with a human, if use of the bot may mislead consumers into thinking it is a human. Note that plaintiffs are not required to prove that any consumers were actually misled or suffered any injuries as a result of a violation.
New Jersey
Any person using a bot to communicate or interact with a person involving the sale or advertising of merchandise or real estate must disclose it clearly and conspicuously and at the outset of the interaction. The law thus applies only to sales or advertising, rather than all trade and commerce, and is limited to merchandise and real estate.
California
Any person using a bot to communicate or interact with another person in a commercial transaction with intent to mislead the other person about its artificial identity, for the purpose of knowingly deceiving the person about the content in order to incentivize the transaction, must make a clear and conspicuous disclosure. This law is transaction-based, and thus narrower in scope than the Maine and New Jersey laws. It is also narrower because it requires not only a specific intent to mislead but also an intent to mislead for a particular purpose and result – facts that may often be hard to prove.
Colorado
Per the Colorado AI Act, a deployer of a high-risk AI system must disclose its use for any high-risk consumer interactions, unless it is obvious to a reasonable person that the interaction is in fact with an AI system. Such interactions are those involving “consequential decisions” regarding education, employment, finance, essential government services, healthcare, housing, insurance, or legal services. The law does not specify when and how the disclosure must be made.
Utah
Utah has two relevant laws that each feature various elements of the laws mentioned above. First, a seller using generative AI to interact with individuals in consumer transactions must disclose the use of AI if the consumer asks, unless the seller has already disclosed its use clearly and conspicuously. Second, state-regulated professionals using generative AI in high-risk interactions with people receiving professional services must make a disclosure prominently and at the outset of the interaction.
As discussed in prior DLA Piper client alerts, a few states have also imposed AI-related consumer disclosure requirements outside the context of goods and services. In particular, New York and California have laws requiring certain disclosures for companion chatbots, and Utah has a law requiring disclosures for mental health chatbots. These laws stem less from concerns about commercial deception and more from concerns about other potential harm arising from interactions with such services.
Finally, New York's groundbreaking law, S.8420-A/A.8887-B, requires commercial advertisers to disclose conspicuously to consumers when a “synthetic performer” is used in a visual or audiovisual advertisement.
The definition of “synthetic performer” refers to “a digitally created asset created, reproduced, or modified by computer, using generative artificial intelligence or a software algorithm, that is intended to create the impression that the asset is engaging in an audiovisual and/or visual performance of a human performer who is not recognizable as any identifiable natural performer.”
The law specifically excludes “audio advertisements” and, while not explicitly excluded, does not appear to cover the use of real performers enhanced by AI tools. It also would not cover deepfakes of real performers the way that Tennessee's Ensuring Likeness Voice and Image Security (ELVIS) Act does. Instead, the law appears limited to situations in which the advertiser's intent is to have people think that the digital asset is just a random actor (e.g., not an identifiable celebrity) performing in the advertisement.
Despite its limited application to certain types of ads, generated content, and advertiser intent, S.8420-A/A.8887-B is the only law in the United States that specifically requires a disclosure for the use of AI-generated content in advertisements.
Further, when the law does apply, it does so beyond the general deception standard in the FTC Act and state consumer protection laws. For the latter, the relevant enforcement agency would need to show that, through the use of the synthetic performer, the advertiser is making a material misrepresentation that is likely to mislead reasonable consumers. The New York law dispenses with the need to make that showing. However, it adds an intent requirement that consumer protection enforcers generally do not have to satisfy when holding someone liable for deceptive commercial conduct.
More state laws requiring AI-related disclosures are likely on the way. If past is prologue, new laws in this area will likely contain elements of, but not be identical to, those that other states have already passed. In the meantime, the tug of war between state and federal AI regulation will continue, and state attorneys general may use broader consumer protection laws in AI-related advertising contexts.
Especially under these state-specific and fact-dependent circumstances, and until the dust settles, advertisers are encouraged to err on the side of transparency, letting consumers know when they are talking to a chatbot or seeing a human-like avatar in an advertisement. Advertisers are also encouraged to ensure that consumers will see and understand any such disclosure.
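By way of illustration only – and not as legal guidance or a reading of any particular statute – the sketch below shows one way a chatbot deployer might operationalize that recommendation in code: every session opens with a conspicuous statement that the user is talking to an AI, and direct "are you a bot?" questions are answered with the same statement (the ask-and-disclose behavior the Utah law contemplates). All names, the disclosure wording, and the keyword check are hypothetical assumptions for this example.

```python
import re

# Hypothetical disclosure text; real wording should be set with counsel.
AI_DISCLOSURE = (
    "You are chatting with an automated AI assistant, not a human. "
    "A human representative is available on request."
)

# Simple keyword check for bot-or-not questions; a real deployment would
# likely use a more robust classifier.
BOT_QUESTION = re.compile(r"\bare you (a |an )?(bot|robot|ai|human)\b", re.IGNORECASE)

def start_session() -> list[str]:
    """Open every conversation with a clear, conspicuous AI disclosure."""
    return [AI_DISCLOSURE]

def respond(user_message: str, generate_reply) -> str:
    """Answer bot-or-not questions directly; otherwise defer to the model."""
    if BOT_QUESTION.search(user_message):
        return AI_DISCLOSURE
    return generate_reply(user_message)

if __name__ == "__main__":
    transcript = start_session()
    transcript.append(respond("Are you a human?", lambda msg: "(model reply)"))
    print("\n".join(transcript))
```

In practice, the wording, timing, and prominence of any such disclosure would need to track the specific state requirements described above.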
Even when the bot-or-not question is not at issue, advertisers should remain vigilant about whether other uses of AI, such as when it is used to depict product use and results, may deceive consumers.
Danny Tobey and Ashley Carr are Partners, and Michael Atleson is Of Counsel at DLA Piper. Brian Boyle is a Partner and James Stewart is an Associate at DLA Piper who also contributed to this article. This post first appeared as a client alert for the firm.
The views, opinions and positions expressed within all posts are those of the author(s) alone and do not represent those of the Program on Corporate Compliance and Enforcement (PCCE) or of the New York University School of Law. PCCE makes no representations as to the accuracy, completeness and validity of any statements made on this site and will not be liable for any errors, omissions or representations. The copyright of this content belongs to the author(s) and any liability with regard to infringement of intellectual property rights remains with the author(s).