The Legal Guide to AI Product Photography in 2026: Copyright, Likeness, and Disclosure

Quick Answer

Using AI product photography is legal for ecommerce in 2026, but the operative risk is not who owns the image. It is whether the buyer is misled and whether the people in the frame consented. Brands selling into the US, EU, or on Amazon, Etsy, or Meta now face overlapping disclosure rules, with the EU AI Act Article 50 obligations applying from August 2, 2026 and a New York synthetic-performer law taking effect in June 2026. A defensible workflow uses real product photographs as the anchor, controls model identity, discloses AI use where required, and keeps an audit trail of how each image was made. Tools like Nightjar are built around that workflow.

This Is Not Legal Advice

This article is general information about the law as it applies to AI product photography in 2026. It is not legal advice. Laws are changing quickly, especially around AI disclosure and likeness rights. For decisions that affect your business, talk to a lawyer licensed in your jurisdiction.

The Real Risk for Ecommerce Brands Is Not Copyright

Most "is this legal" coverage of AI product photography opens with copyright. That is the wrong frame for ecommerce. Copyright on a single product detail page image was never the asset that mattered. The product is the asset. The brand is the asset. The image is a marketing artifact that gets refreshed every season anyway.

The risks that actually break a brand week to week look different. A marketplace suppresses a listing because the image misrepresents the product. An ad network rejects creative because the AI label was missing. A regulator opens a deceptive-advertising matter because the image suggested features the product does not have. A person sees their face in an ad campaign they never agreed to and files a right-of-publicity claim.

That is the real surface. Sorted in the order an ecommerce founder will run into them, the four questions are:

  1. Does the buyer understand what they are looking at? (disclosure)
  2. Did the people in the frame consent? (likeness)
  3. Does the image accurately represent what ships? (deception and platform policy)
  4. Who owns it? (copyright, last)

AI product photography is legal in the US, EU, UK, and Canada in 2026. The constraints are about disclosure, likeness consent, and accurate product representation, not about whether AI may be used at all.

The rest of this guide follows that ordering. Disclosure first, then likeness, then platform rules, then copyright. Two short tables and three operational checklists are stitched in along the way. Skip to whichever pillar is blocking you.

What Is Changing This Quarter (May to August 2026)

Two enforcement deadlines land in the next ninety days, and most product teams have not put them on a roadmap yet. A handful of other 2026 dates are already live and worth checking against your current setup.

June 9, 2026: New York synthetic-performer disclosure

New York General Business Law Section 396-b, as amended by S.8420-A / A.8887-B (signed by Governor Hochul on December 11, 2025), takes effect on June 9, 2026. It requires conspicuous disclosure when a "synthetic performer," defined as a digitally created asset using generative AI intended to create the impression of a human performer not recognizable as an identifiable real person, appears in advertising distributed in New York. The penalty is USD 1,000 for a first violation and USD 5,000 for subsequent violations, on a per-advertisement basis. The statute does not mandate exact wording, but the disclosure must be conspicuous, prominent, and unavoidable. See the Debevoise summary and the Skadden analysis for the full reading.

August 2, 2026: EU AI Act Article 50

The transparency obligations in Article 50 of the EU AI Act become enforceable on August 2, 2026. Providers of AI systems that generate synthetic image, audio, video, or text content must mark outputs in a machine-readable format so the content is detectable as artificially generated. Deployers, the businesses using the AI to produce content, must disclose to users when the content qualifies as a deepfake, defined broadly enough to capture AI imagery that could be mistaken for a conventional photograph. Fines can reach EUR 15 million or 3% of global annual turnover, whichever is higher.

The European Commission published a second draft of the Code of Practice on marking and labelling AI-generated content on March 5, 2026, and a final draft is expected in June 2026. The Code anticipates a multi-layer marking strategy: digitally signed metadata in the file, imperceptible watermarking embedded in the content, and logging as a fallback. See the HSF Kramer note for a working summary.
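To make the multi-layer idea concrete, here is a minimal sketch of the "signed metadata plus logging" layers, using only the Python standard library. This is illustrative, not C2PA-compliant, and not a prescribed format: the field names, the HMAC scheme, and the key handling are all assumptions for the sketch.

```python
import hashlib
import hmac
import json

# Illustrative signing key; in practice this would be a managed secret,
# not a constant in source code.
SIGNING_KEY = b"replace-with-a-managed-secret"

def provenance_record(image_bytes: bytes, tool: str, tool_version: str) -> dict:
    """Build a signed, machine-readable provenance record for one output."""
    payload = {
        "ai_generated": True,
        "tool": tool,
        "tool_version": tool_version,
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }
    body = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return payload

def verify(record: dict) -> bool:
    """Recompute the signature over the unsigned fields and compare."""
    unsigned = {k: v for k, v in record.items() if k != "signature"}
    body = json.dumps(unsigned, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, record["signature"])
```

A record like this serves the logging fallback the draft Code describes; it does not replace in-file marking or watermarking.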

Other 2026 dates already live

  • December 22, 2025: the FTC issued its first round of warning letters under the Consumer Review Rule, citing ten companies for fake or AI-generated reviews.
  • January 14, 2026: Etsy's updated Seller Policy on AI disclosure took effect.
  • March 2026: Meta's global ad-disclosure policy for AI content went live.
  • March 2, 2026: the Supreme Court denied certiorari in Thaler v. Perlmutter, leaving the human-authorship requirement settled in US copyright law.

Two AI-disclosure rules become enforceable in summer 2026: New York's synthetic-performer disclosure law on June 9, and the EU AI Act Article 50 transparency obligations on August 2.

1. Disclosure: When You Have to Tell the Buyer

Disclosure is the pillar that has changed most in the last twelve months and the one most likely to surface in a marketplace audit or a regulator letter. The rules stack rather than replace each other. A brand selling into both the US and the EU will sit under FTC deception standards, EU AI Act Article 50, and the rules of every platform it lists on.

The US baseline: FTC Section 5 and the Endorsement Guides

The FTC's Endorsement Guides (16 CFR Part 255), updated in 2023, govern when consumers must be told that something is a paid endorsement, a fabricated review, or otherwise not what it appears. Disclosures must be clear and conspicuous. Product photography is generally not an endorsement, so 16 CFR 255 does not directly require an "AI-generated" label on a PDP image.

Section 5 of the FTC Act is the broader rule and the one that bites: deceptive acts or practices in or affecting commerce. A product image that materially misrepresents what the buyer will receive (color, size, features, packaging, included accessories) is deceptive whether AI was used or not. The case where AI-specific disclosure is firmest is when the image features a person. If a synthetic AI model could read as a real customer, a real spokesperson, or a real testimonial, the analysis edges back into endorsement territory and a clear disclosure becomes the safer posture.

New York: synthetic-performer disclosure (June 2026)

The June 2026 statute is covered above. In short: any commercial advertisement distributed in New York that features a synthetic AI-generated person needs a conspicuous disclosure. Audio-only content, pure-translation use of AI, and marketing for expressive works where the synthetic performer regularly appears in the underlying work are out of scope.

EU AI Act Article 50 (August 2, 2026)

Two layers, two parties. The provider (the AI tool) must mark outputs in a machine-readable format. The deployer (the brand) must disclose deepfake-style content to users in a clear and distinguishable manner. For product imagery that depicts a real product but uses a synthetic model or a synthetic scene, the safer reading is that user-facing disclosure is required when the imagery could be mistaken for a conventional photograph.

Platform-mandated disclosure already in force

  • Meta Ads (March 2026): apply Meta's AI Content Label in Ads Manager when the creative is AI-generated, AI-modified beyond standard filters, uses synthetic voiceover, or uses generated video.
  • Etsy (January 14, 2026): tick the "I used AI-generative technology" checkbox, select "Designed by" instead of "Made by" when the primary visual is machine-generated, and state AI use in the description.
  • Amazon (2026 policy): AI-assisted edits like background replacement, color correction, and lighting adjustments are permitted; AI generation that misrepresents the product's physical characteristics is not. Substantial AI modifications should be disclosed.

Disclosure copy templates

Copy these, edit to fit, and route through your own counsel before publishing.

Product page footer (when AI is used substantially)
Some imagery on this page was created or enhanced using AI.
The product shown represents the item we ship.

Etsy listing description block
This listing's primary image was designed using AI-generative
technology. The product itself is hand-made / printed / sewn /
assembled by [shop name].

EU buyer-facing label (post Aug 2, 2026)
AI-generated image.

Synthetic model in NY-distributed advertising (post Jun 2026)
Featuring a synthetic performer. No real person depicted.

Meta ad (post Mar 2026)
Use Meta's "AI-generated" label in Ads Manager. No separate caption
disclosure required if the in-platform label is applied correctly.

For the platform-specific reading on Etsy and Shopify, the help-desk spoke on Etsy and Shopify AI disclosure covers the practical steps a seller needs to take in the listing form. The Etsy AI photo guide has more on how the "Designed by" attribution interacts with handmade and print-on-demand workflows.

2. Likeness: Whose Face and Body You Are Allowed to Use

This pillar is where ecommerce brands tend to be most exposed and least aware. Right of publicity is a state-by-state tort with real teeth, and the AI era has made the question of what counts as "identifiable" a lot more fluid.

US right of publicity

Right of publicity is a state-level cause of action protecting an individual's name, image, likeness, and (in Tennessee, New York, and California) voice and digital replica from unauthorized commercial use. Damages can include disgorgement, statutory penalties, and injunctive relief. An AI-generated person who is identifiable as a real individual can trigger a claim, even if the brand never intended to depict that person. Identifiability is the live question, not intent. See Blank Rome's overview for the doctrinal map.

A few state developments worth tracking:

  • Tennessee ELVIS Act (effective July 1, 2024): expands the state's right of publicity to expressly cover voice (including simulations) and creates a cause of action against distributors of services whose primary purpose is creating unauthorized facsimiles of a particular person's voice or likeness. See the Proskauer summary.
  • NY Fashion Workers Act: requires clear, conspicuous prior written consent before creating or using a model's digital AI avatar. Consent must specify scope, purpose, duration, and compensation. Routine retouching is excluded. See the Benesch analysis.
  • California AB 2602 / AB 1836: cover digital replicas of performers, alive and deceased. See the Fenwick summary.

EU: GDPR treats likeness as personal data

Under GDPR Article 4(1), personal data is any information relating to an identified or identifiable natural person. A photograph (real or synthetic) that allows identification of a living person is personal data. A purely synthetic face that is not used to identify a real individual is generally not biometric data. But if a synthetic image substantially replicates an identifiable real person, processing that image (training a model on it, storing it, publishing it) is processing of that person's personal data and requires a lawful basis, typically consent.

Model release forms in the AI era

Standard model release language ("all media now known or hereafter devised") is usually insufficient to authorize creation of a digital replica or AI-generated derivative. New York's Fashion Workers Act explicitly requires separate, scoped consent for AI use. For any new model contract, three clauses are worth adding:

  1. An explicit prohibition on AI-generated replicas or derivatives without separate written consent and separate compensation.
  2. A scope clause covering which uses, which categories, which geographies, and which durations are permitted.
  3. Audit and takedown rights so the model can flag and remove any AI use that drifts beyond the scope.

Most pre-2024 model contracts do not address AI use. Check the contract before reusing any frame to fine-tune or condition an AI model.

The lower-risk posture

Three tiers, ranked by exposure:

  • Lowest risk: a synthetic AI model that is not based on any specific real person, with a brand-owned identity used consistently across the catalog.
  • Medium risk: a custom synthetic model based on a real person, backed by written, scoped consent covering AI use, duration, geography, and category.
  • Highest risk: prompt-only generation with no model identity control, where every Generation rolls a new face that may by chance resemble a real person.

An AI-generated model that is identifiable as a real person can trigger a right-of-publicity claim, even if the brand never intended to depict that person. The defensible posture is a fixed, brand-owned synthetic model used consistently across the catalog, not a new face on every Generation.

3. Disclosure on Each Platform

The platforms have moved faster than the legislatures. Here is the per-platform decision a founder can act on this week.

| Platform | AI imagery permitted | Disclosure required | Where the disclosure goes | Effective |
| --- | --- | --- | --- | --- |
| Shopify (storefront) | Yes | Not platform-mandated; FTC deception rules still apply | Optional product page footer | Live |
| Amazon | Yes for AI-assisted edits; not for AI-misrepresented product | Substantial AI modifications should be disclosed | Listing description / image alt | 2026 policy |
| Etsy | Yes | Yes: checkbox + "Designed by" attribution + description statement | Listing form | Jan 14, 2026 |
| Walmart Marketplace | Yes if accurate | Not platform-mandated; standard listing-quality rules | N/A | Live |
| TikTok Shop | Yes if accurate | Realistic AI-generated content of people or scenes must be labelled | Video / listing | Live |
| Meta Ads (Facebook + Instagram) | Yes | Yes: AI Content Label in Ads Manager | Ads Manager toggle | March 2026 |
| Google Ads | Yes | Not for commercial ads (required for political / election only) | N/A | Live |
| EU sales (any platform) | Yes | Yes: Article 50 transparency obligation | Buyer-facing AI label | Aug 2, 2026 |
| NY-distributed advertising with synthetic model | Yes | Yes: synthetic-performer disclosure | Ad creative | Jun 9, 2026 |
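Because the rules stack rather than replace each other, the per-listing question is "which obligations apply to this image, on this platform, in these markets." A rough sketch of that check follows; the entries distill this guide's summary and are illustrative, not a substitute for re-verifying each platform's current policy.

```python
# Illustrative per-platform rules distilled from the table above.
# Entries are summaries for the sketch, not authoritative policy text.
DISCLOSURE_RULES = {
    "shopify": "No platform label mandated; FTC deception rules still apply.",
    "amazon": "Disclose substantial AI modifications; no misrepresented product.",
    "etsy": "Checkbox + 'Designed by' attribution + description statement.",
    "meta_ads": "Apply the AI Content Label in Ads Manager.",
    "google_ads": "No label for commercial ads (political/election ads only).",
}

def required_disclosures(platform: str, sells_into_eu: bool,
                         ny_synthetic_model: bool) -> list[str]:
    """Collect the disclosure obligations that stack for one listing."""
    rules = [DISCLOSURE_RULES.get(platform, "Check platform policy.")]
    if sells_into_eu:
        rules.append("EU AI Act Article 50 buyer-facing label (from Aug 2, 2026).")
    if ny_synthetic_model:
        rules.append("NY synthetic-performer disclosure (from Jun 9, 2026).")
    return rules
```

An Etsy seller shipping into the EU, for example, carries both the Etsy listing-form disclosure and the Article 50 label once August 2026 arrives.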

For Amazon specifically, the help-desk spoke on Amazon's AI image policy walks through what counts as AI-assisted versus AI-generated under the 2026 rules and what to put in the listing. For Shopify and Etsy, see the help-desk spoke on Etsy and Shopify AI disclosure.

A note on regulated categories

Cosmetics under FDA, food labelling, financial products, and health claims carry their own substantiation rules whether or not AI was used to make the image. A serum bottle that looks the way it actually looks does not solve a claim that the serum reverses fine lines. The skincare and beauty AI photography guide covers the FDA and FTC overlay for that category in more detail.

4. Copyright: Who Owns an AI-Generated Product Image

Last because it matters least week to week, even though it is the question that gets the most airtime.

The human-authorship rule (US)

The US Copyright Office's AI hub and its January 2025 Part 2 report on copyrightability reaffirm that human authorship is the bedrock of copyrightability. Works "entirely generated by AI" are not copyrightable. The Office is direct on prompts: "the mere selection of prompts, even if those prompts are detailed and are the product of some human effort, does not itself yield a copyrightable work." Where a work mixes human and AI-generated content, only the human contributions are potentially copyrightable, and applicants must disclose the AI-generated content and explain the human author's contribution when registering.

Thaler v. Perlmutter is now settled

On March 2, 2026, the Supreme Court denied certiorari in Thaler v. Perlmutter, leaving intact the D.C. Circuit's ruling that the Copyright Act requires a human author. A purely AI-generated image therefore cannot be registered. The legal question is, for now, closed at the federal level.

What this actually means for a brand

The product, the brand name, and the trade dress are protected by separate IP regimes (trademark, design rights), independent of the image's copyright status. Reusing and editing real product photographs uploaded by the brand preserves a stronger human-authorship narrative than generating an image from a prompt alone. Outputs incorporating substantial human creative selection (composition decisions, edits, retouching) have a better case for partial copyright than purely prompt-generated work.

The practical upshot: copyright on a single PDP image is rarely the asset that matters. The product is the asset, and the product is protected by other regimes that are not going anywhere. For more on this, see the help-desk spoke on AI image copyright in the US and EU.

Training-data lawsuits, summarized

The training-data legal question is mostly an upstream provider problem, not a downstream-brand problem. A brand using a generative tool to create product imagery is generally a downstream user and is not on the hook for training-data infringement claims unless the output itself is substantially similar to a specific copyrighted work (visible watermarks, recognizable copyrighted characters, near-identical reproduction).

A quick read on the live cases:

  • Getty Images v. Stability AI (UK, decided November 4, 2025): the High Court largely rejected Getty's copyright claims; limited trademark infringement was found for some watermark outputs.
  • NYT v. OpenAI (US): summary judgment scheduled April 2026; pending.
  • Andersen v. Stability AI (US): trial set September 8, 2026; pending.
  • Bartz v. Anthropic (US): settled for USD 1.5 billion, the largest copyright settlement in US history; final fairness hearing rescheduled to May 14, 2026 with nearly 120,000 author claims filed.

The AI Lawsuit Tracker keeps the running list. The procurement implication is in the vendor checklist below.

Jurisdiction Quick Reference

International brands need to route between jurisdictions quickly. The table below collapses the four pillars into one row per jurisdiction.

| Jurisdiction | Copyright on AI output | AI disclosure on product imagery | Likeness rules to know |
| --- | --- | --- | --- |
| US (federal) | Human authorship required; pure AI not registrable | FTC Section 5 deception standard; no blanket label rule | State-by-state right of publicity |
| US, New York | Same as federal | Synthetic-performer disclosure (Jun 2026, statewide) | Fashion Workers Act consent requirements |
| US, Tennessee | Same as federal | None state-specific | ELVIS Act covers voice and likeness |
| US, California | Same as federal | None state-specific for commercial product imagery | AB 2602 / AB 1836 digital replicas of performers |
| EU | National laws vary; AI Act adds transparency layer | Article 50 transparency obligations from Aug 2, 2026 | GDPR treats likeness as personal data |
| UK | Some computer-generated works protected (CDPA s.9(3)); position evolving | No equivalent of Article 50 yet; ASA truthfulness standards apply | Image rights via passing off and defamation |
| Canada | Authorship position still moving | No federal AI-specific labelling yet; Competition Act deception standards apply | Tort of misappropriation of personality |

How a Defensible AI Product Photography Workflow Actually Looks

A workflow that holds up to a marketplace audit or a regulator letter has three properties: it anchors on real product input, it controls model identity, and it leaves a record. Two artifacts make that concrete.

Audit-trail checklist

For every marketing image, keep a record of:

  1. The source product photograph used as input.
  2. The AI tool and tool version used.
  3. The prompt or Recipe applied.
  4. The human edits made after generation.
  5. The ingredient identity (Photography Style, Composition, Fashion Model, Background) where applicable.
  6. The publication date and the channels where the image ran.
  7. A retention horizon at least as long as the statute of limitations for the most relevant cause of action in your jurisdictions, typically three to six years for advertising claims and longer for IP.
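The checklist above is easiest to keep as one structured record per image. A minimal sketch follows; the field names and schema are assumptions for illustration, not a prescribed format.

```python
import json
from dataclasses import dataclass, field, asdict

# Minimal per-image audit record mirroring the checklist above.
# Field names are illustrative assumptions, not a prescribed schema.
@dataclass
class ImageAuditRecord:
    source_photo: str            # path/ID of the real product photo used as input
    tool: str                    # AI tool name
    tool_version: str
    recipe: str                  # prompt or Recipe applied
    human_edits: list[str] = field(default_factory=list)
    ingredients: dict[str, str] = field(default_factory=dict)  # e.g. Fashion Model
    published: str = ""          # ISO date the image first ran
    channels: list[str] = field(default_factory=list)
    retain_until: str = ""       # at least the longest relevant limitations period

    def to_json(self) -> str:
        """Serialize for storage alongside the generated asset."""
        return json.dumps(asdict(self), indent=2)
```

One JSON file per published image, retained on the same schedule as the checklist's item 7, is enough to answer a marketplace audit question in minutes rather than days.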

Vendor due-diligence checklist

Take this list to any AI photo vendor, including Nightjar. Treat it as procurement, not pitch. The tools roundup is a starting point for shortlisting vendors to put through these questions.

  1. Where is the model trained? Is the training data licensed, public-domain, or scraped?
  2. Does the tool embed C2PA or other provenance metadata in outputs?
  3. Does the contract include any indemnification for output IP claims, and on what terms?
  4. Does the tool let the brand fix model identity (reuse the same synthetic person across images), or does it generate a new face every time?
  5. Does the tool retain a record of the source product photo, the prompt or Recipe, and the output?
  6. Can the brand export a list of every AI-generated asset for marketplace audit?
  7. How does the tool handle takedown requests for outputs alleged to resemble a real person?
  8. Does the tool produce outputs that match marketplace product-fidelity expectations (no invented features, no distortions, no color shifts)?

Why a controlled, ingredient-based system makes the workflow easier

Prompt-only tools encode every variable in text and reroll on each Generation. That breaks model identity and makes audit trails thin. A controlled system separates the variables a brand cares about (Photography Style, Composition, Fashion Model, Background) so each one can be locked, reused, and recorded. A locked Fashion Model across a hundred PDPs is an audit trail by itself: same person, same rights record, same disclosure obligation.
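One way to see why locked ingredients double as an audit trail: a locked setup can be fingerprinted, and every image generated from it shares that fingerprint. The sketch below is hypothetical (the ingredient fields are assumptions, not Nightjar's actual data model).

```python
import hashlib
import json

def recipe_fingerprint(ingredients: dict[str, str]) -> str:
    """Hash a locked ingredient set; identical setups yield identical IDs."""
    canonical = json.dumps(ingredients, sort_keys=True).encode()
    return hashlib.sha256(canonical).hexdigest()[:16]

# Hypothetical locked setup; field names are illustrative.
locked = {
    "Photography Style": "soft-studio",
    "Composition": "centered-34",
    "Fashion Model": "brand-model-A",
    "Background": "warm-grey",
}
# A hundred PDP images generated from this setup all carry one fingerprint:
# same person, same rights record, same disclosure obligation.
```

A prompt-only workflow has no equivalent: the text prompt varies per generation, so no stable identifier ties the outputs together.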

The legally correct workflow does not change the cost story either. Same input photographs, same number of outputs, same time on retouching. For the underlying numbers, see the breakdown of real product photography costs.

Where Nightjar fits

Nightjar is structurally suited to this kind of workflow rather than positioned as legal protection. Four design choices line up with the four pillars above.

  • Reusable Fashion Models the brand controls. A brand can build a roster of Fashion Models it has the rights to use and reuse them across the catalog instead of relying on a generic prompt that may produce a face resembling a real person on each Generation. Nightjar's product documentation states explicitly that a custom Fashion Model can be based on a real person only when the user has the right to use that person's likeness. That constraint is the legally correct posture, baked into the tool.
  • Product preservation by design. The system is built so the generated image represents the actual product the brand uploaded, not a reinterpretation of it. That aligns with Amazon's accurate-representation rule and with the FTC deception standard.
  • Recipes as audit-trail substrate. A Recipe saves the structured Create-form setup (ingredients, Custom Directions, output settings). Two images from the same Recipe look like the same shoot, and the Recipe itself is the structured record of how the image was made.
  • Team Library as chain of custody. Generated Assets, source product Assets, and metadata stay together in the Library, so a brand can demonstrate what was generated, when, and from what source if a marketplace or a regulator asks.

For the operational walkthrough of how Recipes and the Library work together to keep image output consistent across a catalog, see the consistent AI product photography guide.

A note on what Nightjar is not. Nightjar does not provide legal indemnity, does not currently advertise C2PA support, and does not make a brand "compliant" with anything. The tool is designed for the careful version of the workflow. The compliance work is still the brand's, with its own counsel.

Reminder: Still Not Legal Advice

The legal landscape around AI imagery is moving every quarter. The dates and rules in this guide reflect what is on the books as of May 11, 2026. Treat this as a starting point for your own diligence, not the final word. For decisions that affect your business, talk to a lawyer licensed in your jurisdiction.

The 2026 Bottom Line for Ecommerce Founders

AI product photography is a working tool for serious brands in 2026. The legal framework around it is more layered than it was twelve months ago, but it is also more legible. A brand that anchors on real product input, locks model identity to a synthetic person it has rights to, keeps a structured record of how each image was made, and discloses where required is doing the work that the FTC, the EU AI Act, Amazon, Etsy, and Meta are now asking for in different vocabularies.

  • AI product photography is legal in the US, EU, UK, and Canada in 2026.
  • The operative risks are disclosure (FTC, EU AI Act, NY, Etsy, Meta) and likeness (right of publicity, GDPR), not copyright.
  • Copyright on a marketing product image is rarely the asset that matters. The product, the brand, and the trade dress are protected separately.
  • The two enforcement deadlines to plan for now: the New York synthetic-performer law on June 9, 2026 and EU AI Act Article 50 on August 2, 2026.
  • A defensible workflow uses real product photographs as the anchor, locks model identity to a synthetic person the brand has rights to, keeps a Recipe and Library audit trail of how each image was made, and discloses where required.

Frequently Asked Questions

Is it legal to use AI-generated product photos for my online store? Yes, in the US, EU, UK, and Canada. The constraints are about disclosure, likeness consent, and accurate product representation, not about whether AI may be used. Brands selling into the EU need to plan for Article 50 transparency obligations from August 2, 2026.

Do I own the copyright to AI-generated product images? In the US, only the human-authored portions are potentially copyrightable. A purely AI-generated image cannot be registered. The product, the brand, and the trade dress are protected by separate IP regimes (trademark, design rights), independent of the image's copyright status.

Do I have to disclose that my product photos are AI-generated? It depends on the jurisdiction and the platform. The US has no blanket rule for product imagery, but the FTC prohibits deception. The EU AI Act adds transparency obligations from August 2, 2026. New York requires synthetic-performer disclosure from June 2026. Etsy and Meta Ads require disclosure today.

Can I use AI to generate models that look like real people? Generally no, without consent. An AI-generated person who is identifiable as a real individual can trigger a right-of-publicity claim, even if you never intended to depict that person. The defensible posture is a fixed synthetic model the brand has rights to, used consistently across the catalog.

What does the EU AI Act require for AI product images? From August 2, 2026, providers of AI image systems must mark outputs in a machine-readable format. Brands deploying AI imagery that could be mistaken for conventional photographs must disclose to users that the content is AI-generated. Fines reach up to EUR 15 million or 3% of global turnover.

Does Amazon allow AI-generated product images? Yes for AI-assisted edits like background replacement, color correction, and lighting adjustments. AI generation that misrepresents the product's physical characteristics is prohibited. Substantial AI modifications should be disclosed. The help-desk page on Amazon's policy has more.

Are AI product photos legal on Shopify and Etsy? Yes on both. Shopify does not mandate an AI disclosure label, but FTC deception rules still apply. Etsy requires sellers to check the AI checkbox, select "Designed by," and state AI use in the description, effective January 14, 2026. The Etsy and Shopify disclosure spoke walks through the listing-form steps.

What does the FTC say about AI in advertising? The FTC applies its existing deception standards (Section 5 of the FTC Act and the Endorsement Guides) to AI content. A product image that materially misrepresents what the buyer will receive is deceptive whether AI was used or not. AI-generated reviews are explicitly prohibited under the Consumer Review Rule, and the first warning letters under that rule went out on December 22, 2025.

Do I need a model release for an AI-generated person? For a fully synthetic model not based on a real individual, no traditional model release applies. For a custom model trained on or derived from a real person, you need scoped written consent covering AI use. Standard "all media now known or hereafter devised" language in older contracts is usually insufficient for AI-generated derivatives or digital replicas.

Can I be sued if an AI image accidentally resembles someone? Yes, in jurisdictions with right-of-publicity statutes, including New York, California, and Tennessee. The plaintiff must show identifiability, commercial use, and lack of consent. A controlled workflow with a fixed synthetic model the brand owns reduces the chance of accidental resemblance.

Are AI images copyright-free or public domain? Not exactly. Pure AI outputs cannot be registered for copyright in the US, but that does not place them cleanly in the public domain. Other parties may use similar outputs, and the underlying product, brand name, and trade dress remain protected by separate IP regimes. Treat AI imagery as commercially usable, not as freely reusable by competitors.

What is the New York AI Disclosure Law and does it apply to product images? New York General Business Law Section 396-b, as amended by S.8420-A / A.8887-B, requires conspicuous disclosure when an advertisement features a synthetic performer. It applies to any commercial advertising distributed in New York. Product images that include an AI-generated person fall within scope. Effective June 9, 2026; penalty USD 1,000 first violation, USD 5,000 subsequent.

