Why the ‘Generated using A.I.’ Credit Matters for Editorial Visuals
A New Yorker illustration of Sam Altman carried an explicit 'Generated using A.I.' credit. That choice reflects practical editorial and product decisions that founders and operators should parse.
Key takeaways
- Public AI credits communicate provenance, shift audience expectations, and operationalize policy.
- Maintain internal provenance records that map public credits to contracts and model usage.
- Standardize disclosure wording and placement for consistency across products.
- Be mindful that simple disclosures can obscure complex creative practices.

The New Yorker ran an illustrated portrait of OpenAI CEO Sam Altman that explicitly credited the image as generated with AI. The artwork — a blue-sweater Altman surrounded by disembodied, distorted faces and one face held in his hands — was credited to artist David Szauder with the line:
"Visual by David Szauder; Generated using A.I."
Szauder is described as a mixed-media artist who has worked for over a decade with collage, video, and generative art processes that predate commercial AI tools. The juxtaposition of an established editorial title, an artist with a long practice in generative techniques, and a blunt credit line raises concrete operational questions for teams that commission, productize, or display imagery.
What the credit actually signals
Labels like "Generated using A.I." do three distinct things at once: they communicate provenance, they set audience expectations, and they act as a design decision. For an operator-minded reader, these are not rhetorical; they create downstream constraints and opportunities.
Provenance and decision trails
Explicitly indicating AI involvement creates a visible provenance trail. That matters when editors, legal teams, or users need to audit how a piece was produced. A plain-line credit is a minimalist form of documentation that can be captured in asset metadata, content databases, and release notes. It’s not a substitute for fuller records, but it functions as a public-facing token tied to an internal record.
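One hedged way to picture that public-facing token tied to an internal record is a minimal provenance structure attached to asset metadata. This is an illustrative sketch: the field names and shape are assumptions, not a published metadata standard.

```python
from dataclasses import dataclass, field

# Illustrative sketch: field names and structure are assumptions,
# not a real metadata standard.
@dataclass
class AssetProvenance:
    asset_id: str                  # internal asset identifier
    public_credit: str             # exact credit line shown to readers
    creator: str                   # commissioned artist or studio
    ai_tools_used: list = field(default_factory=list)  # tools disclosed at intake
    human_edited: bool = True      # whether a human post-processed the output
    contract_ref: str = ""         # pointer to the governing agreement

record = AssetProvenance(
    asset_id="img-2024-0917",
    public_credit="Visual by David Szauder; Generated using A.I.",
    creator="David Szauder",
    ai_tools_used=["generative image model"],
    contract_ref="SOW-1234",
)
```

The public credit line is just one field; the rest of the record stays internal and can be serialized into whatever asset database or release-notes pipeline a team already runs.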
Expectation-setting for the audience
When readers see a disclosure, their interpretation of the image changes. They evaluate intent differently, and they may ask questions about authorship, authenticity, and editorial standards. From a product standpoint, that alters trust signals and content perception metrics — engagement, time on page, and complaint rates could shift when users know AI played a role.
Design as policy
Choosing to put an AI credit in a footer or as a parenthetical note is a small design choice with policy implications. It’s a deliberate act to surface process to the user interface, and it creates a precedent: editors and creatives will expect consistent handling of similar materials in the future.
Operational implications for teams
Any team that touches visual assets should treat AI credits as part of the content lifecycle. This affects intake, procurement, rights management, and UX.
- Intake and commissioning: Briefs need explicit fields for whether creators used AI tools and what outputs were post-processed by humans.
- Procurement and contracts: Statements of work and licensing agreements should clarify allowed uses of AI in deliverables and who is credited publicly.
- Rights and attribution: The public credit is one layer; legal rights and source files are another. Maintain an internal ledger that maps public attributions to contract terms and model/asset provenance.
- UX and moderation: Decide where and how to display AI credits in the interface, and include them in content moderation and labeling pipelines.
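The "internal ledger" mentioned above can be operationalized as a simple lookup keyed by asset ID, so that any public attribution resolves to its contract terms and model provenance. All names and fields here are hypothetical, sketched for illustration rather than drawn from a real system.

```python
# Hypothetical internal ledger mapping public credits to contract and
# model provenance; the structure is illustrative, not a real system.
ledger = {
    "img-2024-0917": {
        "public_credit": "Visual by David Szauder; Generated using A.I.",
        "contract": "SOW-1234",
        "model_provenance": ["generative image model"],
        "license_scope": "editorial use only",
    },
}

def lookup_provenance(asset_id: str) -> dict:
    """Return the internal record backing a public credit, or raise."""
    try:
        return ledger[asset_id]
    except KeyError:
        raise KeyError(f"No provenance record for asset {asset_id!r}")
```

A legal or editorial audit then becomes a lookup rather than an archaeology project: the public credit is the key, and the ledger holds everything it publicly stands in for.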
Signal vs. noise: how explicit credits change creative economies
An explicit AI credit alters the market conversation about creative labor even when the artist has a deep history with generative methods. In the New Yorker example, the artist David Szauder is noted to have worked with collage, video, and generative processes well before commercial AI tools existed. That context complicates a simple narrative that AI replaced a human process.
For founders and product leaders this is important: a one-line disclosure is a public simplification. It reduces a complex workflow — an artist’s decades-long practice intersecting with modern tools — into a binary label. That simplification can be useful for clarity, but it can also obscure nuance that matters for community relations, talent retention, and brand positioning.
Practical checklist for teams shipping imagery
If your company commissions or integrates imagery — editorial websites, marketing, in-product illustrations — treat AI credits as a small but systemic component of your content operations. Use the following checklist as a baseline to adapt:
- Require a provenance field in all creative briefs noting any AI tools and the extent of human editing.
- Store that provenance in a searchable asset database tied to licensing documents and creator agreements.
- Define a visible disclosure standard (exact wording, placement, and styling) to ensure consistency across platforms.
- Train product and editorial teams on how disclosures affect user perception and metrics you care about (engagement, complaints, trust signals).
- Maintain a reconciliation process so public credits match internal records; audits should be possible in minutes, not days.
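The reconciliation step in the checklist can be sketched as a quick audit that flags mismatches between the credit actually displayed and the stored internal record. The function and data below are illustrative assumptions, not an existing tool.

```python
# Illustrative reconciliation check: compare publicly displayed credits
# against internal provenance records and report mismatched asset IDs.
def audit_credits(displayed: dict, records: dict) -> list:
    """Return asset IDs whose public credit does not match the internal record."""
    mismatches = []
    for asset_id, credit in displayed.items():
        expected = records.get(asset_id)
        if expected is None or expected != credit:
            mismatches.append(asset_id)
    return mismatches

displayed = {"img-001": "Visual by A. Artist; Generated using A.I.",
             "img-002": "Photo by B. Photographer"}
records = {"img-001": "Visual by A. Artist; Generated using A.I.",
           "img-002": "Photo by B. Photographer; Generated using A.I."}

print(audit_credits(displayed, records))  # ['img-002'] — its live credit omits the AI disclosure
```

Run on a schedule, a check like this keeps audits in the "minutes, not days" range the checklist calls for.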
Trade-offs and open questions
There are no neutral choices here. A clear public credit favors transparency and may protect against reputational risk, but it also simplifies complex artistic workflows. Hiding AI involvement avoids simplified public narratives but increases the risk of later disputes or community backlash.
Teams must weigh these trade-offs against their product goals. If trust and editorial integrity are core to your brand, surface the information and back it up with internal documentation. If seamless user experience is the priority, consider whether minimal but honest disclosures provide enough context without overloading the interface.
What This Means For You
If you run a product, newsroom, or creative studio, treat AI credits as part of your operating system, not a marketing decision. Implement the checklist above and make the disclosure a component of your asset schema. That way you get the benefits of transparency — clearer audits, fewer disputes, and predictable user expectations — without leaving editorial or product teams to improvise each time.
Key Takeaways
- Public AI credits communicate provenance, shift audience expectations, and operationalize policy through design.
- Keep internal provenance records that map public credits to contracts, model usage, and creator workflows.
- Standardize disclosure wording and placement to avoid inconsistent precedent-setting across products.
- Balance transparency with nuance: a one-line AI credit can obscure complex, pre-existing artistic practices.