How developers can spot AI-generated 3D models

Game developers have had mixed reactions to the advance of generative artificial intelligence in video game development. Some are eager for machine learning-based tools to help solve complex problems, while others are tired of being told that AI will replace workers or make their lives easier (when it may simply create more work).

One issue with implementing generative AI tools is the transparency (or lack thereof) around when a game or game asset was made using AI. Hiding this fact can create surprises in the hiring or publishing process if an asset unexpectedly appears in a game. It can also create difficulties when purchasing 3D assets from stores like Fab. It's now well known that developers can spot AI-generated 2D art by "looking at the fingers" (among other common tells), but how do you do the same for 3D models?

If modelers making 3D assets with generative AI were required to label their models as AI-made, it would be easy. But on Fab, they are not.

Over on Bluesky, veteran 3D artist Liz Edwards answered this question. In a short thread, she explored the common traits of AI-generated 3D models and how those traits often make such models inferior for general use in game development. She was kind enough to let Game Developer recreate her insights to help you learn how to spot the machines' work.


Poor textures and hazy UVs

Edwards started by examining a 3D model of a penguin she spotted on Fab, which at first glance might not seem so strange. Veteran developers might look closer and notice quirks like the shape of the feet and strange lines on the stomach, but Edwards pointed out deeper flaws.

The model, she wrote, has "telltale signs" of AI generation. 3D models made by generative AI usually have baked-in lighting, she explained, with textures projected from a 2D image. Artifacts from that 2D image remain on the model's surface.

A post by Liz Edwards showing a 3D model of a penguin made with generative AI. It has baked-in lighting and texture projection artifacts.

Image by Liz Edwards via Bluesky

Edwards had access to the model and examined its wireframe and UV maps, noting that the wireframe resembled a "dense automesh" and that the UV maps appeared to be automatically unwrapped, leaving them fuzzy and illegible.


Image by Liz Edwards via Bluesky

She compared it to another penguin model, made by "Pinotoon", which possessed more familiar traits, such as a clean UV layout and better-positioned eyeballs and beak.

Edwards used another example, a strange cabinet hybrid, to illustrate a common trait of AI-generated 3D models: an incredibly high polygon count. Elsewhere on Bluesky, she highlighted how genAI 3D model creators will place crates on the Fab marketplace that have 50,000 triangles, when the average crate in a video game needs only around 500 triangles at the top end.

A post by Liz Edwards showing an AI-generated 3D model of a piece of furniture.

Image by Liz Edwards via Bluesky
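The triangle-count heuristic above is easy to automate before you commit to an asset. Below is a minimal sketch in Python that counts triangles in a Wavefront OBJ file (a face with n vertices contributes n − 2 triangles after fan triangulation) and flags meshes over a budget. The 500-triangle default follows Edwards' rough figure for a simple game-ready crate; the function names and threshold are illustrative assumptions, not part of any existing tool.

```python
def triangle_count(obj_text: str) -> int:
    """Count triangles in OBJ source text.

    A face line like "f 1 2 3 4" is a quad, which triangulates
    into 2 triangles (n vertices -> n - 2 triangles).
    """
    tris = 0
    for line in obj_text.splitlines():
        parts = line.split()
        if parts and parts[0] == "f":
            tris += max(len(parts) - 1 - 2, 0)
    return tris


def looks_overly_dense(obj_text: str, budget: int = 500) -> bool:
    """Flag a mesh whose triangle count exceeds the asset budget.

    500 is a rough ceiling for a simple prop like a crate; tune it
    per asset class (characters and hero props go far higher).
    """
    return triangle_count(obj_text) > budget
```

This won't tell you a model is AI-generated on its own, but a 50,000-triangle crate failing a 500-triangle budget is exactly the kind of red flag Edwards describes.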

3D models made with generative AI can be deceptive

Edwards cautioned that the above characteristics alone do not automatically identify a model as generated by AI tools. For example, 3D models captured using photogrammetry share some of the same traits. The difference is that photogrammetry models have natural textures, are usually free of artifacts, and have consistent, naturalistic details.

A close-up look at a 3D model of a lion statue with fuzzy UV maps and dense mesh.

Image by Liz Edwards via Bluesky

If you're not animating a given 3D asset or aren't concerned about polygon count, you might shrug off a model that looks like it was created with photogrammetry, but be careful. As Edwards demonstrated, these models can still contain inconsistent details that look disturbing or out of place when viewed up close.

A post by Liz Edwards showing a 3D model made with generative AI that appears to be captured via photogrammetry.

Image by Liz Edwards via Bluesky

She also explained that meshes on 3D models made with generative AI are "rarely" symmetrical and are often fused into "shapeless blobs." Those blobs frequently weld feet or arms together on animals, monsters, and humanoids, making them impossible to pose or animate.

A post by Liz Edwards showing how 3D models made with generative AI will weld limbs together.

Image by Liz Edwards via Bluesky
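The asymmetry tell can also be checked programmatically. Below is a crude bilateral-symmetry sketch, assuming the model is meant to mirror across the X axis (common for characters): every vertex is reflected and looked up against the original vertex set within a tolerance. Hand-built character meshes usually pass; the lumpy, fused geometry Edwards describes tends to fail. The function name and tolerance are illustrative assumptions, and the grid-rounding lookup can miss matches that straddle a bucket boundary, so treat failures as a prompt to inspect, not proof.

```python
def is_roughly_symmetric(vertices, tol=1e-3):
    """Check whether every (x, y, z) vertex has a mirror twin at (-x, y, z).

    Vertices are snapped to a grid of cell size `tol` so that mirrored
    pairs hash to the same bucket despite small numeric noise.
    """
    def key(x, y, z):
        return (round(x / tol), round(y / tol), round(z / tol))

    buckets = {key(x, y, z) for (x, y, z) in vertices}
    return all(key(-x, y, z) in buckets for (x, y, z) in vertices)
```

For example, a vertex set containing (1, 0, 0) and (-1, 0, 0) passes, while a lone off-axis vertex with no mirrored counterpart fails, which is the kind of quick sanity check you might run on a humanoid model before trying to rig it.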

This issue highlights the major risk of using models that trade quality for speed of creation: if an object looks "good enough" but has too many polygons or is impossible to animate, it may interfere with the rest of the work a team needs to do to ship a great game.

Know how to spot machines

Should game developers devise their own Voight-Kampff test to identify whether visual art was made via generative artificial intelligence? For now, the answer is "yes." There are many ways an art asset made with generative AI could slip into your pipeline, and if it isn't caught quickly, it could cause problems for a team not prepared to catch it.

Not all generative AI technology is made with deception in mind. But it's worth noting that a broad use case of generative AI is deceiving others, especially with image, text, or video generation.

A developer looking for a solid penguin model could waste precious time trying to make a buggy version work. A recruiter inexperienced in 3D modeling might forward AI-generated models from a candidate if they can't see the machine-made imperfections. For now, all developers can do is arm themselves with the right knowledge and evaluate whether an AI-generated model meets their needs.

If you need to get up to speed a little faster, be sure to review Edwards' full thread (and her other posts on 3D art) on Bluesky.
