25 Apr 2026

The line we draw on AI in medical and regulated animation

At Animara, we work across some of the most scrutinised sectors in commercial production. Medtech, pharma, defence, biotech. The visual standards in regulated industries are higher than most studios are used to, because the audiences aren’t casual. They’re regulators, procurement teams, clinical reviewers, investors with sector-deep knowledge. They evaluate visual credibility with the same scrutiny they apply to your claims.

That context shapes a decision we’ve made about AI in our pipeline. We won’t use it to draw anatomy or any other hyper-realistic scientific imagery. Here’s why.

This week, OpenAI shipped another image model. Within hours, working anatomists, surgeons and medical illustrators were on LinkedIn pulling its outputs apart. A “stunning” heart with four vessels coming off the aortic arch instead of three. Inferior vena cava missing. Atrial appendage in the wrong place. A UK plastic surgeon ran a separate test on hand anatomy that same week and posted the same conclusion. The thread filled with variations of the same weary response, professional after professional.

That’s the gap nobody at the tool companies is closing. Generative models don’t understand biology. They understand how biology tends to look. The difference doesn’t matter if you’re making a logo or a stylised marketing illustration. It matters a great deal when your audience is a clinician, a regulator, or an industry expert who’s spent twenty years working in the field.

AI is part of our toolkit at Animara. So is everything else. 2D, 3D, motion graphics, illustration, XR, AR, VR. We pick the tool that’s right for the job, not the tool having its moment in the press. For regulated industries and high-stakes scientific imagery, particularly anatomy, the right tool isn’t AI. Not yet, and possibly not ever.

How we actually use AI

Most studios won’t tell you this, so I will. The animation industry runs on a mixed pipeline now. Almost every serious shop combines traditional craft with AI-assisted production somewhere in the workflow. After Effects, Cinema 4D, Blender, Unreal, Houdini, plus AI tools for specific tasks. They’re all software. The question isn’t whether you use AI. It’s where you use it, who’s holding the steering wheel, and whether the output can stand up to the audience it’s going to.

For most of what we make at Animara, AI is genuinely part of the toolkit. Used in the right places, it speeds up specific parts of the pipeline. It doesn’t replace the craft. It supports it. We deliver high-quality animated explainers within 2 to 4 weeks of storyboard sign-off, with a senior animator directing every shot.

That matters more in some industries than others. If you’re a medtech firm, a pharma brand, a defence contractor or anyone operating under regulatory review, the cost of a credibility failure is high. Pulled campaigns. Compliance flags. Lost investor confidence. Regulators don’t forgive sloppy visuals on the same product they’re being asked to approve. Procurement teams remember which suppliers cut corners. The visual production has to clear the same bar as the science behind it.

Where AI breaks, and what it costs you when it does

Here’s the awkward part. AI is loudest about its capability in exactly the place where it fails most. Photorealism in biology. New model demos always feature a glossy heart, a brain with light catching the cortex, a tumour sitting prettily in surrounding tissue. They look beautiful in the press release. Run them past a clinician and the wheels come off in seconds.

This is the failure mode that’s fatal for high-stakes audiences. A film shipped to a regulator, a clinician or an investor that gets the chamber wall wrong doesn’t just irritate them. It tells them you don’t know what you’re doing. Which means the product or science behind the film is probably suspect too. Trust gone. Credibility gone. The film won’t get used, and the buyer won’t come back.

We’ve watched this happen across the industry. AI films torn apart on LinkedIn by the exact community they were trying to reach. The thread I mentioned at the start wasn’t an isolated case. There’s now a working community of anatomists, surgeons and medical illustrators who actively test every new image model the day it lands. They’re organising publicly around the failures. If you ship middle-zone AI medical content into that audience, you’re not gambling on a few people noticing. You’re shipping into a community trained to notice and motivated to share.

The reason this happens is simple. AI models are trained on images. They’ve never dissected anything. They don’t know what’s underneath. So they generate what’s plausible at a glance. A glance is exactly the resolution at which a non-expert audience consumes content. Specialists work at a different resolution entirely. They notice things you and I would never see.

What we do instead

For hyper-realistic anatomy, we use traditional 3D, built by humans who actually know the structures they’re modelling. Our medical 3D work draws on more than twenty years of experience in the field and a network of top-tier specialists, many of them PhD-trained medical illustrators. The cost goes up. So does the timeline. That’s the trade-off, and we’ll tell you that openly when the brief calls for it.

For everything else, where the audience needs to understand a process rather than recognise a structure, we use designed metaphor. Golden threads for neural connections. Particle systems for molecular movement. Geometric abstraction for cellular processes. The visual is honest about what it is. It’s a representation, not a pretend photograph. No anatomist has standing to say “that’s not what a neuron looks like” when the visual was never claiming to be a neuron in the first place.

This is also where AI helps most. Stylised, designed, repeatable visual language sits exactly where AI tools are strong. Backgrounds, transitions, ambient detail, atmosphere. The scientific accuracy lives in the structural decisions a human animator makes. The AI fills in around it, under direction.

The principle

Every project we take on starts with a question. What does this audience need to see, and what’s the right tool to show them? Sometimes that’s a hybrid AI production. Sometimes it’s a traditional 3D build with a medical illustrator on the team and a longer timeline to match. Sometimes it’s XR, sometimes it’s print, sometimes it’s a 30-second motion graphic. The tool follows the brief. Not the other way around.

What we won’t do is force an unsuitable tool onto a job because the unit economics are tempting. We won’t ship AI anatomy to a clinical or regulatory audience and hope nobody notices. They will notice. They always notice.

If you’re a brand director, a regulatory affairs lead or a comms team weighing up where to take your project, there’s a question worth asking the studio you’re talking to. Where are you using AI on this, and where are you not? The answer should be specific, and it should be tied to the audience the work is going to. That’s the answer we’d give you, because it’s how we actually work.

