Episode 123: AI Content Is Now Undetectable Without AI — Max Eisendrath, Red Flag AI

Co-Host

Aytekin Tank

Founder & CEO, Jotform

Co-Host

Demetri Panici

Founder, Rise Productive

About the Episode

In this episode of the AI Agents Podcast, host Demetri Panici sits down with Max Eisendrath, Founder and CEO of Red Flag AI, to break down content protection in the age of AI, deepfakes, and large-scale digital piracy. They talk about how piracy has evolved from classic re-uploads to live-stream leaks, and why AI-generated content is making attribution and authenticity harder than ever. Max shares how Red Flag AI approaches detection at scale, why watermarking, fingerprinting, and provenance tracking matter, and what platforms like YouTube are doing in response. They also cover Red Flag's upcoming "Shield" concept (designed to make training on protected content far more expensive), the arms race of filters and edits meant to evade detection, and why humans will soon be unable to reliably tell what's real without advanced verification. This episode is a must-watch for creators, media teams, and AI builders who want a clear view of where content ownership, monetization recovery, and authenticity standards are headed.

Oh, I mean, there's no way people are going to be able to tell on their own, probably even now.

I think it's reaching a point where, without comparably advanced AI detection, you can't tell, because some of it is just so good.

Hi, my name is Demetri Panici and I'm a content creator, agency owner, and AI enthusiast.

You're listening to the AI Agents podcast brought to you by Jotform and featuring our very own CEO and founder Aytekin Tank.

This is the show where artificial intelligence meets innovation, productivity, and the tools shaping the future of work. Enjoy the show.

Hello and welcome back to another episode of the AI Agents podcast. I'm here with Max Eisendrath, the founder and CEO of Red Flag AI.

How you doing today, Max?

Hey, Demetri. Good to be with you. Good to be chatting.

So first of all, tell us a little bit about how you got into the world of AI and what got you interested in making Red Flag.

Sure. Well, it's a long story and I won't bore you with all the details, but suffice it to say, I've been working in software development and AI-related use cases for over eight years now.

Originally, it was focused on sentiment analysis and kind of large-scale social listening projects, crawling the web at scale for specific pieces of content.

That morphed somewhat naturally into content protection, which is the focus at Red Flag these days.

Originally, the AI was there to handle efficiency in crawling and content detection for the business cases we were dealing with.

But more and more, we're dealing with AI generated content, separating that out from original, non-AI generated content and validating the antipiracy results that our system is finding.

So it's taken on a bunch of different meanings, which is pretty interesting, ranging from core efficiencies to what we're actually looking to detect at Red Flag today.

What would you say is the thing that sparked your interest in that area specifically?

Well, it's always interesting to try and do stuff at scale in an efficient and cheap way. That's kind of a classic problem in software.

The main use case was how can we accomplish something that's been sought after for a long time but really didn't scale in a practical way.

People weren't able to do it through typical SaaS applications and software. There's a lot of manual intervention still required for some of these use cases until very recently.

That was kind of the initial interest. Now, the problems of authenticity and proving out the origination and ownership of content are incredibly important and only going to become bigger.

Did you have a background in content at all? Creating it or reviewing it?

I just watched a lot of it. Not creating it. Definitely came at it more from the technical side.

It's an interesting area to learn about, talking to a lot of creators, publishers, and content owners these days. It's good education.

What do you think is the main issue right now with the world of content?

Give people a little bit of background who maybe don't know about that situation and how it's causing these copyright issues.

We started working on content protection almost four years ago, and the original focus was more classic antipiracy, like movies, software, music not being leaked or shared where it shouldn't be.

It's the endless whack-a-mole game of trying to remove it and having it pop back up, securing the content, and automating that process to make it efficient and scalable.

But the world has changed a lot since then. Now it's a lot more about live content, especially live streams and live events, sports primarily, handling how to trace premium live leaks for content owners and broadcasters.

That's become a big focus in the last couple of years.

The creator economy has really blown up, especially in the last 1-3 years, and the share of people making a majority of their livelihood from their content has exploded.

The concern about protecting content creators from unauthorized re-uploads and losing ad revenue when their audience watches their content elsewhere is huge.

We've been working with YouTube and other platforms directly to support creators concerned about that.

On the AI-specific angle, AI-generated content and deepfakes are concerns, whether they're generated for malicious purposes or not.

Differentiating between AI generated and non-AI content is more of a concern, as is name, image, likeness, and copyright attribution.

What do you think are the main areas where people are stealing pieces of content?

It's everything you'd think of and more. Especially on the creator side, it's not just top premium shows or events.

Classic piracy involves leaks and people getting around paywalls, but much of this is about audience capture, shifting revenue from the original creator to anyone who can re-upload it.

Anything with significant audience is being ripped off and uploaded on all platforms by other parties.

In some cases, there are well-developed businesses doing this at scale, with farms of accounts stealing content from large-audience channels and cycling through different geographies.

As with piracy and cybersecurity, there are some very competent bad actors.

How are you stepping in to help, and what practical changes are you seeing on platforms to help with this?

One of our main priorities is to make access very readily available for anyone who wants to use it, taking enterprise-level security solutions that required onboarding and handholding and making them self-service for the general public.

Demand for this kind of solution is growing, from small creators to large enterprises.

Platforms are taking this more seriously. YouTube is the best at handling and prioritizing this, but Meta, X, and TikTok are starting to take it seriously too.

We're working directly with YouTube and handle coverage on other platforms. There's an interesting NIL solution coming to YouTube in spring or summer to protect name, image, and likeness from unauthorized use.

What are you looking to improve in your product this year compared to the past?

Besides onboarding and self-service, we're focusing on handling all types of content for all devices, including watermarking, fingerprinting, and detection.

We want to avoid false positives and pride ourselves on rigorous testing to ensure accurate matching.

Content is shared in many formats, protocols, codecs, and devices worldwide, and we cover 99% but will add more, especially for codecs used in India and Southeast Asia.

You're working directly with YouTube. What's that experience been like?

It's been great. We started in earnest a few months ago. They're collaborative and appreciate help supporting content creators with concerns.

Having third parties handle this for them is a priority. Other platforms are also increasing their focus this year and next.

Classic piracy and re-uploading issues are more important to more people, not just big studios.

AI generated content is crazy, and nobody really knows how to handle it, monetize it, or be concerned about it. It's an open problem.

We're releasing a Red Flag Shield product mid-year to make video content difficult and expensive to train AI models on, using encryption and keys to incentivize companies to pay content owners before training.

How does that work to get them to pay?

We work with publishers, including book publishers, on antipiracy. The major Anthropic lawsuit last year, over training on their content, ended in a settlement.

Beyond academic work on static images, little has been done to protect content from malicious training.

The idea is to introduce noise that can't be removed to confuse training and prevent models from effectively training on content quickly.

The protection can be circumvented with enough money and compute, but it makes training economically cumbersome, which is the whole point of the deterrent.
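To make the idea concrete, here's a toy sketch of key-seeded perturbation. This is illustrative only, not Red Flag's actual method: real anti-training perturbations are crafted adversarially against feature extractors, whereas this just adds bounded pseudorandom noise that is imperceptible per pixel but cannot be stripped without the key.

```python
import numpy as np

def add_protective_noise(frame: np.ndarray, key: int, eps: float = 2.0) -> np.ndarray:
    """Add a small, key-seeded perturbation to a video frame (uint8 array).

    Toy illustration: the noise is bounded by eps, so the frame looks
    unchanged, but an attacker without the key would have to estimate
    and remove it before the content is clean enough to train on.
    """
    rng = np.random.default_rng(key)  # the secret key seeds the noise
    noise = rng.uniform(-eps, eps, size=frame.shape)
    return np.clip(frame.astype(np.float64) + noise, 0, 255).astype(np.uint8)

frame = np.full((4, 4, 3), 128, dtype=np.uint8)  # stand-in for a video frame
protected = add_protective_noise(frame, key=42)

# Visually negligible: every pixel moved by at most 2 levels out of 255.
assert np.abs(protected.astype(int) - frame.astype(int)).max() <= 2
```

The economic argument in the conversation is that undoing this at the scale of a full video catalog costs more than licensing the content would.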

What's your favorite thing you've done to help someone out?

Content creator work has been gratifying, returning significant money to creators who lost revenue to re-uploads.

In one case, we returned more money to a channel than they made in a month legitimately.

Creators are happy about that, and it hopefully leads to wider adoption across platforms and the internet.

Removing content is necessary sometimes, but piracy is contentious. Creators should be compensated for their work.

The whack-a-mole approach is not the best solution. Monetizing results and compensating people is more impactful long-term.

Is commercial content repurposing a concern? Like clips of TV shows flooding feeds?

Yes, on TikTok and other platforms, people use filters and edits to get around detection, but it's an arms race.

New techniques arise, but accurate detection is getting very granular and effective, especially with watermarking.

The days of simple filters and bass drops fooling systems are numbered.
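One common family of edit-resistant detection techniques is perceptual hashing (a general technique, not necessarily Red Flag's detector). This sketch shows why a simple brightness filter doesn't fool an "average hash": the hash thresholds coarse block averages against the image mean, so a uniform shift flips no bits and the re-upload still matches by Hamming distance.

```python
import numpy as np

def average_hash(gray: np.ndarray, size: int = 8) -> np.ndarray:
    """Perceptual 'average hash': block-average down to size x size,
    then threshold each cell at the downscaled image's mean.
    Mild edits flip few bits, so copies match by Hamming distance
    rather than exact bytes."""
    h, w = gray.shape
    bh, bw = h // size, w // size
    small = gray[: bh * size, : bw * size].astype(float)
    small = small.reshape(size, bh, size, bw).mean(axis=(1, 3))
    return small > small.mean()  # 64 boolean bits for size=8

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    return int(np.count_nonzero(a != b))

original = np.tile(np.arange(64.0), (64, 1))    # simple gradient "frame"
brightened = np.clip(original + 10, 0, 255)     # a typical evasion edit

h1, h2 = average_hash(original), average_hash(brightened)
assert hamming(h1, h2) == 0  # the edited copy still matches exactly
```

Production systems use far more robust fingerprints (and watermarks, as Max notes), but the principle is the same: match on structure that survives edits, not on raw bytes.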

Soon, AI generated versions of the same content will be everywhere, making attribution more important than shutting everything down.

Recognizing the whole and attributing money with economic incentives will be key.

What's the best part about what you do?

Working with diverse creative people, from niche small creators worldwide to big studios and broadcasters, seeing production and distribution workflows is fascinating.

What are you doing to make your product more accessible?

Creators will be able to sign up and onboard in minutes to test how much money they can recover through monetization and takedowns.

For large enterprises, onboarding time for watermarking and tracking live sports and premium content has been reduced from over a year to weeks.

How do you grow with platforms as their internal systems improve?

We work closely with internal systems, layering on top and informing platforms.

Interoperability across platforms is important, providing a central view for content owners to see monetization and protection across all platforms.

What's your opinion on AI content generation's positive and negative impacts?

It's amazing what people can create now. It's the beginning of a new art form with people creating content directly from models and wanting to protect it.

There's intermixing of AI generated and traditionally generated content, especially in advertising.

The most popular streamer on Twitch is AI generated now, which is a crazy development.

AI content is moving at a crazy pace and is exciting.

What AI generated content is allowed because it's unique and interesting?

A lot of AI generated content is super creative, and people want to own attribution and rights like any other art.

There's high-quality videos, images, and music created with AI, and platforms will likely have sections dedicated to this content with filtering options.

Filtering AI generated content will need to be nuanced and accurate.

Will AI generated content become so unique that it can't be detected?

People probably won't be able to tell on their own, even now. Advanced AI detection is needed because some AI content is just so good.

You're working as a counter to AI generated content by analyzing it. How does AI determine if content is AI generated?

Detection can be top-down by analyzing pixels and aberrations, but it's imperfect and behind cutting-edge generation.

The surefire way is bottom-up, stamping or watermarking content at creation to indicate how it was made, tracking provenance and ownership.

This is crucial for news organizations verifying authenticity.

A series of checks on provenance and top-down detection together will help determine authenticity.

It may never be 100% solved, but shared standards around authentication will emerge in the next 2-3 years.
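The bottom-up approach Max describes can be sketched as a provenance record created at publish time. Real standards in this space (e.g. C2PA) use public-key signatures and richer manifests; this minimal HMAC example, with a hypothetical publisher key, only shows the shape of stamp-then-verify.

```python
import hashlib
import hmac

SIGNING_KEY = b"publisher-secret"  # hypothetical key held by the publisher

def stamp(content: bytes) -> dict:
    """Create a provenance record when the content is published."""
    digest = hashlib.sha256(content).hexdigest()
    tag = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "tag": tag}

def verify(content: bytes, record: dict) -> bool:
    """Later, anyone holding the key can check the content is unmodified
    and the record was really issued by the publisher."""
    digest = hashlib.sha256(content).hexdigest()
    expected = hmac.new(SIGNING_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == record["sha256"] and hmac.compare_digest(expected, record["tag"])

original = b"frame bytes of the original video"
record = stamp(original)
assert verify(original, record)              # authentic copy passes
assert not verify(original + b"x", record)   # any edit breaks the chain
```

This is the "surefire" bottom-up check; top-down pixel analysis then covers content that was never stamped.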

What are cool AI trends this year beyond content?

AI tools in software development are exciting, saving time on busy work and boilerplate tasks.

We use automated software development tools like Codex and CodeRabbit for coding, PR reviews, and testing, saving 20-30% of time.

Do you expect leaner teams this year?

Yes, and I'm optimistic about AI making work easier by reducing annoying tasks and busy work.

People want AI to help but not take away enjoyable parts of work, and so far that hasn't been a problem.

What's your favorite AI tool internally?

Mostly big tools like Codex for coding, CodeRabbit for PR review and testing, and GPT for market research and sales.

I also like Granola, a note-taking app that intelligently transcribes and organizes calls with follow-ups.

I use Grain for meeting recording and am exploring building my own tools for email management.

It's a fun time with many new tools, and it's easy to hack together solutions for low cost.

Where can people learn more about Red Flag AI?

Visit our website redflagai.co to request a demo, reach out by email, or set up a meeting. Soon, self-service will be available.

If you see no-bot edits of Shameless on YouTube, mark them as spam rather than disliking them, since dislikes still count as engagement and help the algorithm.

Thanks for watching. Check out Red Flag AI, leave a like and comment your thoughts on the product and AI in general.

Let us know if you've seen no-bot edits or if you're the only one scarred by them on your feed. We'll see you in the next one. Peace.

Stay Ahead with the AI Agents Podcast

Get the latest insights on AI agents, their future, and developments in the AI industry.