When did CAPTCHA become the boss fight of the internet?

Lately, every time I check the “I’m not a robot” box, I’m immediately promoted to unpaid image analyst.

Select all bikes.
Now hydrants.
Now buses.
Now crosswalks.

Now do it again because one pixel might still contain a tire.

What was designed as a security layer now feels like a friction layer.

CAPTCHA used to be a quick verification tool. A minor precaution. Now it’s a recurring interruption in the user journey. And the irony? The more often it appears, the less it feels like safety and the more it feels like system inefficiency.

From a UX standpoint, this is interesting.

Security is essential.
Fraud prevention matters.
Bot protection is real.

But when the protection becomes a barrier, we have to ask: are we optimizing for safety at the expense of experience?

As marketers and digital leaders, we talk a lot about reducing friction, smoothing funnels, and improving conversion rates. Yet one of the most common experiences across the web right now is forced micro-labour.

And here’s the bigger thought:

If AI can now mimic human behaviour well enough to trigger constant CAPTCHA checks, maybe the solution isn’t more image grids. Maybe it’s smarter behavioural detection, adaptive risk scoring, or invisible verification layers.
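To make "adaptive risk scoring" concrete: the idea is to combine passive signals into a score and only challenge the outliers, so most humans never see a puzzle at all. Here's a minimal sketch — every signal, weight, and threshold below is invented for illustration, not taken from any real product:

```python
from dataclasses import dataclass

@dataclass
class SessionSignals:
    # Illustrative signals only; production systems use many more.
    failed_logins_last_hour: int
    requests_per_minute: float
    has_mouse_movement: bool
    ip_reputation: float  # 0.0 (bad) .. 1.0 (good)

def risk_score(s: SessionSignals) -> float:
    """Blend signals into a 0..1 risk score (weights are made up)."""
    score = 0.0
    score += min(s.failed_logins_last_hour, 5) * 0.1         # brute-force hint
    score += min(s.requests_per_minute / 120.0, 1.0) * 0.3   # abnormal velocity
    score += 0.0 if s.has_mouse_movement else 0.2            # headless-browser hint
    score += (1.0 - s.ip_reputation) * 0.3                   # network reputation
    return min(score, 1.0)

def decide(s: SessionSignals) -> str:
    """Tiered response: low risk passes invisibly; only outliers get friction."""
    r = risk_score(s)
    if r < 0.3:
        return "allow"      # invisible verification: zero user-facing friction
    if r < 0.7:
        return "challenge"  # a lightweight check, not an image grid
    return "block"
```

The UX point lives in `decide`: the challenge becomes the exception path for anomalous sessions, instead of a toll booth every visitor pays.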

Security should feel seamless.
Protection should not feel like punishment.

Curious how others are thinking about this balance between UX and security. Are you seeing higher abandonment tied to verification friction?