Why a password no longer tells you anything about the person

If digital identity becomes unreliable, how do we verify high-risk interactions?


For years, we have talked ourselves into a sense of security: “If the login credentials are correct and the MFA code has been entered, then everything is fine.” But let’s be honest: that logic no longer holds up.

Due to the explosion of Generative AI, identity theft has not only become easier, it has become scalable. Where a hacker used to toil for days on a single convincing phishing email, AI now spits out thousands that are indistinguishable from the real thing.

The new reality: The machine is a master imitator

We must face the fact that a valid credential (such as a password or token) is no longer a guarantee that there is a real human being on the other end.

  • Voices are being cloned: With three seconds of audio from your YouTube video or a LinkedIn webinar, AI can convincingly mimic your voice. Calling the helpdesk? They hear you, but they are talking to a script.
  • Video is no longer sacred: Real-time deepfakes in video calls are no longer science fiction. They exist.
  • Tone of voice is replicable: AI analyzes the CEO’s writing style and sends an email to the finance department that is so convincing even the most alert employee starts to doubt.

Why our current ‘walls’ are showing gaps

Our traditional security is static. It checks a key, but not the person holding the key.

  1. MFA is a threshold, not a wall: Attackers use AI to manipulate employees until they give away that one code. The technology works, but the human factor remains the weak point.
  2. The ‘gut feeling’ is disabled: We taught people to look out for typos or vague requests. But AI makes no typos and is never vague. The classic red flags have disappeared.

Where do we go from here? From ‘Technology’ to ‘Human Assurance’

We need to go back to the drawing board. If digital identity becomes unreliable, how do we verify high-risk interactions? The solution lies in Human-Centric Assurance.

1. Introduce ‘Healthy Friction’

Sometimes speed is the enemy of security. For critical actions (think of large payments or changing administrative rights), we must build in processes that are not purely digital. A simple phone call to a pre-arranged number, or a physical check, can make the difference.
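As a concrete illustration, such a policy can be expressed in a few lines of code: high-risk actions are simply blocked until someone has confirmed them through a non-digital channel. This is a minimal sketch; the threshold, the `PaymentRequest` fields, and the function names are all hypothetical, not taken from any real system.

```python
from dataclasses import dataclass

# Hypothetical threshold for illustration only.
HIGH_RISK_THRESHOLD_EUR = 10_000

@dataclass
class PaymentRequest:
    amount_eur: float
    requester: str
    # Set to True only after a call-back to a pre-arranged number
    # or an in-person check — never by the digital channel itself.
    out_of_band_confirmed: bool = False

def requires_friction(req: PaymentRequest) -> bool:
    """Critical actions above the threshold need a non-digital check."""
    return req.amount_eur >= HIGH_RISK_THRESHOLD_EUR

def approve(req: PaymentRequest) -> bool:
    """Approve low-risk requests directly; high-risk ones only
    once the 'healthy friction' step has actually happened."""
    if not requires_friction(req):
        return True
    return req.out_of_band_confirmed
```

The point of the sketch is that the confirmation flag cannot be set by the same channel the attacker controls: the friction lives outside the digital flow.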

2. Behavior as biometrics

Instead of looking at what you know (password), we should start looking at how you act. How does someone move their mouse? How do they type? AI can steal a password, but the unique human behavioral pattern behind a computer is much harder to simulate.
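A toy version of this idea can be sketched with keystroke timing alone: enroll a user's typical inter-key interval, then flag sessions whose rhythm deviates too far. Real behavioral-biometrics products use far richer features (mouse curves, pressure, flight times); the numbers and function names below are assumptions for illustration.

```python
import statistics

def keystroke_profile(timings_ms):
    """Enroll a simple profile (mean, stdev) from inter-key intervals in milliseconds."""
    return statistics.mean(timings_ms), statistics.stdev(timings_ms)

def looks_like_user(profile, sample_ms, tolerance=3.0):
    """Z-score-style check: is the session's mean interval within
    `tolerance` standard deviations of the enrolled mean?"""
    mean, stdev = profile
    return abs(statistics.mean(sample_ms) - mean) <= tolerance * stdev
```

A script pasting a stolen password types with machine-regular, near-zero intervals; even this crude check would flag that, which is exactly the asymmetry the section describes: the credential is stealable, the rhythm much less so.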

3. The ‘Live’ Check

During crucial conversations, we must draw the AI out. For example, ask someone in a video call to suddenly turn their head or hold a specific object in front of the camera. Many real-time deepfakes still show digital artifacts or glitches at that moment.

Conclusion: Trust must be earned (again)

The era of “The computer says it’s okay” is over. We must redesign our strategies around the fact that anything digital can be faked.

I believe that technology helps us move forward, but that the human element is the ultimate security. It is time to stop seeing identity as an IT checkbox and start seeing it as an ongoing process of human confirmation.
