Welcome to the next leap in irreality.

Confused? That may increasingly be the norm for media consumption going forward. Doubtless, almost anyone with a cursory knowledge of the digital world is aware of its inherent risks: everything from new forms of identity fraud to spiking rates of mental illness associated with social media use.

At a basic level, I think most people can grasp how perception shapes our realities. But while the mind does well enough on its own to muddy the waters, we as a species are reaching a watershed moment where technology is passing beyond the limits of human ability into something else entirely.

Our tools for differentiating the real from the fake, our senses of touch, smell and hearing, but especially sight, are rapidly falling into obsolescence. Objective reality and factual truth, as we understand them now, look set to be undermined in the coming years.

And that, as a wise man once said, is like childhood innocence meeting the concept of death for the first time. It's a deeply disquieting experience.

Call me an alarmist, but there is already enough evidence to illustrate this erosion of our shared reality. I won't digress here, but as for the present and immediate future, the most eye-opening development, and certainly the most concerning, is the advent of the deepfake.

Deepfake, a portmanteau of "deep learning" and "fake," refers to software that can alter or superimpose one person's features over another's, often with uncanny detail and accuracy.

It's also been used as a descriptor for advanced voice technology that can mimic real people's voices with eerie accuracy, all the way down to the tiniest inflections, cadences and other idiosyncrasies. It is these tiny details, mostly unregistered by the conscious mind, that give our conversations authenticity over the stilted, synthetic-sounding voices of traditional robot-speak, a la Stephen Hawking's text-to-speech synthesizer.

But that's all going by the wayside.

This is light-years ahead of Photoshop. Essentially, deepfake technology uses artificial intelligence trained on photos, audio clips, video and other data to generate convincing fakes. The technology is quickly becoming advanced enough to hoodwink even trained human eyes, and often the audience never realizes it's experiencing a fake at all.
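For the technically curious, the classic face-swap recipe is surprisingly compact. Here is a minimal sketch in Python/PyTorch, not the code of any real tool, with illustrative layer sizes and names of my own invention: a shared encoder learns a common "face space," each person gets their own decoder, and the swap happens when a frame of person A is decoded with person B's decoder.

```python
# A simplified, illustrative face-swap model (assumed architecture, not any
# production tool): one shared encoder, one decoder per identity.
import torch
import torch.nn as nn


class Encoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1),   # 64x64 -> 32x32
            nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1),  # 32x32 -> 16x16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * 16 * 16, latent_dim),        # compress to "face space"
        )

    def forward(self, x):
        return self.net(x)


class Decoder(nn.Module):
    def __init__(self, latent_dim: int = 256):
        super().__init__()
        self.fc = nn.Linear(latent_dim, 64 * 16 * 16)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1),  # 16x16 -> 32x32
            nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),   # 32x32 -> 64x64
            nn.Sigmoid(),
        )

    def forward(self, z):
        h = self.fc(z).view(-1, 64, 16, 16)
        return self.net(h)


encoder = Encoder()
decoder_a = Decoder()  # would be trained only on person A's face crops
decoder_b = Decoder()  # would be trained only on person B's face crops

# The "swap": encode person A's expression, decode it as person B's face.
frame_of_a = torch.rand(1, 3, 64, 64)     # stand-in for a cropped video frame
swapped = decoder_b(encoder(frame_of_a))  # tensor of shape (1, 3, 64, 64)
print(swapped.shape)
```

Training (omitted here) would simply minimize each decoder's reconstruction error on its own person's face crops; the sleight of hand lies entirely in which decoder you pick at inference time.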

And this technology is widely available to the public, requiring relatively little skill to use because of its artificial-intelligence basis and ability to self-learn. In a matter of hours, a competent professional can teach themselves to use the software effectively; in less than a day, they can churn out frauds like a production line.

And the results are stunning. There are plenty of examples for the curious to see for themselves: the recent video in which Jordan Peele poses as former President Barack Obama, or the weirdly funny clip of Jennifer Lawrence with Steve Buscemi's gaunt, bag-eyed face superimposed over her own.

Sure, it's a little odd. Barack Obama's mouth and lips move with a certain stiffness, and there's nothing behind the eyes. The "Stennifer" face has a strange fuzziness, a zombified quality that's difficult to articulate. But if a viewer weren't already well acquainted with Obama, or with Lawrence and Buscemi, how would they know any different? And with the technology progressing by leaps and bounds, how long before visual analysis is not enough?

In fact, Samsung is currently working on software that can take a single image, even a painting such as the Mona Lisa, and animate it into a strikingly lifelike talking head without the aid of 3D models. If all it takes is a single internet image of Michael Jackson to bring back the King of Pop, the same goes for the rest of us.

But wait, there's more! Pundits may point out that in the world of sentient bipeds, sight and hearing are about all we've got, but for computers, which operate in a completely different matrix, a clip's authenticity can be verified through its code imprint, its digital fingerprint. After all, everything from defending copyright to combating the spread of child pornography is done this way, right?
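By "code imprint," think fingerprint. The sketch below is a bare-bones illustration rather than how any platform actually operates: a cryptographic hash of a file's exact bytes, where even a one-bit edit yields a completely different digest. (Systems that hunt for known copyrighted or illegal material rely on more robust perceptual hashes, such as Microsoft's PhotoDNA, that survive re-encoding and cropping.)

```python
# A bare-bones "fingerprint" check (illustration only): a SHA-256 hash of a
# file's bytes. Any edit to the file, however small, changes the digest.
import hashlib


def fingerprint(path: str) -> str:
    """Return the SHA-256 hex digest of a file, read in 1 MB chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()


# Hypothetical usage: compare a re-uploaded clip against a published digest.
# original = fingerprint("interview_original.mp4")
# suspect = fingerprint("interview_reupload.mp4")
# print("unchanged" if original == suspect else "altered or different file")
```

The catch, of course, is that a fingerprint only tells you a file hasn't changed since it was hashed; it says nothing about whether what was recorded was real in the first place.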

Maybe so for now. Maybe not for long. The world turns, technology advances by leaps and bounds, and new realities present themselves. And, if we're faced with a footrace between deepfake's ability to create counterfeit realities and society's ability to create a failsafe, where does that leave us?

It's largely speculation, but there's enough current data to paint a vague, if disturbing, picture of how reality could be altered going forward. That leads to a litany of uncomfortable questions.

Could campaigns of misinformation harness themselves to greased lightning, taking on new acceleration, dimensions and scope? Could interactions over video-based social media or applications such as, say, Skype become a real-time ruse, even to the wary? Do we lose the ability to trust the basics of communicating with each other?

Already, there are plenty of cases where people's faces and identities have been spliced onto pornographic clips without their consent. Will criminal, official or legally binding actions follow suit whenever they're recorded on video, audio and so on?

Edited images and videos, alterations that look like cave paintings next to a da Vinci compared with today's fakes, have affected political discourse going back to Stalin. Does that mean a well-executed deepfake has the potential to upend an election and circumvent democracy?

On the other hand, if skeletons are pulled out of the closet and a candidate's past comes to light, can they simply pass it off as "fake news" and dub it a deepfake to avoid the consequences of their actions?

Will identity theft and fraud reach new echelons, where even the deceased can be used and abused as reanimated counterfeits they never consented to become? Will video and audio, in the absence of an accessible and affordable form of verification, become inadmissible as evidence in court?

And in a world where video evidence has often served as a primary, foundational indicator of truth, how does all this erode the very concept of recording anything?

The conclusion may be that these questions point to a reality that's unrecognizable to us. But the real nightmare isn't these far-flung speculations; it's if the irreality becomes familiar, recognizable and indisputable, when it doesn't matter one way or the other: six of one, half a dozen of the other.