
Hold Onto Your Hats, Things Are About to Get a Lot Weirder

You know how whenever kids hang out they eventually wind up sitting in silence, texting each other (and friends who aren’t physically there)? Well, hold onto your hats. It’s going to get weirder. I contend that digital telepathy is an inevitability in our lifetimes. That’s right, I said it. In fact, I’ll wager that within two decades, we will be regularly communicating back and forth, via thought, in any language, with anyone in the world.

It might seem a bold statement, but I further assert that at some point (and relatively soon) nobody’s going to have to talk to anybody anymore. Digital telepathy will become the norm and talking out loud will seem cumbersome and archaic.

Here’s how it’s going to work.

Phase I: Things will begin simply enough, with a voice-input/audio-output device on each end (e.g., Apple EarPods or AirPods) and a language-translator app in between. The earbud records your words; the app translates them into another language and sends the text data to your recipient’s phone, which plays it back using one of the synthesized text-to-speech voices currently available. And vice versa. Easy-peasy. So far so good, right?
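To make the plumbing concrete, here’s a minimal sketch of that Phase I pipeline in Python. The three stage functions are hypothetical placeholders for whatever speech-recognition, translation, and text-to-speech services the devices would actually call; the relay logic is the point.

```python
# Minimal sketch of the Phase I pipeline. The three stage functions are
# hypothetical placeholders, not real APIs -- in practice they would call
# actual speech-recognition, translation, and text-to-speech services.

def speech_to_text(audio: bytes, language: str) -> str:
    """Placeholder: transcribe the sender's recorded speech to text."""
    raise NotImplementedError("wire up a real speech-recognition service")

def translate(text: str, source: str, target: str) -> str:
    """Placeholder: translate the transcript into the recipient's language."""
    raise NotImplementedError("wire up a real translation service")

def text_to_speech(text: str, language: str) -> bytes:
    """Placeholder: synthesize the translated text in a stock voice."""
    raise NotImplementedError("wire up a real text-to-speech service")

def relay_message(audio: bytes, sender_lang: str, recipient_lang: str) -> bytes:
    """Earbud audio in, translated synthesized audio out."""
    transcript = speech_to_text(audio, sender_lang)                  # on the sender's phone
    translated = translate(transcript, sender_lang, recipient_lang)  # only text crosses the network
    return text_to_speech(translated, recipient_lang)                # on the recipient's phone
```

Note the design point that matters for everything that follows: audio exists only at the two endpoints, and all that crosses the network is a small text transcript.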

Phase II is more exciting. Here’s where you’ll be able to hear the text translated into any language, but in the sender’s (simulated, but still convincing) voice. “But Steve,” you’re thinking, “hold up. How does that happen?” Well, this is where some deepfake-type AI comes in. Check out what Descript’s AI division is doing with Lyrebird: scroll down on their website, input some text, and blow your mind.

“But Steve,” you’re thinking, “I don’t get it. What’s the advantage? If the sender is talking into their device, why not just send the audio?” The answer is simple: file size. A snippet of text is orders of magnitude smaller than the equivalent audio; a four-minute mp3, for example, is bigger than the complete works of Shakespeare (3.5 million characters) stored as plain ASCII text. So the advantages include drastically reduced storage needs and much faster, more reliable data transmission (this becomes even more relevant when we get to the mind-reading phase). Also, for any language translation to happen, the audio first needs to be converted to text before being converted back to audio in another language.
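Here’s a quick back-of-the-envelope check of that claim (the 128 kbps mp3 bitrate is my assumption; the 3.5 million characters comes from the figure above):

```python
# Back-of-the-envelope check of the text-vs-audio size claim.
# The 128 kbps mp3 bitrate is an assumption; the Shakespeare character
# count comes from the article text.

MP3_BITRATE_BPS = 128_000                          # assumed mp3 bitrate, bits/second
four_minute_mp3 = 4 * 60 * MP3_BITRATE_BPS // 8    # bytes -> ~3.84 MB
shakespeare_ascii = 3_500_000                      # bytes: one byte per ASCII character

short_message = 100                                # bytes: a one-sentence text
same_speech_as_audio = 5 * MP3_BITRATE_BPS // 8    # ~5 seconds of audio -> ~80 KB

print(f"Four-minute mp3:       {four_minute_mp3 / 1e6:.2f} MB")
print(f"Complete Shakespeare:  {shakespeare_ascii / 1e6:.2f} MB")
print(f"Audio-to-text ratio for one sentence: ~{same_speech_as_audio // short_message}x")
```

At that bitrate, four minutes of mp3 comes to about 3.84 MB, slightly more than all of Shakespeare in ASCII, and a 100-character message is roughly 800 times smaller than even five seconds of equivalent audio.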

Now consider what Descript is doing. Given that they already ask people to create their own vocal signatures to run their application, it’s perfectly reasonable to assume that, at some point very soon, everyone will be able to access a copy of their own vocal signature on their mobile devices; Apple will likely store them in iCloud, where they’ll hopefully be safe from those with nefarious intent.
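In code terms, Phase II changes only the final stage of the Phase I sketch: text-to-speech gets conditioned on the sender’s stored vocal signature instead of a stock voice. Both names below are hypothetical stand-ins, not any real Descript or Lyrebird API:

```python
# Phase II variant of the pipeline's last stage. `VocalSignature` and
# `clone_voice_tts` are hypothetical placeholders for a Lyrebird-style
# voice-cloning service, not a real API.

from dataclasses import dataclass

@dataclass
class VocalSignature:
    """A compact voice model trained once from a user's speech samples,
    then cached on-device or in the cloud (e.g., iCloud)."""
    user_id: str
    embedding: bytes  # learned voice representation; format is service-specific

def clone_voice_tts(text: str, language: str, voice: VocalSignature) -> bytes:
    """Placeholder: synthesize `text` in `language`, in the sender's own
    (simulated) voice. Only the text and a signature ID travel with each
    message; the heavy voice model is fetched once and cached."""
    raise NotImplementedError("wire up a voice-cloning text-to-speech service")
```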


Phase III: Next, we take away the voice-input aspect and replace it with a brainwave reader capable of turning your thoughts into text. Think we’re entering the realm of science fiction? Think again. Back in June, the Daily Mail asserted that a “‘mind-reading’ chip unveiled in China could soon let you control your smartphone or PC with your thoughts.” You may not be surprised to read that China would be interested in reading people’s minds, but that aside, its creators claim to have built a “mind-reading chip” that works by “picking out small electrical pulses in the brain and quickly decoding them into signals that a computer can interpret.” The chip, unveiled at China’s third World Intelligence Congress on May 1, 2019, is capable of carrying out only relatively crude operations.

The “Brain Talker” chip “allows for the faster operation of various technologies” and may “assist people with disabilities, for example by letting an individual drive an electric wheelchair just by thinking.” My buddy John Hershey, a Google Research Scientist in machine perception (and sound-separation wizard), asserts that the same machine-learning strategies and technology used to isolate a single voice at a crowded cocktail party “could be used to isolate and decipher the brain pulses that relate to each word and distinguish them from all the background noise.”
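To make the decoding idea concrete, here’s a deliberately simplified sketch in the spirit of Hershey’s analogy: treat each short window of brain signal as a feature vector and match it against a per-user vocabulary. Every detail (channel count, window size, features, classifier) is an illustrative assumption; real brain-computer-interface decoding is far harder.

```python
# Toy decoder sketch: classify short multichannel signal windows into words.
# All constants and modeling choices here are illustrative assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression

CHANNELS = 8   # assumed number of electrode channels
WINDOW = 250   # assumed samples per thought-word window

def featurize(window: np.ndarray) -> np.ndarray:
    """Collapse a (CHANNELS, WINDOW) signal window into crude features:
    per-channel mean, variance, and peak spectral magnitude."""
    spectrum = np.abs(np.fft.rfft(window, axis=1))
    return np.concatenate([window.mean(axis=1),
                           window.var(axis=1),
                           spectrum.max(axis=1)])

def decode_word(clf: LogisticRegression, window: np.ndarray) -> str:
    """Pick the vocabulary word whose learned signature best matches this
    window -- the 'isolate one voice at the cocktail party' step."""
    return clf.predict(featurize(window)[None, :])[0]
```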


As with voice reproduction, a brain-reading technology might require an initial training period that happens at the individual level, where the user is asked to think about certain words or sounds, and the device constructs an alphabet or glossary of brain waves from this information. Regardless of how it works, this device would still sit outside the body, so as not to be too creepily invasive.
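A calibration session like the one described might look something like this in code: prompt the user to think each vocabulary word several times, record a signal window for each prompt, and fit a per-user “glossary” mapping patterns to words. The `read_signal_window` driver call, the vocabulary, and the repeat count are all hypothetical:

```python
# Sketch of a per-user calibration session. `read_signal_window` is a
# hypothetical headset driver call; VOCAB and REPEATS are illustrative.

import numpy as np
from sklearn.neighbors import KNeighborsClassifier

VOCAB = ["yes", "no", "hello", "help"]  # tiny starter vocabulary
REPEATS = 20                            # prompts per word during calibration

def read_signal_window() -> np.ndarray:
    """Placeholder: capture one fixed-length signal window from the headset."""
    raise NotImplementedError("wire up the headset driver")

def calibrate() -> KNeighborsClassifier:
    """Build the user's brainwave 'glossary': prompt, record, fit."""
    samples, labels = [], []
    for word in VOCAB:
        for _ in range(REPEATS):
            print(f"Think the word: {word!r}")
            samples.append(read_signal_window().ravel())  # flatten window to a feature vector
            labels.append(word)
    glossary = KNeighborsClassifier(n_neighbors=5)
    glossary.fit(np.stack(samples), labels)
    return glossary
```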

But perhaps there’s another way we might accomplish this (at least at first). Much as the handwriting recognition of early Personal Digital Assistants (PDAs) couldn’t interpret actual handwriting but required users to learn a specially designed shorthand (e.g., Palm OS’s Graffiti), perhaps instead of trying to read language from our normal thought patterns, we will need to alter our thought patterns (the way we think words) to make the signals more detectable, distinctive, and broadcastable to the system. For instance, what if we could somehow think our words more loudly? Or tag them with a specific thought-signal?
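One illustrative reading of that “tag” idea, loosely analogous to learning Graffiti strokes: the user precedes each deliberate word with a distinctive mental gesture, and the device decodes only the windows that follow a detected tag, ignoring the rest of the mental chatter. The amplitude-spike detector below is pure assumption, standing in for whatever pattern such a gesture would actually produce:

```python
# Toy "thought tag" detector: decode only the signal that follows a
# deliberate, easy-to-spot marker. The z-score spike threshold is an
# assumption standing in for a real learned gesture pattern.

import numpy as np

TAG_THRESHOLD = 5.0  # assumed z-score marking a deliberate broadcast gesture

def find_tagged_windows(signal: np.ndarray, window: int = 250) -> list[np.ndarray]:
    """Return the fixed-length windows that immediately follow a tag spike
    in a one-dimensional signal; everything untagged is ignored."""
    z = (signal - signal.mean()) / signal.std()
    spike_indices = np.flatnonzero(z > TAG_THRESHOLD)
    windows, last = [], -window
    for i in spike_indices:
        if i - last >= window and i + window <= len(signal):
            windows.append(signal[i:i + window])
            last = i
    return windows
```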

Also note that in this phase, while text messages would be sent by simply thinking them, on the receiving end users would still hear the audio messages via ‘traditional’ means (e.g., a Bluetooth earbud).


Phase IV: Full-on brain-computer integration. This is where things get creepy. Messages are sent right from one brain and transmitted directly to the recipient’s brain (with brainwave and language translation in between). Naturally, this holds the most immediate promise for the deaf, nonverbal, and physically challenged. And while I’m not super-psyched about this, a recent Frontiers in Neuroscience paper predicts groundbreaking developments in “Human Brain/Cloud Interfaces” in the coming decades.

In this case, telepathy is almost a minor feature of a bigger computing revolution, an after-thought (sorry), in which users may even surf the web from the comfort of their own heads. “A stable, secure, real-time system may allow for interfacing the Cloud with the human brain.” Their envisioned ‘human brain/Cloud interface’ (‘B/CI’) would employ a new technology they frighteningly refer to as “neuralnanorobotics.”

According to Robert Freitas, “a fleet of nanobots embedded in our brains would act as liaisons to humans’ minds and supercomputers, to enable ‘matrix style’ downloading of information.”

“These devices would navigate the human vasculature, cross the blood-brain barrier, and precisely auto-position themselves among, or even within, brain cells. They would then wirelessly transmit encoded information to and from a Cloud-based supercomputer network.”

This mirrors the “singularity” Ray Kurzweil famously predicted for 2045, as well as the endeavors Elon Musk claims to be pursuing with his “Neuralink” brain-machine interface startup, which aims to “merge people with AI.” What’s more, a network of brains could form a “global superbrain” allowing for a collective consciousness. So, get ready, everybody.

At some point in the not-too-distant future, you might not just be sending telepathic messages to your friend down the street; you might be broadcasting them to everyone on the planet, in every language there is.

