In a TikTok video with over 110,000 likes, a woman unlocks her house with a microchip in the back of her hand. Chipgirlhere, as she’s known on TikTok, says she lives in a smart house equipped with radio-frequency identification sensors. Instead of using a key card to unlock drawers or doors, she uses an implant under her skin. It looks pretty convenient.

A darker tale emerges from Neuralink, a company owned by Elon Musk. While Musk roiled the internet with wild changes of direction and bewildering new rules of engagement at Twitter, Neuralink has been under investigation over allegations of animal abuse and suffering resulting from Musk’s orders to rush animal testing. The company is trying to invent implanted brain devices to treat neurological impairments, for example by enabling paralyzed people to walk again.

These may seem like scenes out of a sci-fi novel, but the underlying technology for both is not far-fetched.

Implantable technology has evolved from life-sustaining things like pacemakers to what Dr. Katina Michael calls life-enhancing tech.

Michael is a professor at the School for the Future of Innovation in Society at Arizona State University. She is a leading scholar in emerging technologies, and has interests in how technologies are used for national security and the social implications that may result. I sat down to talk with her about the risks that come with that life-enhancing tech being implanted in the human body. This conversation has been edited for length and clarity.

Devices like Apple Watches or Fitbits aren’t new. Can you give us a brief history of the blurring of the line between human bodies and technology?

Humans used to fit inside the machine, if you look back into the 1940s and the ENIAC, the first general-purpose computer. And then silicon came into the picture and changed everything. More and more, we went from large-scale computers to minicomputers to microcomputers to wearable computers. And now we are looking at biomedical devices, for example, that are embedded in the human body. So we’ve gone from being inside the machine to the machine being inside us.

I think what has happened is computers used to store what we think; computers now know what we feel. There’s a complete difference here. I used to have to ask somebody via telephone, “What’s your opinion on this? How does it make you feel?” But with embedded or wearable sensors, you don’t have to ask that question anymore. We know when someone has sweaty palms, if they’re stressed, if their pulse rate is high or low. 

The greatest invention that has allowed this to come closer and closer to the body, of course, is the smartphone. But now, that’s not enough because it’s too cumbersome, and it’s not as accurate as if it could be embedded. There were many patents in the 1990s for personal digital assistants before smartphones came onto the scene. They used to have embedded PDAs in the upper tricep or wearable PDAs on a head-mounted unit. But now we’re saying, well, why do we need this clunkiness? Why not brain-to-brain interfaces? Why not brain-to-computer interfaces?

Are there certain technologies that are emblematic of the trend of tech increasingly becoming part of the human body?

You can’t go past Elon Musk’s Neuralink. But to be honest, this is really late in the game. He created Neuralink in 2017. 

But going back 20 years, there was a company on the New York Stock Exchange by the name of VeriChip. The parent company was Applied Digital Solutions. They had a patent for an implantable device in the right tricep that indicated they would be exploring a variety of applications, such as being able to secure a physical space. Instead of using a radio-frequency identification card, you would use the implantable. There was a VeriPay system. Instead of using a credit card that you had to lug around in a purse, you could use a free arm to get a read on an implantable device.

In 2009 I visited the Baja Beach Club in Barcelona, Spain, which was using a system for frequent visitors. More than a hundred patrons were chipped in that club. There have been hundreds of use cases since then. There’s SJ Railways in Sweden, which created implantables for VIP customers on their trains. So instead of somebody coming past and scanning your ticket to see if you were a valid passenger, they would simply scan your arm.

These are all companies that exist or have existed that have tinkered in the implantable space. So I’m not really looking for a Musk-like figure to come in and say, “Oh, look, we’ve demonstrated the business case for implants.” People have been demonstrating that since around 1998. We just don’t know about it because it comes and goes as a fad. The thing is, when will the market be ready for that next leap?    

At what point does this become more widespread? 

It could be in a number of different scenarios. I often talk about the three C’s — control, care and convenience. There’s a fourth one called the cool factor.

We just have to look at the pandemic. The Western Australian police bought $3 million worth of anklet devices to put on people who weren’t adhering to quarantine if they had Covid-19.

It could be we’re just fed up with all the wires. I have a device for this and that. The big suppliers are probably waiting for that one-stop-shop security device. You don’t have to remember passwords or use biometrics or two-factor authentication. It’s just you. Of course, there are risks with that kind of approach. The other line of thinking is, well, it’s time. Why are we still holding our keys in our hands? We could enter our vehicle and our home with a proximity chip. It can’t be that bad. Surely, someone’s not going to chop off our arm to get access to a physical location. So there are all these reasons we’re being told the benefits far outweigh the risks. 

What are those risks?

I am concerned about control. I’m worried about how care and convenience will be used as an argument: I’m doing this because I care about you, and you need it because you have dementia or schizophrenia or you’re a harm to yourself or you’re incarcerated. So I’m doing this for your own good, and I care about you. 

In totality, it points to what we define as “uberveillance.” 

Uberveillance. What is that? 

It’s embedded surveillance. It’s the ultimate form of surveillance: Big Brother on the inside looking out. And it is subject to misinformation, information manipulation and misrepresentation.

The problem is the context is missing. You can have this near real-time omnipresence, but never omniscience. You can’t play God. You can’t pretend to be a star in the heavens that can see everything that’s going on and know what you’re thinking. Uberveillance attempts to infer your context without asking you what you are doing. 

Okay, maybe 95% of the time I might get the story right because of inferences, patterns of behavior, habits. Human activity monitoring will show us each week that we participate in almost exactly the same things at almost exactly the same time. That other 5% we can’t infer and shouldn’t infer. Context is missing. We can’t take innocent people and convict them of crimes. We need to give people their space. We’re not robots. 

Here’s an example. I was awarded an Australian Research Council grant to study location-based services and implantable devices. I gave my students three days of my personal data: GPS data showing my altitude, the direction of travel, time stamps every 30 seconds and the X and Y coordinates of my location. And I told them, you tell me what I’ve been doing in these three days. I wanted to see how different their responses would be. 

They found I was in Kiama in Australia one day and then in Derby in the U.K. the next day and then back in Kiama. They asked me, how did you travel to the U.K. and come back? Now, obviously, I hadn’t. There were errors in the GPS data that happened naturally. For some reason, the GPS locked me to Derby. I wasn’t there. Then they said, oh, my goodness, you were speeding at 260 kilometers per hour. No, I was not. My car was old; it didn’t go that fast. But nobody questioned the data. They were questioning me about my ability to be in ten places at once. They all got it wrong because we trust the data.
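The glitches in that story can be caught by interrogating the data itself rather than the person. Here is a minimal sketch in Python of that idea: it computes the travel speed implied by consecutive GPS fixes and flags physically implausible jumps. The 200 km/h threshold and the sample coordinates are illustrative assumptions, not values from the actual study.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two lat/lon points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    dlat, dlon = lat2 - lat1, lon2 - lon1
    a = sin(dlat / 2) ** 2 + cos(lat1) * cos(lat2) * sin(dlon / 2) ** 2
    return 6371.0 * 2 * asin(sqrt(a))

def flag_implausible(fixes, max_speed_kmh=200.0):
    """Return indices of fixes whose jump from the previous fix implies
    a speed above max_speed_kmh (an assumed sanity bound for a car).

    fixes: time-ordered list of (timestamp_seconds, lat, lon) tuples.
    """
    flagged = []
    for i in range(1, len(fixes)):
        t0, lat0, lon0 = fixes[i - 1]
        t1, lat1, lon1 = fixes[i]
        hours = (t1 - t0) / 3600.0
        if hours <= 0:
            flagged.append(i)  # a non-increasing timestamp is itself suspect
            continue
        speed = haversine_km(lat0, lon0, lat1, lon1) / hours
        if speed > max_speed_kmh:
            flagged.append(i)
    return flagged

# A fix that "teleports" from Kiama, Australia to Derby, U.K. in 30 seconds
# is flagged as a glitch; ordinary local movement is not.
fixes = [
    (0, -34.67, 150.85),   # near Kiama (illustrative coordinates)
    (30, -34.67, 150.86),  # roughly 900 m east, plausible for a car
    (60, 52.92, -1.48),    # Derby, U.K.: a GPS error, not real travel
]
print(flag_implausible(fixes))  # [2]
```

A pipeline that ran a check like this before drawing conclusions would have rejected the Derby jump and the 260 km/h reading as sensor error, which is exactly the context the students' inferences were missing.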

What does that tell you about the potential risk of relying heavily on implantable technology like this? 

Most of the proponents of these technologies basically say deploy now, worry about the risk later. I’m not of that opinion. You don’t want to go down a path past a point of no return.

We start to try to constrain people in a particular way. They can’t breathe. When you can’t breathe, you just want out. And so my fear is that this kind of technology will be used to track in an inhuman way. It’s inhuman. We’re not machines. We shouldn’t be replicating models of machines.

And even if we do place digital technology in the human body, the brain is not stupid. We will override based on our intuition and intelligence. Our brain is smarter than the machine. We know we’re being mechanized and digitized. 

What’s the path forward with this technology?

Many of the people I’ve spoken to through research have said the ideal scenario is to keep that technology on the outside. So if you want to rip it off your head or you want to do away with it for 24 hours because you just don’t want to be connected to the grid, you can. You have the autonomy. 

In the end, that’s what we’re talking about — human dignity, human rights, autonomy, our ability to make decisions for ourselves. But if you embed devices, this notion of switching off is not so easy. You can switch off, but remotely, someone’s still tracking it. That’s not switching off completely. 

Once you embed that device, the manipulation heightens. The ability to control others is much more heightened than we realize. You can have control, convenience, care and cool. But the dominant theme is control. It doesn’t matter what application. The underlying concept is control.