Unintended consequences. So many people who want to do good simply do not think beyond the immediate need to consider what the consequences of their actions will be. It’s true of all kinds of government programs, and it’s true of many corporate initiatives.
A recent initiative by Facebook has some people concerned about how it will affect all of us in the real (non-digital) world. The initiative is Facebook’s use of artificial intelligence (AI) to try to recognize posts and images from people who appear suicidal and to get first responders to them in the real world before they can injure themselves.
I won’t argue that the intention isn’t good. Reducing the number of suicides is certainly a good thing. The problem is what happens when this type of AI mis-“diagnoses” someone as suicidal: once mental health professionals show up at that person’s house, what are the further ramifications? A writer calling themselves Sebastian expressed these concerns:
In many states, even an observational trip to the loony bin will land you a state and federal prohibition. Pennsylvania is one of them. If you ever find yourself in a situation where first-responders show up concerned that the Facebook AI has determined you’re going off the deep end, make sure you go voluntarily. Make sure you tell everyone you deal with you’re there voluntarily. Because if they take you against your will, now you have much bigger issues if you own guns. Even if they let you go, if you haven’t made arrangements, if you arrive home to a safe full of guns, congratulations, you’ve just made yourself a felon in addition to having to endure contact with the state mental health system.
Think about that. You could lose your gun rights because an AI thinks you’re suicidal and sends someone out to check on you. Even if you aren’t suicidal and never will be, any mistake this AI makes could put your rights in jeopardy.
To make it worse, Facebook says that users cannot opt out of this AI scanning. So how is Facebook going to prevent the scanning from being abused? Josh Constine relays Facebook’s unsettling answer:
The idea of Facebook proactively scanning the content of people’s posts could trigger some dystopian fears about how else the technology could be applied. Facebook didn’t have answers about how it would avoid scanning for political dissent or petty crime, with Rosen merely saying “we have an opportunity to help here so we’re going to invest in that.”
I don’t blame Facebook for wanting to save lives. That is a good thing. The problem, though, is that the potential for abuse is massive here. How many people have been locked up in totalitarian regimes for “mental health issues” when the only issue is that the person disagreed with a government policy? How many anti-gunners already think that gun rights supporters are literally insane?
This use of technology is something to watch and to be concerned about because, as Sebastian noted, if they so much as think you may once have been (slightly) crazy, they can take your guns from you forever.