Recent unrest tests, pushes new boundaries for media, tech
By Laura Haight
The last couple of weeks have been difficult. As a society we struggled to make sense of one senseless act after another. The bandage we have had covering our mistrust, our anger and our frustration got yanked off, revealing that, even after more than 150 years, the scab remains and the wound can still bleed.
Unlike social unrest of the past, we didn’t read about it in the next day’s newspaper or watch carefully edited 30-second clips on the nightly news. No, we lived it. We sat in the front seat of the car and watched Philando Castile die; we stood on the streets of Dallas and Baton Rouge as snipers took down one officer after another. In Dallas, the gunman finally died in a robot’s suicide mission; in Baton Rouge, in a shootout with police.
And just as social media gave us an unrelenting view of the violence, it also inspired us to act. Around the country, including here in Greenville, people came together because they saw a post on Facebook or Twitter.
In fact, in a week of controversy, the role played by social media and technology also provided significant room for debate about ethical, legal and moral issues.
When Facebook rolled out the live streaming capabilities of its social media platform early in 2016, it’s unlikely the company envisioned this week’s events.
Millions of people have watched the video streamed by Diamond Reynolds as Philando Castile lay dying in the car next to her after being shot by a police officer in the St. Paul suburb of Falcon Heights. We’ve seen videos of police actions before; in fact, video is remaking law and justice in many respects.
But live streaming – especially to an audience of 1.65 billion – is different, and it poses a number of questions, both ethical and legal. Until now, there has been no such thing as a truly open mic; there was always someone with a finger on the 30-second delay or the kill switch. Live streaming, now available on Facebook and on Twitter via Periscope, goes beyond even YouTube, Vimeo and other video aggregators.
In this new world of personal broadcasting, who should determine what’s acceptable? The internet is already full of content that most of us don’t want to see: beheading videos posted by ISIS live alongside Pizza Rat and Dog Shaming. But we choose what we want and avoid what we don’t. In the case of Facebook’s live streaming, content is pushed to us, just popping up in our news feeds.
For a brief time on Wednesday, the video was offline, eliciting a social media backlash and charges of censorship and police interference. Facebook blamed “technical” issues, but when the video returned it carried a “Disturbing” label, raising the question of whether there should be an arbiter at all.
In a short period of time, live streaming has also become a tool latched onto, perhaps not surprisingly, by the distressed and disenfranchised: a live stream on Periscope by a woman in Paris as she threw herself under a train; a Facebook Live video of a killer’s manifesto after he committed a double murder.
“While traditional TV broadcasters are subject to ‘decency’ standards overseen by the Federal Communications Commission — and have a short delay in their broadcasts to allow them to cut away from violent or obscene images — Internet streaming services have no such limitations,” wrote Reuters in an analysis (goo.gl/e2iaSE).
It’s a scenario that might not have surprised science fiction writers Isaac Asimov and Philip K. Dick. In Dallas this week, we came just a little bit closer to a world previously envisioned only in imagination.
Of course, we breathed a sigh of relief when the sniper’s assault was ended by a bomb squad’s robot delivering, rather than disarming, a bomb. But while that use of force is easily justified, some saw it as a precursor to less clear-cut situations in the future.
“The legal framework for police use of force assumes human decision-making about immediate human threats,” Elizabeth Joh, a professor of law specializing in policing and technology at the University of California Davis, told HuffPost (goo.gl/TlNQd5). “What does that mean when the police are far away from a suspect posing a threat? What does ‘objectively reasonable’ lethal robotic force look like?”
Many will say that this is an extreme conclusion, that this could “never happen.” But once a door is opened, it can’t ever truly be closed. There may be many reasons to explore a greater role for robotic helpers in urban policing, just as we found it advisable to have pilots inside bunkers in Las Vegas launch missile strikes at targets in the Middle East. But as has happened with drones, it is nearly impossible to develop well-thought-out plans and procedures for every potential situation.
What could the future of technology and law enforcement look like? Let your imagination run wild: RoboCops, artificial intelligence informing human actions, or perhaps a Minority Report-like future where technology can prevent crime by stopping an as-yet-innocent future perp.
Technology has a way of exceeding the boundaries of its original design. When that happens, we are often confronted with the unintended consequences and unforeseen situations that make us ask: Should we, just because we can? And can we stop, if we want to?