To begin with, we should not be alarmed to observe people sharing the Christchurch shooting videos on Facebook. That is what Facebook is built to do: make it easy for people to share and re-share content.
In the wake of the Christchurch terrorist attack, the real question is whether Facebook is ready to alter its business model and make it harder for certain people to share content, particularly live-streamed content. Here are some scenarios.
Pre-live restrictions (preemptive blocking)
Is it plausible to preemptively restrict some users’ ability to “go live” based on their browsing behavior? Technically speaking, yes. But it would require 24/7 surveillance of Facebook users’ behavior and selectively adding them to a “restricted list.” This option comes with several drawbacks. What if a person is profiled incorrectly? What about the serious privacy implications of this kind of surveillance? To what extent is Facebook allowed to constantly monitor and profile people based on their browsing behavior, views, affiliations, and so on? And how hard would it be for a person to create a fake profile and bypass these restrictions? Not very hard: Facebook already hosts millions of fake profiles.
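To make the misprofiling risk concrete, here is a minimal sketch of what such a restricted-list gate might look like. Everything in it — the signal names, weights, and threshold — is a hypothetical assumption for illustration, not Facebook's actual system; the point is how easily a crude score misclassifies people.

```python
# Hypothetical "restricted list" gate. Signal names, weights, and the
# threshold are illustrative assumptions, not a real Facebook API.

RISK_THRESHOLD = 0.8  # assumed cutoff; choosing this is the hard, error-prone part

def user_risk_score(signals: dict) -> float:
    """Toy risk score: a weighted sum of behavioral signals, each in [0, 1]."""
    weights = {"flagged_posts": 0.5, "reported_by_users": 0.3, "new_account": 0.2}
    return sum(weights[k] * signals.get(k, 0.0) for k in weights)

def may_go_live(signals: dict) -> bool:
    """Allow 'go live' only while the toy score stays below the threshold."""
    return user_risk_score(signals) < RISK_THRESHOLD

# A user whose posts were flagged in error and who was mass-reported is
# silently blocked — a false positive with no appeal built in.
print(may_go_live({"flagged_posts": 1.0, "reported_by_users": 1.0}))  # → False
```

Even this toy version shows the profiling problem: the same score blocks a harassment target who was brigaded with reports just as readily as an actual bad actor.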
Live restrictions (active blocking)
Restricting a live stream as it begins would be the most effective option, but it is extremely difficult to achieve. It would require sophisticated real-time voice and video analytics. Facebook may have some of the needed technologies (e.g., voice and image recognition), but deploying them at such scale and in real time would be extremely difficult, if not impossible.
Post-live restrictions (ex post blocking)
Once content enters the social media realm, the followers of the person who posted it decide its fate. In the Christchurch case, only a handful of individuals watched the live stream; the real culprits were the other people re-sharing and re-branding it. Unless Facebook can read people’s minds, it is practically impossible to predict who will re-share a piece of content next, which makes it virtually impossible to stop further distribution of prohibited or undesirable content that is already on the network. Such content can only be taken down after it is re-shared, and by then it is too late: many people will have already seen it or even saved it to their computers. Facebook can, to some extent, slow the propagation of undesirable content by erecting real-time content scanning (or manual review) mechanisms, as it does for Facebook ads. However, this would be very challenging and demanding given that more than two billion users share enormous volumes of content every minute.
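One known mechanism for this kind of scanning is matching uploads against a database of fingerprints of known bad content (platforms share such hash lists through industry consortia). The sketch below uses a plain SHA-256 hash as an assumed stand-in for brevity; production systems use perceptual hashes that survive re-encoding, and the example shows exactly why they must: a trivially "re-branded" copy evades an exact hash.

```python
# Sketch of hash-based blocking of re-shared copies of a known video.
# SHA-256 is used here only for illustration: it matches byte-identical
# copies, so any edited or re-encoded copy slips through — which is why
# real systems rely on perceptual hashing instead.

import hashlib

BLOCKED_HASHES: set[str] = set()

def fingerprint(content: bytes) -> str:
    return hashlib.sha256(content).hexdigest()

def block(content: bytes) -> None:
    """Register a known bad file so future uploads of it are rejected."""
    BLOCKED_HASHES.add(fingerprint(content))

def allowed_to_share(content: bytes) -> bool:
    return fingerprint(content) not in BLOCKED_HASHES

original = b"original video bytes"
block(original)
print(allowed_to_share(original))         # → False: an exact copy is caught
print(allowed_to_share(original + b"x"))  # → True: a trivially edited copy evades the exact hash
```

The second result is the re-branding problem described above: once people start cropping, watermarking, or re-encoding the video, exact matching fails and the moderation burden falls back on slower review.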
In short, we believe that technology can play only a limited role in prohibiting the distribution of undesirable content; the ultimate responsibility falls on social media users, who need to make good decisions about what to post and what not to post.