Instagram recently used an offensive rape threat to advertise its app on Facebook, the latest example of a social media algorithm amplifying offensive content. Guardian reporter Olivia Solon discovered the ad, which was built from a screenshot of an email threatening her with rape and murder. Solon had posted the screenshot to her Twitter account, and Instagram later used that image, one of her most engaged-with posts, to promote its service on its parent platform, Facebook.
Instagram inadvertently selected the screenshot, which she had posted a year earlier, to advertise the photo-sharing platform to Solon’s sister, with the message, “See Olivia Solon’s photo and posts from friends on Instagram”.
Instagram Ad Mishap Shows Worrying Trend
Solon’s original Instagram post had only three likes and over a dozen comments. The automated algorithms social media platforms use to boost content are drawing growing scrutiny for amplifying offensive material, so the mishap could not have come at a worse time for Instagram.
ProPublica, a nonprofit news organization based in New York, reported that it was possible to buy Facebook advertisements targeted at users who had listed anti-Semitic topics as fields of interest on their profiles. When users entered such phrases in their profiles, the topics were automatically converted into categories on the advertising platform, in the same way educational or work details are surfaced to marketers. Following the bad press generated by the investigation, Facebook disabled some of these targeting capabilities in the past week.
There are also increasing reports of a range of bigoted and derogatory terms that Facebook allowed for ad-targeting purposes, with Twitter and Google found to be in the same boat.
Facebook chief operating officer Sheryl Sandberg issued a mea culpa and said the company was moving quickly to change its policy. The company has since disabled the targeting system behind the offensive categories, and Sandberg stated that in the future the site would only allow targeting options that had been reviewed by human beings.
Human Oversight Becoming a Key Priority
This is not the first time parent company Facebook has come under fire for allowing advertisers to target users through offensive categories. Just a week earlier, phrases such as ‘Jew hater’ and ‘how to burn Jews’, listed in some profiles, had surfaced as targeting options. In response, the social media platform has restricted the interests users can list to those meeting community standards.
Again, the emphasis has been placed on human oversight of automated processes, especially at Instagram. This raises the question of whether current investment in machine learning and artificial intelligence has reached the level where sensitivity to racism and intolerance is a priority. These systems remain rudimentary, devoid of the human experience and moral judgment needed for such calls, which makes it a tall order to place them in key decision-making roles just yet. Put simply, not every process can be automated, at least not yet.