THE DARK SIDE OF SOCIAL MEDIA AND CHILD GROOMING: How tech, designed to connect, can destroy lives


SOCIAL media, primarily designed to connect people, have inadvertently become tools for disinformation, character assassination, and prostitution. Criminals thrive on these platforms, and predators exploit them to groom children.

Grooming, a word co-opted by the predators who manipulate and exploit children and minors, usually for sexual reasons, has proliferated because digitalization has given these predators fertile ground for such illicit activities.

Facebook and Messenger’s extensive reach makes them prime targets for predators, according to the National Society for the Prevention of Cruelty to Children (NSPCC). In 2022, the Philippines ranked second globally, with over 2 million children falling victim to various forms of online child sexual exploitation and abuse (OCSEA). A research project released that year, “Disrupting Harm in the Philippines,” sheds light on the risks faced by internet-using children aged 12-17 and the grave instances of abuse they experience.


Updates to that report state that the online world has made it easier for offenders to target children for sexual abuse and exploitation: shockingly, 20 percent of internet-using children aged 12-17 in the Philippines have experienced online sexual exploitation and abuse.

These experiences include grooming, offers of gifts or money in exchange for sexual acts, and threats or blackmail to engage in sexual activities. Disturbingly, a significant number of children reported receiving offers and requests from individuals they didn’t know, with social media being the most common platform for such interactions.

Of great concern is the fact that Facebook was involved in over half of the 5,441 cases of child grooming reported in England and Wales between April 2018 and March 2019. Predators often use Facebook Messenger to initiate contact with children, posing as peers or trustworthy adults to gain their trust.

In 2020, a BBC investigation revealed that child sexual abuse images were being widely shared on Instagram, often using coded language and hashtags to evade detection.

More recently, TikTok, a platform popular among younger users, has also been implicated in grooming cases. Its features, such as direct messaging and live streaming, make it an attractive tool for predators. In 2021, The New York Times reported multiple lawsuits against TikTok, alleging that it failed to protect young users from sexual exploitation.

In light of growing scrutiny and the increasing number of grooming cases, social media platforms have implemented various measures to combat this issue.

Facebook has introduced several safety features, including the use of artificial intelligence (AI) to detect grooming behavior. This AI monitors for signs such as adults sending friend requests to numerous children. Facebook also collaborates with organizations like the National Center for Missing and Exploited Children (NCMEC) to track and report suspicious activity. Despite these efforts, the sheer volume of users makes comprehensive monitoring difficult, if not impossible.

Instagram has made significant changes to protect younger users. These include restricting direct messages between teens and adults they don’t follow and encouraging teens to set their accounts to private. The platform employs machine learning algorithms to proactively detect and remove harmful content. However, the sophistication of predators who continuously adapt their methods poses ongoing challenges.

TikTok has implemented stricter privacy settings for younger users, such as setting the accounts of users under 16 to private by default and disabling direct messaging for them. The platform uses AI to identify and remove inappropriate content swiftly. It also provides educational resources to inform users about the dangers of online grooming and how to stay safe online.

Despite these measures, social media platforms face significant challenges in eradicating online grooming. One of the primary issues is the balance between user privacy and safety. Increased monitoring and data collection to detect grooming behavior can raise concerns about user privacy and data security.

Governments worldwide are pushing for stricter regulations to hold social media companies accountable for the content on their platforms.

The Philippines appears to be at the forefront of policy creation. The Anti-Online Sexual Abuse or Exploitation of Children (OSAEC) and Anti-Child Sexual Abuse or Exploitation Materials (CSAEM) Act, also known as Republic Act No. 11930, is a penal law in the Philippines enacted on July 30, 2022. The law aims to punish the online sexual abuse of children, as well as the production, distribution, possession, and access of child sexual abuse or exploitation materials (CSAEM). It also amends the Anti-Money Laundering Act of 2001 and the Anti-Child Pornography Act of 2009.

In the UK, the proposed Online Safety Bill aims to impose a duty of care on social media companies to protect users from harmful content, with substantial fines for non-compliance. Similar legislative efforts are underway in other countries, signaling a global shift towards increased accountability for tech companies.— research by Deriq T. Bernard

 
