Social Media Wilderness
Risks to children online outpace efforts to establish effective guardrails
December 11, 2023 at 4:00 p.m.
Children’s time spent online climbed steadily during the first two decades of the 2000s. Then the COVID pandemic hit and, with school buildings closed, even the school day went online. As things have normalized post-COVID, a handful of school systems around the country have banned cell phone use by students during the school day. But that sidesteps a more fundamental question: How do we keep our children safe while they’re online?
While our nation has taken care to protect children from many physical risks – mandating seat belts in automobiles and child-safety seats, for example – there are few, if any, effective guardrails protecting minors once they’ve switched on their laptop, cell phone or tablet.
And the need for meaningful protections has never been greater. In 2021, research from Common Sense Media indicated that children ages 8-12 in the United States spent, on average, 5.5 hours per day looking at screens just for entertainment, and teens ages 13-18 spent, on average, 8.5 hours per day – not including any school or educational use.
Additionally, 88% of teens ages 13-18 have their own smartphone, and 57% of children ages 8-12 have their own tablet. Some 94% of families surveyed with kids ages 8-18 have at least one smartphone in the home, and 74% have a tablet in the home. The proliferation of devices has led to one estimate that heavily online minors are exposed to as many as 1,260 ads each day.
Evidence of harm to children from online use is growing as quickly as their screen time. Barely a week passes without some new revelation about corporate giants actively ignoring warning signs about the potential damage to young minds – especially social media titans like Meta (Facebook and Instagram), X (formerly Twitter) and TikTok.
Thirteen seems to be the de facto age of consent for gaining access to, and a presence on, social media – despite evidence that the brain is not fully developed by age 18, when U.S. society considers people adults, or even by 21, when states permit them to purchase alcohol.
Tech companies have financial incentives not only to allow youngsters unfettered social media access but to drive content that will keep underage eyeballs glued to their sites. Advertisers are happy to buy into this: the younger the audience they can sway, the more likely those viewers will be fans of their products and services for, perhaps, a lifetime.
But abuse, harassment and misogyny have also found their way onto these platforms. Arturo Bejar, who headed Facebook’s security team from 2009 to 2015, testified in November to a Senate committee that he was distressed when his teen daughter reported to him that she was being harassed on Meta-owned Instagram.
Bejar returned to Facebook in 2019 as a consultant and discovered that virtually all of the protections his team had constructed during his earlier tenure there were gone. Before he permanently left two years later, he sent a memo to Meta’s leaders – including Mark Zuckerberg and Sheryl Sandberg – citing surveys that showed the number of people reporting that they had a negative experience on Instagram “was 51% every week, but only 1% of those reported the offending content and only 2% of those succeeded in getting the offending content taken down.”
Strengthening Safeguards through Legislation
The Children’s Online Privacy Protection Act, which took effect in 2000, was meant to shield kids from online threats – one of the most pernicious being the marketing of their personal information to others for the financial gain of both the buyers and sellers of such data. Today, parents are all too aware of additional threats to their children’s well-being, including but hardly limited to cyberbullying, self-esteem issues tied to eating and appearance, and online sexual predators.
That law, known in shorthand as COPPA, names the Federal Trade Commission as the enforcement agency. However, the FTC can’t act until after a violation has occurred, much like closing the barn door after the horse has escaped.
This summer, though, the Senate Committee on Commerce, Science and Transportation unanimously passed the Children and Teens’ Online Privacy Protection Act, which has been dubbed “COPPA 2.0.” It has yet to be acted on by the full Senate and the House.
The bill would prohibit internet companies from collecting personal information from 13- to 16-year-old users without their consent and ban targeted advertising to children and teens. The legislation would cover platforms that are “reasonably likely to be used” by children and would protect users who are “reasonably likely to be” children or minors. It would further create an “Eraser” button for parents and kids to eliminate personal information about a child or teen when technologically feasible.
COPPA 2.0 would also establish a “Digital Marketing Bill of Rights for Teens” that limits the collection of teens’ personal information, and it would create a youth marketing and privacy division at the FTC.
In tandem with COPPA 2.0, the same Senate committee unanimously passed the Kids Online Safety Act, also referred to as KOSA, which would establish a duty of care for social media companies to protect minors from such consequences as mental health harm, sexual trafficking and narcotics exposure through online activity. It would also require companies to undergo independent external audits, allow researchers access to platform data, and provide substantial youth and parental controls to create a safer digital environment.
Both bills were introduced with bipartisan support, which could augur well for their passage in both chambers. KOSA seemed to be on its way to passage last year, but lawmakers ran out of time to consider it.
With threats against kids online a universal problem, some nations have made a bit more progress than the United States in establishing effective safeguards. Great Britain passed a sweeping law in September to regulate online content, introducing age-verification requirements for pornography sites and other rules intended to reduce hate speech, harassment and other illicit material.
The 300-page Online Safety Bill, which also covers terrorist propaganda, online fraud and child safety, would require TikTok, YouTube, Facebook and Instagram to introduce features that allow users to choose to see less harmful content – such as material promoting eating disorders or self-harm, or trafficking in racism, misogyny or antisemitism.
State-based Action
Some U.S. states are also taking a stand to protect their youngest residents. A Utah law taking effect next March will require parents to give consent for their kids to access social media outside a 6:30 a.m.-10:30 p.m. window, and will require social media companies to build features enabling parents to access their kids’ accounts.
The state has also sued TikTok, alleging that the platform misrepresents itself as independent of China and is designed to “hook users” into its endless feed. Utah Gov. Spencer Cox said TikTok “illegally baits children into addictive and unhealthy use” with features that encourage young users to scroll endlessly in order to make more advertising money.
Less than two weeks after Utah filed suit in October, 41 states and the District of Columbia filed suit against Meta – whose social media platforms also include WhatsApp and Messenger – contending that the company knowingly used features on its platforms to cause children to use them compulsively, even as the company said that its social media sites were safe for young people.
In another court case, a federal district judge ruled in November that discovery can proceed in a suit consolidating individual cases involving hundreds of children and teens across 30 states who were allegedly harmed by social media use. Under the ruling, the Big Tech defendants cannot claim that the immunity clause of the Communications Decency Act of 1996 shields them from complaints that their platforms’ designs are defective and harm children and teen users.
While those cases wind their way through the courts, statehouses and Congress, concerned parents and other adults can check out the Strong Catholic Family Faith Project, a collaborative effort by lay leaders working in dioceses in four states. The group has combed through an array of websites devoted to the online safety of minors, and it maintains its own page of links to the sites it has judged to be of the greatest value. You can access that page at https://www.catholicfamilyfaith.org/using-media-and-technology.html.
Interested parties are also encouraged to contact their elected officials in Washington to advocate for the latest round of proposed legislation to protect children online. To learn more about KOSA, visit www.commonsensemedia.org and search for kosa-one-pager.pdf.
Mark Pattison, a freelance writer based in Washington, D.C., is the former media editor of Catholic News Service.