
Amid Sextortion’s Rise, Computer Scientists Tap A.I. to Identify Risky Apps

Almost weekly, Brian Levine, a computer scientist at the University of Massachusetts Amherst, is asked the same question by his 14-year-old daughter: Can I download this app?

Mr. Levine responds by scanning hundreds of customer reviews in the App Store for allegations of harassment or child sexual abuse. The manual and arbitrary process has made him wonder why more resources aren’t available to help parents make quick decisions about apps.

Over the past two years, Mr. Levine has sought to help parents by designing a computational model that assesses customers’ reviews of social apps. Using artificial intelligence to evaluate the context of reviews with words such as “child porn” or “pedo,” he and a team of researchers have built a searchable website called the App Danger Project, which provides clear guidance on the safety of social networking apps.

The website tallies user reviews about sexual predators and provides safety assessments of apps with negative reviews. It lists reviews that mention sexual abuse. Though the team didn’t follow up with reviewers to verify their claims, it read each one and excluded those that didn’t highlight child-safety concerns.

“There are reviews out there that talk about the type of dangerous behavior that occurs, but those reviews are drowned out,” Mr. Levine said. “You can’t find them.”

Predators are increasingly weaponizing apps and online services to collect explicit images. Last year, law enforcement received 7,000 reports of children and teenagers who were coerced into sending nude images and then blackmailed for photographs or money. The F.B.I. declined to say how many of those reports were credible. The incidents, which are called sextortion, more than doubled during the pandemic.

Because Apple’s and Google’s app stores don’t offer keyword searches, Mr. Levine said, it can be difficult for parents to find warnings of inappropriate sexual conduct. He envisions the App Danger Project, which is free, complementing other services that vet products’ suitability for children, like Common Sense Media, by identifying apps that aren’t doing enough to police users. He doesn’t plan to profit off the site but is encouraging donations to the University of Massachusetts to offset its costs.

Mr. Levine and a dozen computer scientists investigated the number of reviews that warned of child sexual abuse across more than 550 social networking apps distributed by Apple and Google. They found that a fifth of those apps had two or more complaints of child sexual abuse material and that 81 offerings across the App and Play stores had seven or more of those types of reviews.

Their investigation builds on previous reports of apps with complaints of unwanted sexual interactions. In 2019, The New York Times detailed how predators treat video games and social media platforms as hunting grounds. A separate report that year by The Washington Post found thousands of complaints across six apps, leading to Apple’s removal of the apps Monkey, ChatLive and Chat for Strangers.

Apple and Google have a financial interest in distributing apps. The tech giants, which take up to 30 percent of app store sales, helped three apps with multiple user reports of sexual abuse generate $30 million in sales last year: Hoop, MeetMe and Whisper, according to Sensor Tower, a market research firm.

In more than a dozen criminal cases, the Justice Department has described those apps as tools that were used to ask children for sexual images or meetings — Hoop in Minnesota; MeetMe in California, Kentucky and Iowa; and Whisper in Illinois, Texas and Ohio.

Mr. Levine said Apple and Google should provide parents with more information about the risks posed by some apps and better police those with a track record of abuse.

“We’re not saying that every app with reviews that say child predators are on it should get kicked off, but if they have the technology to check this, why are some of these problematic apps still in the stores?” asked Hany Farid, a computer scientist at the University of California, Berkeley, who worked with Mr. Levine on the App Danger Project.

Apple and Google said they regularly scan user reviews of apps with their own computational models and investigate allegations of child sexual abuse. When apps violate their policies, they are removed. Apps have age ratings to help parents and children, and software allows parents to veto downloads. The companies also offer app developers tools to police child sexual material.

Dan Jackson, a spokesman for Google, said the company had investigated the apps listed by the App Danger Project and hadn’t found evidence of child sexual abuse material.

“While user reviews do play an important role as a signal to trigger further investigation, allegations from reviews are not reliable enough on their own,” he said.

Apple also investigated the apps listed by the App Danger Project and removed 10 that violated its rules for distribution. It declined to provide a list of those apps or the reasons it took action.

“Our App Review team works 24/7 to carefully review every new app and app update to ensure it meets Apple’s standards,” a spokesman said in a statement.

The App Danger Project said it had found a significant number of reviews suggesting that Hoop, a social networking app, was unsafe for children; for example, 176 of 32,000 reviews since 2019 included reports of sexual abuse.

“There is an abundance of sexual predators on here who spam people with links to join dating sites, as well as people named ‘Read my picture,’” says a review pulled from the App Store. “It has a picture of a little child and says to go to their site for child porn.”

Hoop, which is under new management, has a new content moderation system to strengthen user safety, said Liath Ariche, Hoop’s chief executive, adding that the researchers spotlighted how the original founders struggled to deal with bots and malicious users. “The situation has drastically improved,” the chief executive said.

The Meet Group, which owns MeetMe, said it didn’t tolerate abuse or exploitation of minors and used artificial intelligence tools to detect predators and report them to law enforcement. It reports inappropriate or suspicious activity to the authorities, including a 2019 episode in which a man from Raleigh, N.C., solicited child pornography.

Whisper didn’t respond to requests for comment.

Sgt. Sean Pierce, who leads the San Jose Police Department’s task force on internet crimes against children, said some app developers avoided investigating complaints about sextortion to reduce their legal liability. The law says they don’t have to report criminal activity unless they find it, he said.

“It’s more the fault of the apps than the app store because the apps are the ones doing this,” said Sergeant Pierce, who offers presentations at San Jose schools through a program called the Vigilant Parent Initiative. Part of the challenge, he said, is that many apps connect strangers for anonymous conversations, making it hard for law enforcement to verify complaints.

Apple and Google make hundreds of reports annually to the U.S. clearinghouse for child sexual abuse but don’t specify whether any of those reports are related to apps.

Whisper is among the social media apps that Mr. Levine’s team found had multiple reviews mentioning sexual exploitation. After downloading the app, a high school student received a message in 2018 from a stranger who offered to contribute to a school robotics fund-raiser in exchange for a topless photograph. After she sent a picture, the stranger threatened to send it to her family unless she provided more images.

The teenager’s family reported the incident to local law enforcement, according to a report by the Mascoutah Police Department in Illinois, which later arrested a local man, Joshua Breckel. He was sentenced to 35 years in jail for extortion and child pornography. Though Whisper wasn’t found responsible, it was named alongside a half dozen apps as the primary tools he used to collect images from victims ranging in age from 10 to 15.

Chris Hoell, a former federal prosecutor in the Southern District of Illinois who worked on the Breckel case, said the App Danger Project’s comprehensive evaluation of reviews could help parents protect their children from issues on apps such as Whisper.

“This is like an aggressively spreading, treatment-resistant tumor,” said Mr. Hoell, who now has a private practice in St. Louis. “We need more tools.”

