A.I. used to fake nude photos of students
Westfield High School in Westfield, New Jersey, a red-brick building with a scoreboard out front welcoming visitors to the "Home of the Blue Devils" sports teams, held a routine board meeting in late March.
But the meeting was anything but routine for Dorota Mani.
In October, several 10th-grade girls at Westfield High School, including Ms. Mani's 14-year-old daughter, Francesca, told administrators that boys in their class had used artificial intelligence software to fabricate explicit images of them and circulated the fakes. Five months later, according to the Manis and other families, the district has still not publicly addressed the doctored photos or updated school policies to prevent exploitative uses of artificial intelligence.
Ms. Mani, the founder of a local preschool, admonished board members during the meeting, saying, "It seems as though the Westfield High School administration and the district are trying to mute the incident."
The school district said in a statement that it had opened an "immediate investigation" as soon as it learned of the incident, promptly notified and consulted the police, and provided group counseling to the sophomore class.
"All school districts are grappling with the challenges and impact of artificial intelligence and other technology available to students at any time and anywhere," Raymond González, the superintendent of Westfield Public Schools, said in the statement.
Blindsided last year by the sudden popularity of A.I.-powered chatbots like ChatGPT, schools across the United States scrambled to contain them and keep students from cheating. Now an even more alarming A.I. image-generating phenomenon is shaking schools.
Using publicly available "nudification" apps, boys in several states have turned real, identifiable photos of their clothed female classmates, taken at events like school proms, into graphic, convincing-looking images of the girls with A.I.-generated exposed breasts and genitalia. In some cases, according to school and police reports, boys shared the faked images in the school lunchroom, on the school bus, or through group chats on social media platforms like Instagram and Snapchat.
These digitally altered images, also known as "deepfakes" or "deepnudes," can have devastating consequences. Child sexual exploitation experts say that the use of nonconsensual, A.I.-generated images to harass, humiliate and bully young women can harm their mental and physical health, as well as jeopardize their college and career prospects.
Last month, the Federal Bureau of Investigation warned that it is illegal to distribute computer-generated child sexual abuse material, including realistic-looking A.I.-generated images of identifiable minors engaging in sexually explicit conduct.
Yet because students' use of exploitative A.I. apps in schools is so new, some districts appear less prepared than others to address it, which may put students' safety in jeopardy.
"This phenomenon has emerged out of nowhere and could be taking many school districts by surprise, leaving them unsure of how to respond," stated Riana Pfefferkorn, a research fellow at the Stanford Internet Observatory who specializes in legal matters pertaining to computer-generated imagery of child sexual assault.
At Issaquah High School near Seattle last fall, a police detective investigating parents' complaints about explicit A.I.-generated images of their 14- and 15-year-old daughters reportedly asked an assistant principal why the school had not gone to the police.
According to the police report, the official asked "what was she supposed to report," prompting the detective to explain that schools are required by law to report sexual abuse, including possible child sexual abuse material.
The school then reported the incident to Child Protective Services, the report said. (The New York Times obtained the police report through a public records request.)
In a statement, the Issaquah School District said it had talked with students, families and the police as part of its investigation into the deepfakes. The district also "shared our empathy," the statement said, and offered support to the affected students.
The statement added that, "according to our legal team, we are not required to report fake images to the police," and that, "out of an abundance of caution," the district had sent the "fake, artificial-intelligence-generated images to Child Protective Services."
In February, administrators at Beverly Vista Middle School in Beverly Hills, California, notified the police after learning that five boys had used artificial intelligence to create and share explicit images of female classmates. Two weeks later, district records show, the school board authorized the expulsion of five students. (The district said California's education laws prevented it from confirming whether the expelled students were the ones who had created the images.)
Michael Bregy, the superintendent of the Beverly Hills Unified School District, said he and other school leaders wanted to set a national precedent that students must not be allowed to create and circulate sexually explicit images of their peers.
Dr. Bregy described the explicit images as "disturbing and violative" to the girls and their families, and said they amounted to serious bullying at school. "This is something that we will never, ever tolerate," he said.
Schools in Beverly Hills and Westfield, two small, affluent communities, were among the first to publicly acknowledge deepfake incidents. The details of the cases, described in court documents, congressional hearings, school board meetings and district correspondence with parents, underscore how differently schools have responded.
The Westfield incident allegedly began last summer when a male high school student asked to follow the private Instagram account of a 15-year-old female classmate, according to a lawsuit the girl and her family later filed against the boy and his parents. (The Manis said they are not involved in the lawsuit.)
After she accepted the request, court documents say, the boy copied photos of her and several other female classmates from their social media accounts. He then used an A.I. tool to generate sexually explicit, "fully identifiable" images of the girls and shared them with classmates through a Snapchat group, according to the court documents.
Westfield High began investigating in late October. Francesca Mani said that while administrators discreetly pulled some boys aside for questioning, they summoned her and the other 10th-grade girls targeted by the deepfakes to the school office by announcing their names over the loudspeaker.
That week, Mary Asfendis, the principal of Westfield High, sent parents an email warning them of "a situation that resulted in widespread misinformation." The email went on to describe the deepfakes as a "very serious incident." It also said that, despite students' concerns about possible image-sharing, the school believed "any created images have been deleted and are not being circulated."
Dorota Mani said Westfield administrators told her that the district had suspended the male student accused of fabricating the images for a day or two.
She and her daughter soon began speaking publicly about the experience, urging Congress, state lawmakers and school districts to pass laws and rules expressly prohibiting explicit deepfakes.
"We have to start updating our school policy," Francesca Mani, who is now 15, said in a recent interview, "because students like me would have been protected if the school had A.I. policies."
Parents, including Dorota Mani, also filed harassment complaints with Westfield High last fall over the explicit images. But Ms. Mani told school board members in March that the high school had yet to give parents an official account of what happened.
Westfield Public Schools said it could not comment on any disciplinary actions out of concern for student confidentiality. In his statement, Dr. González, the superintendent, said the district was strengthening its efforts "by educating our students and establishing clear guidelines to ensure that these new technologies are used responsibly."
Beverly Hills schools have taken a more forceful public stance.
After discovering in February that eighth-grade boys at Beverly Vista Middle School had created explicit images of 12- and 13-year-old female classmates, administrators promptly sent a message to all district parents, staff, and middle and high school students with the subject line "Appalling Misuse of Artificial Intelligence." The message urged community members to share information with the school to help ensure that students' "disturbing and inappropriate" use of A.I. "stops immediately."
It also warned that the district was prepared to impose severe punishment. "Any student found to be creating, disseminating, or in possession of AI-generated images of this nature will face disciplinary actions," the message said, including a recommendation for expulsion.
Dr. Bregy, the superintendent, said the misuse of artificial intelligence was making students feel unsafe in school, and that was why schools and lawmakers needed to act quickly.
"In schools, there's a lot of talk about physical safety," he said. "But this invasion of students' personal and emotional safety is largely missing from the conversation."