Saturday, December 4, 2021

School monitoring of students through laptops may be doing more harm than good

Since the start of the pandemic, more and more public school students have been using laptops, tablets or similar devices issued by their schools.

A September 2021 report shows that the share of teachers who said their schools provided such devices to students had doubled, from 43% before the pandemic to 86%.

It can be tempting to celebrate how schools are doing more to keep their students digitally connected during the pandemic. The problem is that schools are not simply providing computers so children can complete their schoolwork. Instead – in a trend that can easily be described as Orwellian – most schools are also using tools that keep tabs on what students are doing in their personal lives.

Indeed, 80% of teachers and 77% of high school students reported that their schools had installed artificial intelligence-based monitoring software on these devices to track students’ online activities and what is stored on their computers.

This student monitoring – carried out at taxpayer expense – is taking place in cities and school communities throughout the United States.

For example, in the Minneapolis school district, officials have paid more than $355,000 to use tools provided by student surveillance company Gaggle through 2023. Three-quarters of reported incidents – that is, cases where the system flagged students’ online activity – occurred outside school hours.

In Baltimore, where the public school system uses the GoGuardian monitoring app, police officers are sent to children’s homes when the system detects students typing keywords related to self-harm.

Security vs. privacy

Vendors claim that these tools keep students safe from self-harm or from online activities that could lead to trouble. However, privacy groups and news outlets have questioned those claims.

Vendors have often declined to explain how their artificial intelligence programs were trained and what types of data were used to train them.

Privacy advocates fear these tools could harm students by criminalizing mental health problems and inhibiting free expression.

As a researcher who studies privacy and security issues in a variety of settings, I know that intrusive surveillance techniques cause emotional and psychological harm to students, disproportionately punish minority students and undermine online security.

Artificial intelligence is not intelligent enough

Even the most advanced artificial intelligence lacks the ability to understand human language and context. This is why student monitoring systems flag so many false positives instead of real problems.

In some cases, these surveillance programs have flagged students for discussing music deemed suspicious and even for talking about the novel “To Kill a Mockingbird”.
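To see why this kind of context-free matching misfires, here is a minimal sketch of a naive keyword flagger. The keyword list and messages are hypothetical illustrations, not taken from any vendor’s product:

```python
# Hypothetical keyword list for a naive, context-free flagger.
FLAGGED_KEYWORDS = ("kill", "shoot", "die")

def flag_message(text: str) -> list[str]:
    """Return every keyword that appears anywhere in the text, ignoring context."""
    lowered = text.lower()
    return [kw for kw in FLAGGED_KEYWORDS if kw in lowered]

messages = [
    "Our book club is reading To Kill a Mockingbird this month.",
    "I need to study for the test or my grade will die... just kidding.",
]

for msg in messages:
    hits = flag_message(msg)
    if hits:
        print(f"FLAGGED {hits}: {msg!r}")
# Both harmless messages are flagged, because the matcher has no notion of context.
```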

Harm to students

When students know they are being monitored, they are less likely to share their true thoughts online and are more guarded about what they search for. This can discourage vulnerable groups, such as students with mental health issues, from seeking essential services.

When students learn that their every move and everything they read is being monitored, they are also less likely to develop into confident adults. In general, monitoring has a negative effect on students’ capacity to act for themselves and to use analytical reasoning. It also hinders the development of the skills and mindset they need to exercise their rights.

More adverse effects on minorities

American schools disproportionately discipline minority students. African American students are three times more likely to be suspended than their white peers.

After evaluating flagged content, vendors report any concerns to school officials, who take disciplinary action on a case-by-case basis. The lack of oversight over how schools use these tools could further harm minority students.


The situation is made worse by the fact that Black and Hispanic students rely more on school-issued devices than their white peers. This in turn makes minority students more likely to be monitored and exposes them to a greater risk of some form of intervention.

Students of color rely more on school-issued laptops than their white peers.
Igor Alexander / E+ via Getty Images

When both minority students and their white peers are monitored, the former group is more likely to be punished, because the training data used to develop artificial intelligence programs often fails to include enough minorities. These programs are more likely to flag language written and spoken by such groups, both because that language is underrepresented in the datasets used to train them and because of the lack of diversity among the people working in the field.

Tweets written by African Americans are 50% more likely to be flagged as “offensive” by leading AI models than tweets written by others, and tweets written in African American English are 2.2 times more likely to be flagged.

These tools also disproportionately affect sexual and gender minorities. Gaggle has reportedly flagged “gay,” “lesbian” and other LGBTQ-related words as being associated with pornography, even though the words are often used to describe one’s identity.

Increased security risk

These surveillance systems also increase students’ cybersecurity risks. First, in order to comprehensively monitor students’ activities, monitoring vendors require students to install a set of certificates known as root certificates. As the highest-level security certificate installed on a device, a root certificate acts as the “master certificate” that determines the security of the entire system. The catch is that these certificates weaken the cybersecurity checks built into the devices.
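As a rough illustration of why this matters, the sketch below is one way to check which authority issued the certificate presented for a well-known website. This is a hypothetical check, not any vendor’s tooling, and it assumes Python’s ssl module is consulting the device’s system trust store. On a device where monitoring software intercepts HTTPS traffic through its own root certificate, the issuer shown would be the vendor’s certificate authority rather than a public one.

```python
import socket
import ssl

HOST = "example.com"  # any well-known HTTPS site

# Open a TLS connection and read back the certificate the device actually sees.
ctx = ssl.create_default_context()
with socket.create_connection((HOST, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()

# The issuer field names the certificate authority that signed the certificate.
issuer = dict(pair[0] for pair in cert["issuer"])
print(f"Certificate for {HOST} issued by: {issuer.get('organizationName')}")

# A public certificate authority is expected here; a monitoring vendor's or
# school's name instead suggests the connection is being intercepted and
# re-signed locally under an installed root certificate.
```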


Gaggle, which scans the digital files of more than 5 million students every year, installs such certificates. This strategy of installing certificates is similar to the approach authoritarian regimes, such as the government of Kazakhstan, use to monitor and control their citizens, and that cybercriminals use to lure victims to infected websites.

Second, surveillance system vendors run vulnerable systems that hackers can exploit. In March 2021, computer security company McAfee found several vulnerabilities in student monitoring vendor Netop’s Vision Pro Education software. For example, Netop did not encrypt communications between teachers and students, which would have blocked unauthorized access.

The software was used by more than 9,000 schools around the world to monitor millions of students. The vulnerability allowed hackers to gain control of the webcams and microphones in students’ computers.
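For contrast, here is a minimal sketch of what encrypting such communications buys. It is a hypothetical illustration using the third-party Python cryptography package, not anything Netop ships: the plaintext form is readable by anyone on the network path, while the encrypted form is opaque without the key.

```python
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical shared key, distributed out of band to the teacher and student apps.
key = Fernet.generate_key()
cipher = Fernet(key)

message = b"Teacher -> student: please reopen the assignment."

# What an unencrypted product puts on the wire: readable by anyone on the path.
print("plaintext on the wire: ", message)

# What an encrypted product puts on the wire: opaque without the key.
token = cipher.encrypt(message)
print("ciphertext on the wire:", token[:44], b"...")

# Only a holder of the key can recover the original message.
assert cipher.decrypt(token) == message
```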

Finally, students’ personal information stored by vendors is susceptible to breaches. In July 2020, criminals hacked into online proctoring service ProctorU and stole 444,000 students’ personal data – including names, email addresses, home addresses, phone numbers and passwords. This data was then leaked online.

Schools would do well to look more closely at the harms caused by student surveillance and to question whether these tools actually make students more secure – or less.

This article is republished from The Conversation. Read the original article.
