Just when we thought the #MeToo and #TimesUp viral campaigns were dealing deathblows to institutionalised sexism, another spectre has emerged to threaten female social and economic progress.

The advances being made in Artificial Intelligence (AI) applications and machine learning are not gender neutral. According to research by Dr. Brahnam, Assistant Professor in Computer Information Systems at Missouri State University, users direct more sexual and profane comments towards female-presenting chatbots than towards their male counterparts, and this harassment of female chatbots may well entrench the existing societal pattern of sexual harassment of women.

Brahnam’s research highlights the potential of AI to perpetuate gender bias, role stereotyping, prejudice and abusive behaviour towards women, in both virtual and flesh-and-blood arenas.

A recent survey undertaken by the British Science Association found that the average UK person has little knowledge of the current impact of AI advances and is apprehensive of its potential negative effects. Tackling low general awareness of how AI is already being embedded into our daily routines may well be the next big challenge for the female equality movement.

We have all experienced how the most popular and widely adopted AI virtual assistants, such as Siri, Alexa, and Google Home, have been designed and programmed with socially prescribed female personas. Dr Brahnam suggests that the design and coding of these virtual assistants perpetuate the stereotype that women are subservient to men. With the ever-growing use of chatbots across industries, academic research shows that users not only direct more sexual and profane comments towards female-presenting chatbots but also attribute negative stereotypes to them, making them more often the objects of implicit and explicit sexual attention and swearing.

Leah Fessler, writing in Quartz, reviewed how different female-sounding bots responded to various forms of harassment. Her findings suggest that the coded responses show designers had anticipated sexual harassment but had decided not to tackle it by coding more socially responsible replies. Instead, Fessler suggests, the bots' passive responses help entrench sexist bias. Does failing to address a known social bias help perpetuate that bias? Is this one of the ethical issues of AI that needs to be addressed seriously and quickly, as the march of AI currently outstrips regulators' ability to put boundaries around it?

A precedent has already been established in the world of virtual games: in World of Warcraft (WoW), for example, players receive an immediate suspension if they use offensive language or bully others, and some games are deliberately coded to promote positive social interactions and positive female role models. It has been suggested that WoW is coded within a virtue ethics framework.

Fessler suggests that the R&D companies and manufacturers who design and market digital female stereotypes perhaps bear a much higher accountability because of their potential global social impact. Instead of ignoring the problem, she suggests designers could take a positive and proactive role in addressing harassment by coding responses that challenge the bias: responses such as “harassment is unacceptable”, “are you aware that denigrating females is a human rights issue?”, or “please observe appropriate standards when interacting with females in the virtual and physical worlds”. Or, like the gaming world, they could “sinbin” users for anything from 3 to 72 hours, or even suspend them entirely until better behaviour is demonstrated.
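To make the suggestion concrete, here is a minimal sketch of what such a design could look like. Everything in it is hypothetical and invented for illustration: the abusive-phrase list, the challenge responses, the three-strike rule, and the 3-hour suspension are assumptions layered on the article's ideas, not the behaviour of any real assistant.

```python
import time

# Illustrative placeholders only; a real system would use a trained
# classifier rather than a fixed phrase list.
ABUSIVE_PHRASES = {"you slut", "shut up woman"}

CHALLENGE_RESPONSES = [
    "Harassment is unacceptable.",
    "Are you aware that denigrating women is a human rights issue?",
]

SINBIN_SECONDS = 3 * 60 * 60  # 3-hour suspension, the article's lower bound


class ModeratedBot:
    """Wraps a chatbot with a harassment filter and a 'sinbin'."""

    def __init__(self):
        self.strikes = {}          # user_id -> count of abusive messages
        self.sinbinned_until = {}  # user_id -> timestamp suspension ends

    def reply(self, user_id, message, now=None):
        now = now if now is not None else time.time()
        # Refuse to engage while the user is suspended.
        if self.sinbinned_until.get(user_id, 0) > now:
            return "Your access is suspended for abusive behaviour."
        if any(p in message.lower() for p in ABUSIVE_PHRASES):
            strikes = self.strikes.get(user_id, 0) + 1
            self.strikes[user_id] = strikes
            if strikes >= 3:  # repeated abuse: temporary suspension
                self.sinbinned_until[user_id] = now + SINBIN_SECONDS
                return "Repeated harassment: you are suspended for 3 hours."
            # Early offences get a challenge, not a passive deflection.
            return CHALLENGE_RESPONSES[(strikes - 1) % len(CHALLENGE_RESPONSES)]
        return "How can I help?"  # stand-in for the bot's normal reply logic
```

The design choice the sketch illustrates is Fessler's point: the bot actively names the behaviour as harassment instead of deflecting, and escalates to a suspension the way WoW does, rather than absorbing abuse passively.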

There is ample research showing that females are under-represented in the technology industry and hold disproportionately fewer tech-related jobs throughout the developed world. Are we already seeing the ethical implications of this in the design of the current crop of chatbots, with their unemancipated female personas?

The tech giants of Silicon Valley have been accused of operating with an ethos of “build first and ask for forgiveness later”. Females may have to pay a much higher price for this approach. Perhaps a new #ethicalAI campaign needs to emerge to persuade regulators that inclusion here, too, is a social imperative. How can we ensure that techies do not decide the pace of social change for half of humankind?