Can ethical AI surveillance exist? Data scientist Rumman Chowdhury doesn't think so

Rumman Chowdhury said during a recent talk that she does not believe ethical AI surveillance can exist, noting in a later interview that the issue is hugely concerning to her.

Rumman Chowdhury, the former director of machine learning ethics, transparency and accountability at Twitter, said at a recent talk that she does not believe ethical artificial intelligence surveillance can exist. 

"We cannot put lipstick on a pig," the data scientist noted at New York University’s School of Social Sciences. "I do not think ethical surveillance can exist."

In an interview published Monday in The Guardian – which spotlights that statement – Chowdhury warned that the rise of surveillance capitalism is hugely concerning to her. 

She asserted that surveillance is a use of technology that is, at its core, unequivocally racist and, as such, should not be entertained. 


In a recent op-ed for Wired referenced in the piece, Chowdhury also said that only an external board of people can be trusted to govern AI.

"We’re getting all this media attention," she told The Guardian, "and everybody is kind of like, ‘Who’s in charge?’ And then we all kind of look at each other and we’re like, ‘Um. Everyone?’"

In the interview, she lamented what she calls "moral outsourcing," or reallocating responsibility for what is built onto the products themselves. 

Her approach to regulation is that "mechanisms of accountability" should exist – and she says lack of accountability is a problem.

"There is simply risk and then your willingness to take that risk," she explained, stating that when the risk of failure becomes too great, it moves to an arena where the rules are bent in a specific direction.


"There are very few fundamentally good or bad actors in the world," she continued. "People just operate on incentive structures." 

The Harvard University Responsible AI fellow said she aimed to bridge the gap of understanding between technologists who "don't always understand people, and people [who] don't always understand technology." 

"At the core of technology is this idea that, like, humanity is flawed and that technology can save us," she said.

Notably, Chowdhury is working on a red-teaming event – during which hackers and programmers are encouraged to try to circumvent safeguards and push tech to do bad things – for AI Village at Def Con, the hacker convention. The "hackathon" is supported by industry leaders – including OpenAI, Google and Microsoft – and the Biden administration.

She said she believes that it is only through such collective efforts that proper regulation and enforcement can occur, while cautioning that overregulation could lead models to overcorrect. 

The outlet said Chowdhury added that it is not easy to define what is toxic or hateful. 

"It’s a journey that will never end," she said. "But I’m fine with that."
