AI BIAS AND DISCRIMINATION
Other aspects of AI that need to be discussed are bias and discrimination, said Dr Chowdhury.
“I think what’s quite important for a lot of people … is to understand that this technology will not be as ubiquitous as we think it is, unless we deal with kitchen table issues like bias and discrimination,” said the data scientist who runs an organisation providing ethical AI solutions.
She noted that AI models are trained on the data of the internet, which is the data of the Western world.
“Thinking through applications in Asia will require us thinking through what the data and the sources and the biases are that could be very Western-focused, or just not appropriate for use in this region,” she said.
“What we’re trying to build AI models for, for example, is improving agricultural techniques. What will happen if these models are producing biased output because they don’t understand the crops in the region or the language that’s being spoken?”
She added that the global priorities around standards and use cases are largely driven by only a few countries.
This means that solutions to improve crop yields in a changing climate, for instance, are not top of mind.
Dr Chowdhury noted that these are problems faced by farmers in nations that do not have a seat at the same table, such as Bangladesh – where her family is originally from – and Vietnam.
“It’s really critical that we focus on what is useful for people and the rest of us … broadband access, connectivity, mobile access, all of these things are important in a population being engaged in the AI future,” she said.
Funding is largely coming from industry, which in turn puts it in the position of agenda setter, noted Dr Chowdhury. She added that philanthropies need to play a bigger role.