Teja Manakame, VP, IT India, Dell Technologies. Photo: Special Arrangement

Some believe artificial intelligence (AI) could display stronger racial and gender biases than humans do. How, then, can banks, hospitals, governments, and public service bodies deploy Generative AI to run their functions globally while ensuring fair treatment of their customers and citizens?

Teja Manakame, VP, IT India, Dell Technologies, says biases can inadvertently be introduced into a system through the prompts used to generate results. She advises exercising caution while designing prompts to guard against unfair bias and discrimination.

Talking to The Hindu about ethical AI, she said, “AI must be honest, fair, and equitable. It must avoid unfair bias and guard against prejudice and the marginalisation of vulnerable populations.” Ms. Manakame also cautioned that ‘equitable’ may not always mean ‘impartial’, and that AI systems may require human oversight or intervention to ensure equitable results.

She said the outcomes of any technology depend on how we use it. AI is just another technology, without human cognition; the system has no inherent bias, but its behaviour depends on how it is trained.

The outcomes will depend on how we train the models, the data sets we use, and the design of the prompts. Hence, it is crucial that the data sets used for training models are fair and cover all scenarios, avoiding possible biases, she added.

When asked to comment on Elon Musk’s statement that AI is a ‘destructive force’, she said AI is now ubiquitous in businesses of all sizes, and in consumer and residential applications.

“We see the usage of AI in every aspect of life, right from purchasing something online to self-driving vehicles. This technology makes our lives easier and enhances customer experiences. As with any technology, we must leverage it to augment human intelligence and enhance our productivity,” Ms. Manakame added.
