The story so far: On March 1, the Ministry of Electronics and Information Technology (MeitY) issued an advisory to the Artificial Intelligence industry. It said that all generative AI products, like large language models on the lines of ChatGPT and Google’s Gemini, would have to be made available “with [the] explicit permission of the Government of India” if they are “under-testing/unreliable”.
What is the government’s stand?
The advisory represents a starkly different approach to AI research and policy from the one the government had previously signalled. It came soon after Rajeev Chandrasekhar, the Minister of State for Electronics and Information Technology, reacted sharply to Google’s Gemini chatbot, whose response to the query, “Is [Prime Minister Narendra] Modi a fascist?” went viral. Mr. Chandrasekhar said the chatbot’s ambivalent response violated India’s IT law.
How has it been received?
The advisory has divided industry and observers on a key question: was this an “advisory” in the classic sense, reminding companies of existing legal obligations, or was it a mandate? “It sounds like a mandate,” Prasanth Sugathan, Legal Director at the Delhi-based Software Freedom Law Centre, said at an event on Thursday. The document, sent to large tech platforms, including Google, instructed recipients to submit an “[a]ction taken-cum-status Report to the Ministry within 15 days.” Mr. Chandrasekhar insisted that there were “legal consequences under existing laws (both criminal and tech laws) for platforms that enable or directly output unlawful content,” and that the advisory was put out for firms “to be aware that, platforms have clear existing obligations under IT and criminal law.” Mr. Chandrasekhar referred to rule 3(1)(b) of the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, which prohibits unlawful content like defamation, pornography, disinformation and anything that “threatens the unity … and sovereignty of India.” He added that the rules were intended for large tech firms and wouldn’t apply to startups.
The government hasn’t elaborated in detail on how IT laws can apply to automated AI systems in this way. Pranesh Prakash, a technology lawyer who is an affiliated fellow at the Yale Law School’s Information Society Project, said the advisory was “legally unsound,” and compared it to the Draft National Encryption Rules of 2015, a quickly withdrawn proposal to outlaw strong encryption of data in India.
The advisory also included a requirement that AI-generated imagery be labelled as such, a step the industry has vacillated on. Amazon Web Services has tried implementing an “invisible” watermark, but has expressed concern that such a move would be of little use, as watermarks can be edited out fairly easily.
Rahul Matthan, a technology lawyer and partner at the firm Trilegal, urged a more permissive approach to AI systems. “In most instances, the only way an invention will get better is if it is released into the wild — beyond the confines of the laboratory in which it was created,” Mr. Matthan wrote after the advisory was released. “If we are to have any hope of developing into a nation of innovators, we should grant our entrepreneurs the liberty to make some mistakes without any fear of consequences,” he added, pointing to the aviation industry, where he said air safety improved because planemakers were willing to share information about failures with one another.
What has been the government’s approach to the AI industry?
Until recently, the government itself was optimistic on AI, a field in which Big Tech firms have often struck a balance between seeking regulation and seeking to control the direction those regulations take. The IT Ministry last April categorically said that “the government is not considering bringing a law or regulating the growth of artificial intelligence in the country”.
But in the last few months, even before the now viral Gemini response, Mr. Chandrasekhar had expressed dissatisfaction with AI models spitting out uncomfortable responses. “You can’t ‘trial’ a car on the road and, when there is an accident, say, ‘whoops, it is just on trial.’ You need to sandbox that,” Mr. Chandrasekhar said of AI firms’ responses to criticism of bias. The tension underlines a conflict inherent to widely testing an experimental technology: wide testing is precisely what allows developers to detect mistakes in these often unruly models and improve them. That dynamic was on display when Gemini generated racially inaccurate photos of historical events, drawing a storm of criticism that led the firm to pause the photo generation feature until it worked out a fix.
Will it benefit local developers?
“This is just a poor job in communication, resulting from the need to do something in an election year,” Aakrit Vaish, co-founder of Haptik, a conversational AI firm founded in 2013, said on X. Mr. Vaish amplified subsequent clarifications on the advisory’s applicability as good news for startups, and sought input from local firms to send to the ministry.
Also read | Cabinet approves ₹10,372 crore corpus for AI infrastructure
Atul Mehra, founder of Vaayushop, an AI finance firm, expressed hope that the advisory could actually translate to a benefit for local developers. While it was a “short term hassle,” he conceded on X, “it’s a huge opportunity in disguise. It points to [a] need for local AI stacks, datasets, [and] GPUs [graphics processing units] … Let’s keep building and wait for our right moment to even beat Microsoft and Google.”