The Ministry of Electronics and Information Technology (MeitY) recently issued an advisory aimed at significant technology platforms regarding the use of Artificial Intelligence (AI) models. This move is seen as a way to ensure the safety and trust of India's internet, as emphasised by Minister Rajeev Chandrasekhar.
The advisory specifically addresses the use of untested or unreliable AI platforms on the Indian internet. It requires large platforms to seek permission from MeitY before deploying such AI models and to inform users about the potential fallibility or unreliability of the AI-generated content.
The full text of the advisory can be found below.
In summary, the new advisory requires significant technology platforms to:
Ensure AI models do not allow users to share unlawful content.
Ensure their technology does not permit bias or discrimination, or threaten the integrity of the electoral process.
Seek explicit permission from the Government of India for under-tested/unreliable AI platforms before deployment, along with appropriate labelling and informing users about the AI models' fallibility.
Inform users clearly about the consequences of dealing with unlawful information via user agreements.
Netizens have already criticised the advisory, particularly its vagueness. The following arguments have been raised by people potentially affected by the advisory, as well as by policymakers around the world.
Regulatory Overreach: Some stakeholders could perceive the advisory as an unnecessary government intrusion into the technology sector, potentially stifling innovation and creativity.
Impact on Startups: Although startups are not required to seek permission for every AI model they wish to deploy, the ambiguity around what constitutes a "significant platform" may create uncertainty and fear of future regulatory burdens.
Compliance Burdens: Tech companies may feel the advisory adds another layer of bureaucracy that requires resources and time to navigate, which could slow down AI development and deployment.
Vague Definitions: If the advisory does not clearly define terms such as "untested AI platforms" or "significant platforms", the result can be confusion and inconsistent enforcement, making compliance more challenging for tech companies.
Impact on User Experience: Mandatory labelling and disclosure of the AI models' fallibility could degrade the user experience and potentially deter users from adopting innovative AI features.
International Competitiveness: Players in the Indian tech industry may fear that such advisories and permissions might put them at a disadvantage compared to international competitors who operate in markets with fewer regulations.
Discretionary Enforcement: There may be concerns about how the advisory will be enforced and whether there is room for discretionary interpretation, which could lead to uneven application across different companies and sectors.
Minister Rajeev Chandrasekhar has clarified that the process of seeking permission, labelling, and obtaining user consent for untested platforms is an "insurance policy" for platforms that could otherwise face consumer lawsuits. He further stated that maintaining a safe and trustworthy internet environment in India is a goal shared by the Government, users, and platforms. The ministry has requested compliance with the advisory and the submission of an Action Taken-cum-Status Report within 15 days.
The minister also clarified that this measure does not apply to startups, meaning that India's burgeoning tech startup sector can continue to innovate without the added burden of seeking permission for every AI model it wishes to deploy. Hopefully, this will help allay the sector's fears.