The Indian government’s Ministry of Electronics and IT issued an advisory requiring tech companies to obtain government approval before launching “significant” AI models in India, a new policy that has sparked considerable debate.
The government’s stated aims are to prevent the misuse of AI, particularly the spread of misinformation and deepfakes, and to ensure users are aware that AI outputs may be unreliable.
Under the advisory, tech companies need government permission before deploying AI models, especially models that are still under development or insufficiently tested.
AI-generated content must be labelled or otherwise identifiable, so that its origin can be traced and creators can be held accountable for misinformation.
Companies must also inform users of the limitations and potential unreliability of AI outputs, for example through disclaimers and consent mechanisms.
The policy has been met with mixed reactions. Some view it as a necessary step to regulate AI and protect users; others are concerned that it could slow innovation and note the absence of clear criteria for what constitutes a “significant” AI model.