India’s rapidly growing AI industry, particularly companies developing generative artificial intelligence (GenAI) models, faces mounting challenges as data privacy laws tighten and regulatory oversight intensifies.
Many businesses, from IT firms to banks and cloud storage providers, seek legal guidance to ensure their operations comply with the Digital Personal Data Protection (DPDP) Act amid concerns about using personal data in AI training.
Regulatory challenges and legal concerns
The DPDP Act, landmark legislation passed by the Indian Parliament in August 2023, sets out stringent requirements for protecting personal data while allowing its processing for lawful purposes. It is a crucial step towards ensuring data privacy and security in the rapidly evolving AI landscape.
However, many companies are building proprietary GenAI models without adequate transparency regarding how personal data is used to train these systems. This lack of transparency could violate the principles of lawful consent, fairness, and openness enshrined in the DPDP Act, raising significant legal risks.
Experts in the industry have voiced concerns that using publicly available data for AI training without proper consent could conflict with the DPDP Act and existing copyright laws. Establishing a breach of consent in AI, where models generate new outputs, adapt to new information, and operate with a degree of autonomy, presents unique challenges in the legal landscape.
Businesses are now consulting with legal experts to navigate these complexities, focusing on crafting privacy policies that secure appropriate user consent, define contractual obligations for data processors, and align with global data protection laws. The DPDP Act mandates principles like purpose limitation and data minimization, but the widespread use of the same data for multiple AI applications raises questions about compliance with these principles.
Data management is an uphill task
The uncertainty surrounding data management in AI systems extends to the technical capabilities of these models. For instance, it remains unclear whether AI models can selectively delete parts of their memory, a process sometimes called machine unlearning, or whether honouring a deletion request would require retraining a model from scratch. The cost implications of such requirements and the need to ensure compliance with the DPDP Act’s provisions are pressing issues for companies as they strive to avoid potential legal pitfalls.
Indian companies are increasingly aware of the need to future-proof their operations against legal challenges, especially in the absence of solid legal precedents regarding GenAI’s impact on citizens’ rights. Tata Consultancy Services (TCS), one of the world’s leading IT services companies, has highlighted the importance of proactive risk management and adherence to evolving legal standards. To ensure compliance, TCS emphasizes robust governance frameworks, effective consent management, and continuous monitoring of global regulatory trends.
What this means for AI development in India
The concerns extend beyond the use of personal data. Experts note that inferences made by AI about individuals are also considered personal data, further complicating the landscape. Inaccuracy and bias in GenAI applications, particularly in marketing, hiring, digital lending, and insurance claims, pose critical risks. Questions about responsibility—whether it lies with the data fiduciary or developer companies like OpenAI—are central to the ongoing debates.
As India’s AI industry continues to grow, the balance between innovation and compliance with stringent data privacy laws will be crucial. Companies must navigate these challenges carefully to avoid legal repercussions while continuing to advance AI technologies. The ongoing dialogue between businesses, legal experts, and regulators will play a decisive role in shaping the future of AI development in India.