
SEBI’s Consultation Paper and the winds of AI Governance

02 July 2025

by Sameer Avasarala and Aryashree Kunhambu

As artificial intelligence and machine learning take centre stage in many business models and customer-centred activities, regulating them to prevent user harm becomes a key imperative not just at a legislative level but also at a regulatory one. The absence of comprehensive legislation on AI, the reported reluctance[1] to introduce a heavy regulatory framework, and legislative indications towards light-touch regulation with techno-legal measures[2] have resulted in a stronger push by sectoral regulators to address concerns arising from AI.

The Securities and Exchange Board of India (‘SEBI’) has released a consultation paper[3] dated 20 June 2025 on the guidelines for responsible usage of AI/ML in Indian Securities Markets (‘Consultation Paper’), which outlines the background, the current regulatory landscape, international best practices and the utilization of AI/ML in the market, along with the recommendations of the working group. The Consultation Paper notes that AI/ML models are currently used by exchanges, brokers and mutual funds for a wide variety of internal, customer support, security, pattern recognition, KYC, order execution and related purposes.

Regulatory approach

In the current landscape of the securities market, regulated entities are utilising AI/ML models not only to undertake various business functions (such as the use of chatbots for customer support) but also to fulfil their statutory obligations (ranging from KYC and onboarding diligence to transaction monitoring and fraud detection) under various SEBI regulations. The approach outlined in the Consultation Paper focuses on the regulation of AI/ML models based on certain core guiding principles, viz.:

* Model Governance: It is proposed that market participants may use and implement AI/ML models subject to model governance through a host of measures. These include monitoring of model functioning, efficacy and performance; risk controls; a governance structure; a contractual framework with service providers for scoping and determining rights and remedies; periodic reviews and ongoing monitoring; independent audits; traceability of reasoning and model functioning; and ensuring compliance with law and regulatory requirements.

The comprehensive framework recommended by SEBI not only encourages ethical use of artificial intelligence models but also sets standards for use and deployment of AI systems across regulated sectors and other sectors with high impact on users.

* Investor Protection and Disclosure: SEBI proposes that market participants may ensure the protection of investors by employing transparency in disclosures and fostering trust through measures including:

(i) Disclosure of information to investors regarding the usage of AI/ML applications, including product features, purposes, risks involved, accuracy of the model, fees/charges to be levied, and information about the quality of data used to make AI/ML-driven decisions, including its completeness and relevance;

(ii) Ensuring that the language of such disclosures is comprehensible to customers/clients to enable them to make informed decisions; and

(iii) Establishing an investor grievance mechanism for AI/ML systems, aligned with SEBI’s existing regulatory framework.

* Testing Framework: It is also proposed that market participants test and monitor systems employing AI/ML through measures such as segregation of testing environments from production, shadow-testing and comparison with rule-based systems for drift detection, documentation and transparency, and post-deployment monitoring for deviation detection, with human oversight or intervention where required (a brief illustrative sketch follows this list).

* Fairness and Bias: Recognizing that data quality is imperative for the proper functioning of AI systems, the recommendations revolve around ensuring adequate data quality and completeness, along with a bias testing and audit framework to ensure detection and remediation of bias (the sketch following this list includes a simple bias check). Given that the recommendations do not outline specific thresholds for fairness and bias, they serve as significant starting points for organizations to develop practical risk-based systems.

* Data Privacy and Cyber-Security Measures: In consonance with some of the requirements specified under the Digital Personal Data Protection Act, 2023 (‘DPDPA’), which is yet to be brought into force, SEBI’s recommendations revolve around clear policy frameworks for data privacy and security in the use of AI models and the handling of personal data in a legally compliant manner. Akin to existing reporting obligations (to the Indian Computer Emergency Response Team, to SEBI under the Cybersecurity and Cyber Resilience Framework (‘CSCRF’), to the Data Protection Board under the DPDPA and under other frameworks), the obligation to report glitches and data breaches to SEBI opens the way for future coordination between authorities, especially in regulated sectors, in responding convergently to such incidents.
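To make the testing and fairness principles above more concrete, below is a minimal, hypothetical Python sketch of shadow-testing an AI/ML decision model against a transparent rule-based baseline (for drift detection), together with a simple approval-rate bias check across customer groups. The model logic, field names and thresholds are illustrative assumptions and are not prescribed by the Consultation Paper.

```python
# Hypothetical sketch: shadow-testing an AI/ML model against a rule-based
# baseline (drift detection) plus a simple approval-rate bias check.
# All thresholds, field names and model logic are illustrative only.

from dataclasses import dataclass

@dataclass
class Application:
    income: float
    credit_score: int
    group: str  # demographic segment, used only for the bias check

def rule_based_decision(app: Application) -> bool:
    """Transparent baseline rule that the AI model is shadow-tested against."""
    return app.income > 300_000 and app.credit_score >= 650

def ai_model_decision(app: Application) -> bool:
    """Stand-in for the production AI/ML model's approve/reject output."""
    return app.credit_score >= 640 and app.income > 280_000  # placeholder

def drift_rate(apps: list[Application]) -> float:
    """Fraction of cases where the AI model disagrees with the baseline."""
    disagreements = sum(
        ai_model_decision(a) != rule_based_decision(a) for a in apps
    )
    return disagreements / len(apps)

def approval_rates_by_group(apps: list[Application]) -> dict[str, float]:
    """Approval rate per group; large gaps may warrant a bias review."""
    totals: dict[str, tuple[int, int]] = {}
    for a in apps:
        approved, seen = totals.get(a.group, (0, 0))
        totals[a.group] = (approved + int(ai_model_decision(a)), seen + 1)
    return {g: approved / seen for g, (approved, seen) in totals.items()}

if __name__ == "__main__":
    sample = [
        Application(350_000, 700, "A"),
        Application(290_000, 645, "B"),
        Application(500_000, 630, "A"),
        Application(310_000, 660, "B"),
    ]
    # Illustrative escalation threshold; a real framework would calibrate it.
    if drift_rate(sample) > 0.10:
        print("Drift above 10%: escalate for human review")
    print("Approval rates by group:", approval_rates_by_group(sample))
```

In practice, the drift threshold and fairness metric would be calibrated to the entity’s risk appetite and documented as part of its model-governance framework.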

Tiered approach to compliance

The Consultation Paper prudently differentiates between customer-facing and back-office models, providing a light-touch framework for the latter. While high-impact, client-facing deployments (for example, robo-advice, portfolio rebalancing or automated order routing) are subject to the entire gamut of governance, disclosure, fairness and testing obligations, internal utilities (such as cyber-security analytics or regulatory reporting) may operate under a lighter regime that aims to provide proportionate safeguards.

Classification based on the risk associated with the deployment of AI systems is also seen across the world, for instance under the European Union’s AI Act, which imposes higher thresholds of obligations on systems considered ‘high-risk’, including those that can impact critical infrastructure for financial services or negatively affect individual investors.

Risk categorizations

The Annex to the Consultation Paper also outlines control measures that market participants may utilize to mitigate certain identified risks emerging from the use of AI/ML models. These risks and measures include:

* Malicious Use: To address the risk of malicious use of AI models to generate falsified content that impacts the market, the recommended control measures revolve around the use of digital signatures for watermarking, reporting and public awareness (a signing sketch follows this list);

* Concentration: To combat risks associated with concentration of Gen-AI providers, the recommended measures include diversification, reporting of service providers to the regulator, and periodic monitoring of dominant Gen-AI providers;

* Herding or Collusion: To mitigate risks associated with the usage of commonly-used AI systems and models, the recommendations revolve around promoting the use of varied AI architectures and proprietary datasets, along with auditing and monitoring of herding behaviour;

* Interpretability: Considering the use of complex Gen-AI models and the typical difficulty in understanding how they arrive at their outputs, the Consultation Paper recommends documentation of AI processes, the use of interpretable models or explainability tools, and human review of AI output;

* Model Failure: In a market that extensively uses AI models, flaws in Gen-AI systems may have consequential effects on financial stability; it is therefore recommended to stress-test systems, implement volatility controls such as circuit breakers (a circuit-breaker sketch also follows this list) and exercise human oversight;

* Non-Compliances: The Consultation Paper also refers to situations where the use of AI models may lead to non-compliance, or where market participants may seek to shift liability to Gen-AI providers, and recommends regulatory sandboxes or testing, training, and ‘human-in-the-loop’ or ‘human-around-the-loop’ mechanisms.
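As an illustration of the digital-signature watermarking measure flagged under ‘Malicious Use’ above, the following hedged sketch signs AI-generated content with an Ed25519 key so that recipients can verify its provenance. It assumes the third-party Python ‘cryptography’ package; key management, certificate distribution and embedding the signature into content metadata are deliberately out of scope.

```python
# Hypothetical sketch: signing AI-generated content so its origin can be
# verified downstream. Requires the third-party 'cryptography' package.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

# In practice the private key would live in an HSM or key-management service.
private_key = ed25519.Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"AI-generated market commentary ..."
signature = private_key.sign(content)  # 64-byte Ed25519 signature

# A recipient holding the publisher's public key verifies provenance:
try:
    public_key.verify(signature, content)
    print("Signature valid: content attributable to the publisher")
except InvalidSignature:
    print("Signature invalid: content may have been tampered with")
```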
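Similarly, the volatility controls contemplated under ‘Model Failure’ can be pictured as a simple pre-trade circuit breaker that halts model-driven order flow once prices move outside a reference band. The 10% band and the structure below are illustrative assumptions; actual price bands are set by exchange rules.

```python
# Hypothetical sketch: a pre-trade circuit breaker that halts automated
# (model-driven) orders when the price moves beyond a set band.
# The 10% band is illustrative; real bands are set by exchange rules.

class CircuitBreaker:
    def __init__(self, reference_price: float, band: float = 0.10):
        self.reference_price = reference_price
        self.band = band
        self.halted = False

    def check(self, last_price: float) -> bool:
        """Return True if model-driven trading may continue at this price."""
        move = abs(last_price - self.reference_price) / self.reference_price
        if move > self.band:
            self.halted = True  # halt model-driven order flow
        return not self.halted

breaker = CircuitBreaker(reference_price=100.0)
for price in (102.0, 97.5, 88.0, 101.0):
    if breaker.check(price):
        print(f"{price}: order flow permitted")
    else:
        print(f"{price}: circuit breaker tripped, human review required")
```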

Looking ahead

In light of the above discussion, we note that the Consultation Paper not only provides key recommendations in the absence of a comprehensive AI legislative framework but also offers insight into the perspective of sectoral regulators such as SEBI on the adoption of AI and its emergent risks. While the proposed measures are in line with globally comparable methods of AI regulation, it is important to consider that finer specifics on the implementation and practical application of the measures would have to be spelt out in due course.

The Consultation Paper positions India’s capital market regulator at the forefront of responsible AI governance and marks an important step towards recognizing the risks caused by the use of AI models. The measures serve as significant foundational guidelines based on which various obligations may fructify in subsequent regulations, not just by SEBI but also by other regulators who aim to ring-fence regulated entities from the perils of AI use.

[The authors are Principal Associate and Associate, respectively, in Corporate and M&A practice at Lakshmikumaran & Sridharan Attorneys, Hyderabad]

 

[1] 'No regulations for Artificial Intelligence in India': IT Minister Ashwini Vaishnaw, available here

[2] India developing unique AI regulation model: Ashwini Vaishnaw, available here

[3] Consultation Paper on guidelines for responsible usage of AI/ML in Indian Securities Markets, available here
