SEC Division of Examinations Releases 2026…
The 2026 regulatory environment reflects a pro-innovation stance grounded in enforcement of existing rules. Regulators expect financial firms to leverage AI and emerging technologies responsibly—without compromising governance, control, or compliance.
As generative AI (GenAI) becomes a core capability across U.S. capital markets, supervisory expectations are evolving. Firms must now move from simple experimentation to enterprise-level oversight, ensuring AI-enabled processes operate within documented controls and active human supervision.
1. Transparency & Disclosure
Firms should move beyond boilerplate language to materiality-based reporting. Public disclosures should accurately reflect actual AI capabilities, avoiding "AI washing": the practice of exaggerating AI's impact on trading or risk management.
2. Adequate Supervision
AI governance must be operationalized within existing supervisory frameworks. This includes integrating technology-specific protocols into internal policies, monitoring third-party vendors, and implementing guardrails to prevent AI errors and data misuse.
3. Active Human Oversight
Supervisory responsibility cannot be delegated entirely to algorithms. Oversight must remain human-led and documented, with reviewers empowered to catch and correct system errors.
Sia bridges the gap between innovation and accountability. Our agentic tool, RegAI, enables firms to scale AI use cases responsibly while maintaining human-led oversight and regulatory defensibility. With Sia, organizations can operationalize AI adoption, automate compliance processes, and ensure proper governance at scale.
Associate Partner, Financial Services | New York
Zoya is an Associate Partner in our Financial Services Practice, where she leads the Legal and Compliance unit.