Annex - Arrangement for Generative A.I. Sandbox++ (“Sandbox”)

Annex Email: HKMA E-mail Alert of 06 March 2026 (05:00 p.m. HKT)

Document Information

Title: Annex - Arrangement for Generative A.I. Sandbox++ (“Sandbox”)

Type: Annex

URL: https://brdr.hkma.gov.hk/eng/doc-ldg/docId/20260305-3-EN

Email Received: 2026-03-06 19:30

Summary Created: 2026-03-06 13:01

English Summary (8364 chars)
Management Summary
  • Purpose / Background:
    The HKMA, together with other relevant regulators (the SFC, IA, and MPFA), has launched the Generative AI Sandbox++ (Sandbox++) to facilitate the development, testing, and piloting of innovative AI and Generative AI solutions within the financial industry. The initiative aims to provide early supervisory feedback and share good practices, focusing on areas such as risk management, anti-fraud, and customer experience.
  • One-line conclusion (what changed / what needs to be done):
    Regulated entities can now apply to join the Generative AI Sandbox++, which provides a controlled environment for testing AI/GenAI innovations with supervisory guidance and technical support, requiring adherence to specific application and participation guidelines.
  • Key Changes:
  • Expanded scope to include Generative AI (GenAI) alongside broader AI applications.
  • Focus on specific use cases: enhancing risk management, anti-fraud measures, and customer experience.
  • Emphasis on AI safety and risk management components (bias, explainability, monitoring) as mandatory for all use cases.
  • Encouragement of "AI vs. AI" strategies for validating AI outputs.
  • Prioritization of innovative, complex solutions with significant industry impact.
  • Encouragement of data minimization techniques like masking or tokenization.
  • Clear application procedures and specific contact points for different regulated institutions.
  • Participants will receive supervisory feedback and engage in knowledge-sharing events.
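The AI safety components listed above (bias detection and mitigation, output monitoring) can be made concrete with a small check. The sketch below is illustrative only and is not part of the HKMA arrangement; the data, group labels, and the 0.2 threshold are hypothetical:

```python
# Illustrative sketch only: a minimal demographic-parity check of the kind a
# "bias detection" component might run over model decisions. Data, groups,
# and threshold are hypothetical, not taken from the HKMA annex.
from collections import defaultdict

def approval_rates(decisions):
    """decisions: iterable of (group, approved) pairs -> approval rate per group."""
    totals = defaultdict(int)
    approved = defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        if ok:
            approved[group] += 1
    return {g: approved[g] / totals[g] for g in totals}

def parity_gap(decisions):
    """Largest difference in approval rate between any two groups."""
    rates = approval_rates(decisions)
    return max(rates.values()) - min(rates.values())

decisions = [("A", True), ("A", True), ("A", False), ("A", True),
             ("B", True), ("B", False), ("B", False), ("B", False)]
gap = parity_gap(decisions)  # 0.75 for group A vs 0.25 for group B -> 0.5
flagged = gap > 0.2          # route to human review above a chosen threshold
```

A real bias-monitoring framework would add statistical significance tests and track the gap over time; this only shows the shape of the check.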
  • Key Dates / Deadlines:
    Applications may be submitted on a rolling basis. Participation in the sandbox typically lasts six to eight months after selection, with possible extensions.
  • Applicability / Impact scope:
    Applicable to Authorized Institutions under the Banking Ordinance, Licensed Corporations under the Securities and Futures Ordinance, Authorized Insurers and Licensed Insurance Broker Companies under the Insurance Ordinance, MPF Approved Trustees and Principal Intermediaries under the MPF Schemes Ordinance, and Stored Value Facility Licensees under the Payment Systems and Stored Value Facilities Ordinance.
  • Recommended management actions:
  • Assess potential AI/GenAI use cases within your institution that align with the Sandbox's focus areas.
  • Review the application requirements and evaluation criteria to prepare a strong proposal.
  • Identify and engage potential technology partners early, although partnering is not mandatory.
  • Allocate necessary internal resources for project management and technical execution if selected.
  • Familiarize teams with the data minimization and AI safety principles outlined.
  • Plan for active participation in sandbox events and knowledge sharing.
  • Understand that participation is free, but internal costs for resources and partners are the institution's responsibility.
Detailed Summary
  1. Document overview (nature, purpose, scope)
    This document outlines the Arrangement for Generative A.I. Sandbox++ (Sandbox++), established by the HKMA and other relevant regulators. Its primary purpose is to support the development, testing, and piloting of innovative AI and Generative AI (GenAI) solutions in the financial industry. The scope includes providing early supervisory feedback and sharing good practices derived from sandbox trials, focusing on use cases that enhance risk management, anti-fraud measures, and customer experience, while also encouraging broader societal benefits.
  2. Main requirements
  • Solution Focus: Use cases should focus on enhancing risk management (e.g., creditworthiness, investment suitability, listing document review, underwriting, claims forecasting), anti-fraud measures (e.g., deepfake detection, fraudulent message identification, forged document detection, claim anomaly review), and customer experience (e.g., advanced chatbots, real-time claim updates). Broader societal/economic benefits are also encouraged (e.g., climate risk, financial inclusion).
  • AI Safety & Risk Management: All use cases must incorporate AI safety and risk management components, including bias detection/mitigation, explainable AI, and frameworks for AI output monitoring/evaluation.
  • "AI vs. AI" Strategies: Participants are encouraged to explore using AI to validate, safeguard, and enhance the accuracy of AI outputs.
  • Data Minimization: Adherence to data minimization is encouraged, using techniques like data masking or tokenization. Participants can leverage public, anonymized, or synthetic data, focusing on validating workflows and risk controls under realistic conditions.
  • Data Security: Participants must review and implement adequate data security controls, including encryption and access controls, for data used within the Sandbox. A secure data transfer and access mechanism will be provided.
  • Innovation & Complexity: Priority is given to solutions demonstrating a significant level of innovation, complexity, and potential for substantial industry impact.
  • Fair Use: Solutions must be designed to be used in a fair, responsible, and ethical manner.
  • Application Submission: Applicants must provide detailed information including high-level design, applicable models, risk assessments, and technology partner details.
  • Participant Expectations: Accepted participants must allocate sufficient internal resources, provide regular progress updates, actively participate in events, and submit a final report upon conclusion.
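As an illustration of the data minimization techniques the Arrangement encourages (masking, tokenization), the following minimal Python sketch shows one possible approach; the key handling, field names, and token format are assumptions, not anything prescribed by the HKMA:

```python
# Illustrative sketch only: masking and tokenization for data minimization.
# The secret key, field, and token length are hypothetical; in practice the
# key would live in a managed secrets store, not in source code.
import hashlib
import hmac

SECRET_KEY = b"replace-with-a-managed-secret"  # hypothetical key

def mask_account(account_no: str, keep: int = 4) -> str:
    """Masking: hide all but the last `keep` characters."""
    return "*" * (len(account_no) - keep) + account_no[-keep:]

def tokenize(value: str) -> str:
    """Tokenization: replace the value with a keyed, irreversible token.
    The same input always yields the same token, so record joins still work
    without exposing the raw value."""
    return hmac.new(SECRET_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

masked = mask_account("1234567890")  # '******7890'
token = tokenize("1234567890")       # stable 16-hex-char token, no raw data
```

Keyed hashing (HMAC) rather than a plain hash is used here so tokens cannot be reversed by brute-forcing common values without the key; vault-based tokenization, which allows controlled detokenization, is another common design.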
  3. Key changes (vs previous requirements)
    This iteration explicitly includes Generative AI (GenAI) and expands the focus areas to encompass more sophisticated applications within risk management, anti-fraud, and customer experience. It places a stronger emphasis on mandatory AI safety and risk management components, encourages advanced validation techniques like "AI vs. AI," and promotes data minimization strategies. The application process and contact points have also been clearly delineated for different regulatory bodies.
  4. Important dates & transition
    There are no specific application deadlines mentioned, suggesting a continuous application window. The typical duration of a sandbox trial, including preparation and reporting, is six to eight months post-selection, with provisions for extensions on a case-by-case basis.
  5. Impact and risks (operations/compliance/IT/data/reporting)
  • Operational Impact: Requires allocation of significant internal resources (project management, technical execution), coordination with technology partners, and active participation in events.
  • Compliance Impact: Necessitates adherence to AI safety, risk management, data minimization, and data security principles. Post-sandbox implementation requires following established procedures for new technology adoption.
  • IT/Data Impact: Demands robust data security controls, potentially involving new data sanitization techniques and secure data transfer mechanisms. Trials are conducted in a secure environment at Cyberport's A.I. Supercomputing Centre.
  • Reporting Impact: Participants must provide regular progress updates and submit a comprehensive final report detailing outcomes and learnings.
  6. Compliance action checklist (practical steps)
  • Internal Assessment: Identify potential AI/GenAI use cases aligned with Sandbox objectives.
  • Proposal Development: Prepare detailed application including design, models, and risk assessments.
  • Technology Partner Identification: Engage with technology vendors if external expertise is needed.
  • Resource Allocation: Secure necessary internal human and technical resources.
  • Risk Mitigation Planning: Develop strategies for AI safety, risk management, and data security.
  • Data Strategy: Plan for data anonymization, minimization, and secure handling.
  • Application Submission: Submit application through the designated channels for your institution type.
  • Engagement Plan: Prepare for active participation in sandbox events and knowledge sharing.
  • Reporting Framework: Establish a process for tracking progress and preparing the final report.
  7. Appendices/attachments summary
    No appendices or attachments were provided in the document content.
Chinese Summary (3412 chars)
Management Summary
Purpose / Background

The Hong Kong Monetary Authority (HKMA) has launched the Generative A.I. Sandbox++ to support the financial industry in developing, testing, and piloting innovative AI and Generative AI solutions, and to provide early regulatory guidance and sharing of good practices.

One-line conclusion

The HKMA has established the GenAI Sandbox++ to encourage financial institutions to trial innovative AI/GenAI applications in risk management, anti-fraud, and customer experience, with regulatory coordination and technical support.

Key Changes
  • Expanded scope of AI/GenAI applications, covering risk management, anti-fraud, customer experience, and related areas.

  • Participants are encouraged to adopt "AI vs. AI" strategies to validate the accuracy and robustness of AI outputs.

  • Emphasis on the data minimization principle, encouraging data sanitization techniques such as data masking or tokenization.

  • Participants must incorporate AI safety and risk management components, including bias detection and mitigation and explainable AI.

  • Applications will be assessed on the solution's level of innovation, complexity, expected contribution to the industry, and fair-use principles.

  • Centralized technical support, supervisory guidance, and industry exchange opportunities will be provided.

  • Trials will be conducted in a secure environment at Cyberport's AI Supercomputing Centre.

Key Dates / Deadlines

The document does not state a specific application deadline, but notes that a sandbox trial generally lasts six to eight months and may be extended as circumstances require.

Applicability / Impact scope

All institutions licensed under the Banking Ordinance, the Securities and Futures Ordinance, the Insurance Ordinance, the MPF Schemes Ordinance, and the Payment Systems and Stored Value Facilities Ordinance, together with their technology partners.

Recommended management actions
  • Assess the institution's potential AI/GenAI use cases, particularly in risk management, anti-fraud, and customer experience.

  • Review existing data management and security measures to ensure they meet the data minimization and secure transfer requirements.

  • Consider partnering with technology firms to raise the technical quality and innovativeness of the solution.

  • Prepare detailed project design, risk assessments, and technology partner information for the application.

  • Designate a dedicated team for project management, technical execution, and communication with regulators.

  • Actively participate in knowledge-sharing and exchange events within the sandbox to gain experience and build networks.

  • If selected, cooperate with regulators' progress-update requirements and complete the final report.

Detailed Summary
1) Document overview
Nature

Policy guidance / programme announcement

Purpose

To establish the Generative A.I. Sandbox++ to support the financial industry in developing, testing, and piloting innovative artificial intelligence (AI) and Generative AI (GenAI) solutions. The goal is to provide early, targeted supervisory feedback and to share good practices in AI/GenAI applications.

Scope

Applicable to institutions licensed under the Banking Ordinance, the Securities and Futures Ordinance, the Insurance Ordinance, the MPF Schemes Ordinance, and the Payment Systems and Stored Value Facilities Ordinance, as well as their technology partners. All applications must be submitted by a regulated institution.

2) Main requirements
  • Focus areas and use cases: Expected to focus on enhancing risk management, anti-fraud measures, and customer experience. Examples include: risk management (credit assessment, investment product compliance, underwriting decision optimization, claims forecasting); anti-fraud (detecting deepfake scams, identifying fraudulent messages, checking customer onboarding documents, identifying anomalous claim documents); customer experience (advanced customer service chatbots, real-time claim status updates). Use cases with broader social and economic benefits are encouraged, such as climate risk assessment, financial inclusion, health protection, and long-term financial planning.

  • AI safety and risk management: All sandbox use cases must incorporate AI safety and risk management components, including bias detection and mitigation, explainable AI, and frameworks for monitoring and evaluating AI outputs. Participants are encouraged to explore "AI vs. AI" strategies to validate the accuracy and robustness of AI outputs.

  • General principles: Priority is given to solutions with a significant level of innovation and potentially substantial impact. Data minimization is emphasized; data sanitization techniques (such as data masking or tokenization) are encouraged to reduce the risk of data leakage. Public, anonymized, or synthetic data may be used where appropriate. AI solutions must be trained and tested under realistic conditions. A secure data transfer and access mechanism will be provided, and participants must implement adequate data security controls (such as encryption and access controls). The extent of AI/GenAI-specific risk mitigation measures is a key factor in project selection.

  • Application process: Applicants must provide detailed information, including high-level design, applicable models, risk assessments, and technology partner details. Regulators may request additional information. Processing time depends on use-case complexity, the quality of the information provided, and responsiveness.

  • Participant expectations: Institutions admitted to the sandbox must allocate sufficient internal resources, report progress to regulators regularly, actively participate in sandbox events and knowledge sharing, and submit a final report at the end of the trial detailing the results and lessons learned.

  • Sandbox collaboration and facilitation events: Sandbox collaboration and other facilitation events will be organized to support use-case development. Institutions and technology vendors will be grouped by interest and expertise. The events help institutions identify and partner with technology vendors.
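The "AI vs. AI" strategy described above can be sketched as a validation loop in which a second model checks the first model's output before release. The sketch below is illustrative only; both "models" are rule-based stand-ins for real AI/GenAI systems, and the claim fields are hypothetical:

```python
# Illustrative sketch only: an "AI vs. AI" validation loop in which a second
# model cross-checks the first model's draft before it is released.
def generator_model(claim: dict) -> str:
    """Stand-in for a GenAI model drafting a claim-status reply."""
    return f"Claim {claim['id']}: estimated payout HKD {claim['payout']}"

def validator_model(claim: dict, draft: str) -> list[str]:
    """Stand-in for a second model that checks the draft against the record."""
    issues = []
    if str(claim["id"]) not in draft:
        issues.append("claim id missing from draft")
    if str(claim["payout"]) not in draft:
        issues.append("payout in draft does not match the record")
    return issues

def answer_with_validation(claim: dict) -> str:
    draft = generator_model(claim)
    issues = validator_model(claim, draft)
    # Drafts that fail validation are escalated to human review, not released.
    return draft if not issues else "ESCALATED: " + "; ".join(issues)

result = answer_with_validation({"id": 42, "payout": 1000})
```

In a real deployment the validator would typically be a separately trained model (or the same model prompted as a critic), with escalations routed into the institution's existing review workflow.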

3) Key changes
  • GenAI Sandbox++ is an expansion of the predecessor sandbox programme; the "++" signals a broader scope and deeper support.

  • "AI vs. AI" strategies are explicitly encouraged, providing a concrete mechanism for validating AI outputs.

  • The data minimization requirement is more explicit, with specific data sanitization techniques suggested.

  • The application assessment criteria are more granular, incorporating "solution complexity" and "fair-use principles".

  • The range of eligible institutions has been broadened to cover MPF trustees, SVF licensees, and others.

4) Key dates and transition arrangements
  • The document does not state a specific application deadline.

  • Once selected, a sandbox trial generally runs for six to eight months (including preparation and final report writing).

  • The trial period may be extended as circumstances require.

5) Impact and risks for the institution
  • Operational impact: Requires investment of staff, time, and potential commercial partnership costs. Setting up the trial environment and integrating with technology partners may pose operational challenges.

  • Compliance impact: Participants must ensure their AI/GenAI solutions meet existing regulatory requirements and observe data security and privacy rules within the sandbox. Trial results will inform future technology adoption decisions, so a compliant transition must be ensured.

  • IT/data impact: Data confidentiality, integrity, and security must be ensured. Technical implementation and testing may require adjustments to existing IT infrastructure. Data sanitization and access permissions must be handled carefully.

  • Reporting impact: Progress reports and a final report summarizing trial results and key learnings must be submitted on time.

6) Compliance action checklist
  • Identify potential AI/GenAI use cases, focusing on risk management, anti-fraud, and customer experience.

  • Assess the institution's existing AI/GenAI capabilities, data infrastructure, and risk management framework.

  • Review and ensure compliance with the data minimization principle; research and plan the implementation of data sanitization techniques.

  • Plan AI safety and risk management components (e.g., bias detection, explainability) and consider "AI vs. AI" validation strategies.

  • Identify and evaluate potential technology partners.

  • Prepare detailed application materials, including high-level design, models, risk assessments, and technology partner information.

  • Designate a project lead and a dedicated team.

  • Submit the application to the relevant regulator.

  • During the sandbox trial, report progress regularly and actively participate in knowledge-sharing events.

  • After the trial, submit the final trial report.