The pace of innovation in AI and its ensuing applications is dizzying. Organizations rush to embrace innovation with an eye toward good governance, but striking the right balance between the two is critical. Focus too much on the risks, and nothing moves. Focus too little on them, and one misfiring application can land the creator — and perhaps the organization — in hot water. So how should we balance this? Here are some considerations from our organization.

We adhere to five general principles for the responsible design, development and deployment of all AI systems:

Reliability. AI systems should be accurate, reliable, and measured against well-defined criteria. AI systems should be monitored and allow for timely human intervention.

Transparency. The use of AI systems should be transparent and explainable. The logic, factors and methodologies behind decisions made by AI systems should be understandable and accessible.

Fairness. AI systems should not discriminate or result in discrimination against individuals or groups of individuals based on any characteristic protected by applicable law.

Privacy. AI systems rely on large amounts of information. The use and deployment of AI systems must comply with applicable law and ethical principles relating to privacy.

Security. AI systems should be designed and deployed in a manner that prevents unauthorized access, theft, or misuse.

One principle worthy of additional discussion is Privacy. The ethical and legal considerations surrounding the collection, storage, and use of personally identifiable information (PII) in AI systems are governed by our organization’s data privacy framework. This framework is modeled on the GDPR and includes foundational elements such as

• Data minimization

• Anonymization and pseudonymization where possible

• Appropriate notice and consent (where applicable) and strong purpose limitation

• Data Protection Impact Assessments (DPIAs) for high-risk processing activities

We process client data, including PII, only for the purpose of providing services and do not use or otherwise process client data for any other purpose. We use DPIAs proactively to identify and mitigate any privacy risks related to high-risk processing, including activities such as automated decision-making and profiling.
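To make data minimization and pseudonymization concrete, here is a minimal sketch in Python using only the standard library. The field names, record shape, and key handling are purely illustrative assumptions, not our actual pipeline; in practice a keyed pseudonym secret would live in a managed key vault, never in source code.

```python
import hmac
import hashlib

# Hypothetical secret; in a real system this would come from a key vault.
SECRET_KEY = b"replace-with-a-managed-secret"

def pseudonymize(value: str) -> str:
    """Replace a direct identifier with a stable, keyed pseudonym.

    Using an HMAC (rather than a plain hash) means the mapping cannot be
    reversed or brute-forced without the key; destroying the key later
    effectively anonymizes the data.
    """
    return hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()

# Illustrative raw record containing PII.
record = {"email": "jane.doe@example.com", "purchase_total": 42.50}

# Data minimization: keep only the fields needed for the stated purpose,
# and swap the direct identifier for a pseudonym before further processing.
minimized = {
    "user_id": pseudonymize(record["email"]),
    "purchase_total": record["purchase_total"],
}
```

The keyed pseudonym stays stable across records, so downstream analysis can still link activity by the same user without ever handling the underlying email address.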

Another concern is the regulatory landscape. The EU, US federal and state governments, and regulatory agencies are all moving to regulate various aspects of AI. Continuous monitoring of the regulatory landscape is critical. But who should do it? Our answer is to share this responsibility among multiple stakeholders and teams.

• The AI Advisory Team is an interdisciplinary working group composed of members of the Legal, IT, and Information Security Teams. It meets on a regular basis to consider the latest information and risks associated with AI, evaluate and onboard new AI tools, and formulate our approach to the evolving AI landscape.

• The Legal Team monitors developments in the AI regulatory landscape to ensure that our use and deployment of AI complies with the requirements of applicable law and standards (e.g., the EU Artificial Intelligence Act and the NIST AI Risk Management Framework).

• The IT-Governance Risk Compliance Team conducts comprehensive vendor risk assessments to ensure that AI tools meet our and our clients’ security and reliability standards.

• The Legal Team also ensures that all AI vendor contracts contain appropriate confidentiality, security, privacy and intellectual property ownership commitments.

Moreover, we maintain an AI intranet page with guidelines and other resources designed to promote the responsible and secure use of AI tools, covering both AI we develop in-house and third-party AI (e.g., OpenAI).

How would you add to, subtract from, or otherwise strengthen our organizational approach? Does your organization do something particularly clever that you can share in the comments?