AI Dos and Don’ts
July 30, 2024
Although Artificial Intelligence (AI) brings tremendous opportunity, it also presents significant risk—particularly to those with access to sensitive client information and personally identifiable information (PII).
Advisers and investment companies process large volumes of sensitive data and, as fiduciaries, are legally obligated to protect it. That’s why it’s critical to proceed with caution when using AI for business purposes.
If your firm has begun using AI, or is considering doing so, you may find the following AI Dos and Don’ts useful.
DO:
- DO use public or generally known information in large language model (LLM) prompts.
- Keep in mind that LLMs may not have the most up-to-date information and may be unable to assist with recently published materials.
- DO use LLMs for language translation, text summarization, and certain content creation (provided that such text does not include confidential information).
- DO use LLMs to enhance communication and collaboration, such as by generating non-confidential meeting minutes or assisting in drafting correspondence (email) and prospect communications.
- DO review inputs and outputs from the LLM prior to use or external publication.
- DO follow firm policy when submitting a new LLM for use to ensure the necessary approval is obtained.
- DO contact the appropriate party to determine whether disclosure is required before distributing content prepared with an LLM outside of the United States.
- DO opt out of letting generative AI tools use the data you feed them to train their models.
- DO conduct due diligence on LLMs prior to usage.
DON’T:
- DON’T use sensitive, proprietary, or confidential information in LLM prompts, including but not limited to client information, trading information, and proprietary research.
- DON’T use LLMs in any activity that could compromise the security or integrity of the firm, including attempting to gain access to unauthorized data sets or systems.
- DON’T use LLMs in ways that violate applicable laws or regulations, including data protection and privacy laws.
- DON’T use LLMs to create or distribute harmful, offensive or abusive content.
Still have questions?
AI is a rapidly changing technology, and it raises many questions, particularly for advisers and others with access to sensitive information. At a minimum, Fairview recommends that firms conduct training to educate employees on AI-related risks and on permitted and prohibited AI usage. If you have any questions about AI or related best practices, let us know and one of our regulatory experts will be in touch.