AI Dos and Don’ts

Although Artificial Intelligence (AI) brings tremendous opportunity, it also presents significant risk—particularly to those with access to sensitive client information and personally identifiable information (PII).   

Advisers and investment companies process significant amounts of sensitive data and are legally obligated, as fiduciaries, to protect it. That’s why it is critical to proceed with caution when using AI for business-related purposes.

If your firm has begun using AI, or is considering doing so, you may find the following AI Dos and Don’ts useful.

DO: 

  1. DO use public or generally known information in large language model (LLM) prompts. 
    • It is important to remember that LLMs may not have the most up-to-date information and may not be able to assist with recently published materials.  
  2. DO use LLMs for language translation, text summarization, and certain content creation (provided that such text does not include confidential information). 
  3. DO use LLMs to enhance communication and collaboration, such as by generating non-confidential meeting minutes or assisting in drafting email and prospect communications.
  4. DO review inputs and outputs from the LLM prior to use or external publication. 
  5. DO follow firm policy when submitting a new LLM for use to ensure the necessary approval is obtained.  
  6. DO contact the appropriate party to determine whether disclosure is required when distributing content prepared through the LLM outside of the United States.
  7. DO opt out of letting generative AI tools use the data you feed them to train their models.
  8. DO conduct due diligence on LLMs prior to usage.  

DON’T: 

  1. DON’T use sensitive, proprietary, or confidential information in LLM prompts, including, but not limited to, client information, trading information, and proprietary research.
  2. DON’T use LLMs to engage in any activity that could compromise the security or integrity of the firm, including attempting to gain access to unauthorized data sets or systems.
  3. DON’T use LLMs in ways that violate applicable laws or regulations, including data protection and privacy laws.
  4. DON’T use LLMs to create or distribute harmful, offensive, or abusive content.

Still have questions?

AI is a rapidly changing technology, and it raises a lot of questions, particularly for advisers and those with access to sensitive information. At a minimum, Fairview recommends that firms conduct training to educate employees on AI-related risks and on permitted and prohibited AI usage. If you have any questions about AI or related best practices, let us know and one of our regulatory experts will be in touch.