RIA Guide to AI

See the guide below to learn more about AI, regulatory considerations, and tips for how to begin using AI.

Overview

In the past year, discussions around artificial intelligence (AI) have increased at a meteoric rate. As ChatGPT has become more popular, and as other new AI products have emerged, registered investment advisers, as well as the financial services industry, have raced to figure out what AI means for them. And for many, the idea of AI raises more questions than answers: What is AI? How can it benefit our firm? How can employees utilize AI? Should employees even be allowed to use AI? What are the associated risks?

From business opportunities to regulatory considerations to associated risk, there is a lot to consider.

This guide is intended to provide investment advisers with a solid understanding of AI, regulatory considerations, and tips for how they can begin to use AI—including how to update their compliance programs.

What is AI?

AI is a machine-based system that can, for a given set of human-defined objectives, make predictions, recommendations, or decisions influencing real or virtual environments.1 Artificial intelligence systems use machine and human-based inputs to:

  • Perceive real and virtual environments;
  • Abstract such perceptions into models through analysis in an automated manner; and
  • Use model inference to formulate options for information or action.

AI encompasses machine learning (ML), which is a set of techniques that can be used to train AI algorithms to improve performance on a task based on data.2

Although it may seem as if AI only recently emerged, given the hype in the media, it has actually been around for quite some time. Its recent surge in popularity stems from increased computing power, which allows modern systems to analyze very large amounts of data quickly. In fact, many automated customer service platforms have been using machine learning for years.

How Does AI Work?

The key to all machine learning is a process called training. For language-based AI, training relies on natural language processing (NLP), which takes human language and converts it into data a computer can understand. NLP starts by breaking down text into small pieces called tokens. The program then searches the data it has been given for patterns that help it carry out its instructions. Humans provide feedback to help teach the computer which patterns are accurate. For instance, when you enter a prompt in a search bar and the computer provides an answer along with a question such as, “Was this information helpful?”, your response is an example of how the computer learns and refines its patterns. The result is a trained AI model, shaped by the data and feedback provided.
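To make the idea of tokens concrete, here is a toy illustration in Python. It simply splits a sentence into word tokens and assigns each a number; real LLM tokenizers (such as byte-pair encoding) are far more sophisticated, but the basic idea of turning text into numbers a computer can process is the same.

```python
# Toy illustration of tokenization: converting text into numeric
# tokens. Real tokenizers are much more sophisticated than this.

def tokenize(text):
    """Split text into lowercase word tokens."""
    return text.lower().split()

def build_vocab(tokens):
    """Map each unique token to an integer id."""
    return {tok: i for i, tok in enumerate(sorted(set(tokens)))}

text = "Was this information helpful?"
tokens = tokenize(text)          # the word pieces
vocab = build_vocab(tokens)      # token -> number lookup
ids = [vocab[t] for t in tokens] # the numbers a model actually sees

print(tokens)
print(ids)
```

The model never sees the words themselves, only the numeric ids; patterns are learned over those numbers.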

Popular chatbots, such as ChatGPT, are a type of AI model called Large Language Models (LLMs). LLMs are trained on huge volumes of text from the internet, which is why giant tech companies like Microsoft, Google, and Meta are in this space. It can be helpful to think of a chatbot as a parrot. It can mimic and repeat words it has heard, without having a full understanding of their meaning.

The graphic below demonstrates how an LLM works. It takes large amounts of unstructured and structured data (such as Word and Excel files) and breaks them down into smaller chunks called tokens. These tokens are analyzed for patterns, and the patterns are stored in a database that the LLM uses to answer questions about your files.
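The document-question flow described above can be sketched in a few lines of Python. This is a simplified stand-in: production systems use vector embeddings and a vector database to find relevant chunks, whereas this sketch uses simple word overlap, and the sample policy text is purely hypothetical.

```python
# Minimal sketch of the flow above: split files into chunks,
# index them, and retrieve the chunk most relevant to a question
# so the LLM can answer from it. Word overlap stands in for the
# vector-embedding search real systems use.

def chunk(text, size=8):
    """Break a document into chunks of `size` words each."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]

def retrieve(question, chunks):
    """Return the chunk sharing the most words with the question."""
    q = set(question.lower().split())
    return max(chunks, key=lambda c: len(q & set(c.lower().split())))

# Hypothetical firm document, for illustration only.
doc = ("The firm adopted its AI policy in 2024. "
       "Employees must not enter client data into public chatbots. "
       "All AI outputs are reviewed before external publication.")

chunks = chunk(doc)
best = retrieve("Can employees enter client data into chatbots?", chunks)
print(best)  # the chunk that would be passed to the LLM as context
```

The retrieved chunk, not the entire document, is what gets handed to the LLM alongside the question, which is how these systems answer questions about files far larger than any single prompt.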

[Graphic: Understanding the Basics: How AI and Large Language Models Work]

Regulatory Considerations and SEC Exams

With the rise in popularity of AI, it makes sense that many advisers are eager to take advantage of it for their businesses. But there are several important items that advisers must consider before using AI.

Advisers process a lot of sensitive data and are legally obligated, as fiduciaries, to protect it. Using public LLMs could expose sensitive client or firm information. SEC regulators are paying close attention to AI usage, and they’ve started to ask about it in exams. In June 2023, the SEC proposed a new rule regarding predictive analytics and similar technologies, including AI.3

At a high level, the proposed rule includes the following:

  1. Identification, elimination, or neutralization of the effect of conflicts of interest

A firm must review any current or future use of covered technologies in conjunction with investor interactions to identify conflicts of interest that might arise from that use, and further evaluate whether any such conflict places the firm’s interest ahead of investors’ interests. “Covered technology” includes a firm’s use of analytical, technological, or computational functions, algorithms, models, correlation matrices, or similar methods or processes that optimize for, predict, guide, forecast, or direct investment-related behaviors or outcomes of an investor.4

If it is determined that the firm’s interest is placed ahead of the investors’ interest, the firm must eliminate, or neutralize the effect of, the conflict of interest.

  2. Policies and procedures

If a firm uses covered technology, policies and procedures must be in place to comply with the proposed rules. The policies and procedures should include detailed steps of the process to evaluate the use of a covered technology and how the effect of any conflicts of interest are eliminated or neutralized.

  3. Recordkeeping

Books and records related to the proposed rule’s requirements would also have to be maintained.

While there has been significant industry pushback, it’s clear that AI will remain a top priority for the SEC, which has already asked AI-related questions in exams.

How Advisers Might Use AI

There are several ways advisers might use AI to support business goals. Examples might include:

  • To help draft marketing materials
  • To create commonly used frameworks and templates (e.g., meeting minutes)
  • To help summarize large amounts of data
  • To assist in drafting client and prospect correspondence
  • To conduct industry research

Tips for Evaluating AI Products

There are ways to utilize AI, while securing your data and meeting regulatory requirements (including the SEC’s Books and Records Rule). Consider these tough questions before approving an AI product for your firm:

  • Is my client data secure?
  • Is my client data being used to train the model?
  • Are my employees following our compliance procedures for acceptable AI use? Am I able to monitor that activity and demonstrate compliance to regulators?
  • Does the AI product retain my data?

It’s important to remember that if you are not paying for the product, then you ARE the product. Make sure you are taking the right steps with AI to keep your client information safe.

Tips for Updating Your Compliance Program

To update your compliance program, we recommend starting with understanding your current AI usage. This will help determine what changes you may need to make to your current program.

Our recommendations are below:

  • Make a list that outlines how your firm is currently using AI. First, discuss AI usage to gain an understanding of how your firm is currently utilizing AI, including:
    • How it is used
    • What platforms are used
    • What information is input
  • Update policies and procedures. Next, begin updating policies and procedures to incorporate acceptable and prohibited uses of AI, predictive analytics, and related technologies.
  • Establish acceptable uses for your firm. Acceptable uses may include:
    • To translate language, summarize text, and to create certain content
    • To enhance communication and collaboration
    • In code and application development
    • To conduct research
  • Establish prohibited uses. These may include:
    • Using AI to process personally identifiable information
    • Using sensitive, proprietary, or confidential information in AI prompts
    • Using AI for personal purposes on a work device
  • Set guidelines for employees.
    • Conduct vendor due diligence on any platforms used.
    • Review privacy policies and agreements. Consider engaging assistance if you do not have expertise internally.
    • If possible, opt out of allowing AI tools to use any personal or firm data to train their models.
    • Review outputs from AI tools for reasonableness prior to use or external publication.
    • When using AI tools, confirm that the usage does not create a conflict of interest between the firm and its clients (and/or investors).
    • Consider implementing a testing program to evaluate outputs.
  • Hold employee training. Educate employees on best practices for using AI, including restrictions on uploading confidential information to large language models.
    • Provide ample time for questions, as this new technology could prompt a lot of questions.
    • Ensure employees understand best practices, including acceptable and prohibited uses.
  • Conduct vendor due diligence. Conduct thorough third-party reviews.
    • Make sure you understand the difference between how the AI platform processes personally identifiable information vs. how the model sources (and uses) information, including information that is entered in a chat.
  • Implement your testing program.
    • Review your risk assessment and update it to reflect the firm’s use of AI.
    • Determine and assign responsibility for testing.
    • Evaluate AI outputs and remind employees to follow acceptable uses when inputting data.

Conclusion

While many advisers, particularly those in a compliance role, may be inclined to avoid AI altogether, there are ways to utilize the benefits of AI. With a sound compliance program in place, and regular, thorough employee training, it is not only possible, but can also be beneficial, to utilize AI to help support business goals.