
How Companies and Their Vendors Should Approach the Use of AI

It’s Not Going Away

By Linn Foster Freedman

Consumers have embraced artificial intelligence (AI) tools in their everyday lives since ChatGPT was released to the public last year. Employees are now using AI technology in their workplaces, which creates risk for companies. In addition, third-party vendors are embedding AI technology into their products and services, often without companies’ knowledge, and are using company data to train AI tools.

This article provides practical tips on how to evaluate the use of AI tools within your organization and by third-party vendors, how to minimize the associated risk, and how to approach AI tools as the technology advances.

Although AI technology has existed for decades, it has become mainstream over the past year with the arrival of ChatGPT and its rapid adoption by consumers. When consumers embrace a technology before companies do, it is only a matter of time before they bring that use into the workplace, whether it is approved or not.

Companies are struggling with how to introduce AI tools into their environment, as the risks associated with AI tools have been well-documented. These include copyright infringement, use and disclosure of personal information and company confidential data, bias and discrimination, hallucinations and misinformation, security risks, and legal and regulatory compliance risks.

These risks are real and compelling, especially when employees are sharing company data with AI tools. Once employees upload company data to an AI tool, the tool’s developer may use that data to train its AI model, and the company’s confidential data may become publicly available. Further, many vendors are embedding AI into their products or services, and if you disclose confidential company data to those vendors, they may be using it to train their own AI tools or feeding it to other third-party AI tools.

The risk is daunting but manageable with strategy and planning. Here are some tips on how to wrap your arms around your employees’ use of AI tools in your organization; tips for managing the risk of vendors’ use of AI tools follow in the next section.

 

Tips for Evaluating Your Organization’s Use of AI Tools

1. Don’t put your head in the sand. AI is here to stay, and your employees are already using it. They may not understand the risk, but the tools seem cool, so they are using them and will keep using them. Employees will use any tool that makes their jobs easier; that’s human nature. Embrace this fact and commit to addressing the risk sooner rather than later. Ignoring the issue will only make it worse.

2. Don’t prohibit the use of AI tools in your organization. AI tools can increase efficiency in the workplace and boost business output and profits. Prohibiting their use will put you behind your competition and is a failed strategy. Your employees will use AI tools to make their work lives more efficient, so getting ahead of the risk and communicating with them is essential to evaluating and developing the use of AI in your organization.

3. Find out who the entrepreneurs and AI users are in your organization. Encourage them to bring use cases to your attention so you can evaluate whether those uses are safe and appropriate. Many uses of AI tools do not present risk; evaluate each use case and implement proper governance and guardrails to minimize the risks that do exist.

4. Develop and implement an AI governance program. Although AI tools are developing rapidly, it is essential to have a central program that governs their use, both internally and externally. Assemble an AI governance team from different areas of the organization that is responsible for keeping tabs on where and when AI is used; a process for evaluating uses, tools, and risks; guardrails and contractual measures that reduce the risk; and processes that minimize bias and discrimination and address regulatory compliance and confidentiality. The team will start slowly, but once processes are in place, it will mature and pivot as the technology develops.

5. Communicate with your employees often about the risks of using AI tools, the company’s AI governance program, and the guardrails you have put in place. Companies are better than ever at communicating with employees about security risks, particularly email phishing schemes. Use the same techniques to educate your employees about the risks of using AI tools. Employees are using ChatGPT because they saw it on the news or a friend told them about it. Use your corporate communications to continually educate them about how AI tools may be used in the company and why it is important that they follow the governance program you have put in place. Many employees have no idea how AI tools work or that they could inadvertently disclose confidential company information when they use them. Help them understand the risks, make them part of the team, and guide them on how to use AI tools to improve their efficiency.

6. Keep the governance program flexible and nimble. No one likes another committee meeting or the extra work of implementing another process. Nonetheless, this one is important, so don’t let it get too bogged down or mired in bureaucracy. Start by mapping the uses of AI in the organization, evaluating those uses, and learning from each review so that future evaluations become more efficient. Put processes in place that can be replicated and eventually automated. The hardest and most important work will be setting up the program, but it will get more efficient as you learn from each evaluation. In that sense, the governance program is like a mini-AI tool in and of itself.

7. Be forward-thinking. Technology develops rapidly, and business organizations can hardly keep up, so this is an area on which to stay focused. Start by making someone responsible for staying abreast of the articles, research, laws, and regulations that will matter in developing the governance program. Right now, a great place to start is the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It offers a view of how regulation and compliance around AI are likely to develop and can serve as a preview of what’s to come for your governance program.

8. Evaluate the risk of vendors’ use of AI tools. The AI governance team should be intimately involved in evaluating vendors’ use of AI tools, which is discussed in more detail below.

 

Tips for Evaluating the Risk of Vendors’ Use of AI Tools

1. Carefully map which vendors are using AI tools. It may not be readily apparent which of your vendors use AI tools in their products or services. Team up with your business units to determine which vendors are or may be using AI tools to process company data. Then evaluate what data is disclosed to and used by those vendors and determine whether any guardrails need to be put in place with each vendor.

2. Implement a process for business units to question vendors upfront about their use of AI tools. Business units are closest to the vendors that provide services to them. Give the business units questions to ask when pursuing a relationship with a vendor so that the risk of AI tool use can be evaluated at the start. The AI governance team can then assess that use before contract negotiations begin.

3. Insert contractual language addressing the disclosure and use of company data and the use of AI tools. Companies that do not already have one in place may wish to develop an information security addendum (ISA) for any vendor with access to the company’s confidential data. As the AI governance team evaluates the disclosure and use of company data by new vendors and the use of AI tools to process that data, vendors should be questioned about the tools they use, the security measures protecting company data (including against unauthorized use or disclosure through AI tools), and the contractual provisions governing the use of AI. Contractual language should be clear and concise about the vendor’s obligations and the remedies for breach of those obligations, including indemnification. This language can be inserted in the ISA or the main contract.

4. Evaluate and map existing vendors’ use of AI tools. Some vendors with whom you have already contracted may be using AI tools to process confidential company information without your knowledge. Prioritize the vendors that pose the highest risk of processing confidential company data with AI tools and review the existing contracts. Where appropriate, request an amendment putting contractual language in place that addresses the processing of confidential company information with AI tools.

5. Add the evaluation of AI tools to your existing vendor-management program. If you have a vendor-management program in place, add the use of AI tools to it going forward. If you don’t, it’s time to develop one.

 

Conclusion

Now is the time to implement a strategy and plan for the use of AI tools within your organization and externally by your vendors. The task seems daunting, but the risk is clear and will remain until you address it. The tips in this article can help you start taking control of AI use in your organization and by your vendors and minimize the risk, so that you can use AI to make your business more efficient and profitable.

 

Linn F. Freedman is a partner and chair of the Data Privacy + Cybersecurity Team at Robinson+Cole.