Features

It’s Not Going Away

By Linn Foster Freedman

Consumers have embraced artificial intelligence (AI) tools in their everyday lives since ChatGPT was introduced to the public last year. Employees are now using AI technology in their workplaces, which creates risk for their companies. In addition, third-party vendors are embedding AI technology into their products and services, often without companies’ knowledge, and are using company data to teach AI tools.

This article provides practical tips for evaluating the use of AI tools within an organization and by third-party vendors, for minimizing the associated risk, and for approaching the use of AI tools as the technology advances.

Although AI technology has existed for decades, it became mainstream over the past year with the arrival of ChatGPT and the novelty of its use by consumers. When consumers embrace a technology before companies do, it is only a matter of time before they start to migrate that use into the workplace, whether it is approved or not.

Companies are struggling with how to introduce AI tools into their environment, as the risks associated with AI tools have been well-documented. These include copyright infringement, use and disclosure of personal information and company confidential data, bias and discrimination, hallucinations and misinformation, security risks, and legal and regulatory compliance risks.

These risks are real and compelling, especially when employees are sharing company data with AI tools. Once employees upload company data to an AI tool, that data may be used by the AI tool developer to teach its AI model, and the company’s confidential data may now be publicly available. Further, many companies are embedding AI into their products or services, and if you are disclosing confidential company data to vendors, they may be using your data to teach their AI tools or feeding your confidential data to other third-party AI tools.

Linn Foster Freedman

“Companies are struggling with how to introduce AI tools into their environment, as the risks associated with AI tools have been well-documented.”

The risk is daunting, but it is manageable with strategy and planning. Here are some tips on how to wrap your arms around your employees’ use of AI tools in your organization. Tips for managing the risk of vendors’ use of AI tools are addressed later in this article.

 

Tips for Evaluating Your Organization’s Use of AI Tools

1. Don’t put your head in the sand. AI is here to stay, and your employees are already using it. They don’t understand the risk, but the tools seem cool, so they are using them and will continue to do so. They will use any tool that makes their jobs easier; that’s human nature. Embrace this fact and commit to addressing the risk sooner rather than later. Ignoring the issue will only make it worse.

2. Don’t prohibit the use of AI tools in your organization. AI tools can increase efficiency in the workplace and boost business output and profits. Prohibiting their use will put you behind your competition and is a failed strategy. Your employees will use AI tools to make their work lives more efficient, so getting ahead of the risk and communicating with your employees is essential to evaluating and developing the use of AI in your organization.

3. Find out who the entrepreneurs and AI users are in your organization. Encourage the entrepreneurs in your organization to bring use cases to your attention and evaluate whether they are safe and appropriate. There are many uses of AI tools that do not present risks. The use cases should be evaluated, and proper governance and guardrails should be implemented to minimize risks.

4. Develop and implement an AI governance program. While AI tools are developing rapidly, it is essential to have a central program that governs their use, internally and externally. Gather an AI governance team from different areas of the organization and make it responsible for keeping tabs on where and when AI is used; a process for evaluating uses, tools, and risks; guardrails and contractual measures that reduce the risk; and processes that minimize the risks of bias, discrimination, regulatory noncompliance, and loss of confidentiality. The team will start slow, but once processes are in place, it will mature and pivot as the technology develops. (A minimal sketch of what a central register of AI uses might look like appears after this list.)

5. Communicate with your employees often about the risks of using AI tools, the company’s AI governance program, and the guardrails you have put in place. Companies are better now than ever at communicating with employees about security risks, particularly email phishing schemes. Use the same techniques to educate your employees about the risks of AI tools. Many employees are using ChatGPT because they saw it on the news or a friend told them about it, and they have no idea how AI tools work or that they could inadvertently disclose confidential company information when they use them. Use your corporate communications to continually educate employees about using AI tools in the company and why it is important that they follow the governance program you have put in place. Help them understand the risks, make them part of the team, and guide them on how to use AI tools to improve their efficiency.

6. Keep the governance program flexible and nimble. No one likes another committee meeting or extra work to implement another process. Nonetheless, this one is important, so don’t let it get too bogged down or mired in bureaucracy. Start by mapping the uses of AI in the organization, evaluating those uses, and learning from that evaluation to become more efficient in the evaluation process going forward. Put processes in place that can be replicated and eventually automated. The hardest and most important work will be setting up the program, but it will get more efficient as you learn from each evaluation. The governance program is like a mini-AI tool in and of itself.

7. Be forward-thinking. Technology develops rapidly, and business organizations can hardly keep up. This is an area on which to stay focused and forward-thinking. Start by having someone responsible for staying abreast of articles, research, laws, and regulations that will be important in developing the governance program. Right now, a great place to start is with the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It gives a forward-thinking view of the development of regulations and compliance around AI that can be used as a prediction of what’s to come for your governance program.

8. Evaluate the risk of the use of AI tools by vendors. The AI governance team should be intimately involved with evaluating vendors’ use of AI tools, which is discussed in more detail below.
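To make tip 4 more concrete, below is a minimal sketch, in Python, of what a central register of AI use cases might look like. The field names, the triage rule, and the example entry are illustrative assumptions, not a prescribed standard; a spreadsheet or an existing governance tool can capture the same information.

```python
# Hypothetical sketch of an AI use-case register for a governance program.
# Field names, the triage rule, and the sample entry are illustrative only.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class AIUseCase:
    name: str                     # e.g., "Summarize support tickets"
    business_unit: str            # who owns the use case
    tool: str                     # e.g., "ChatGPT", "vendor-embedded model"
    data_shared: str              # what data is sent to the tool
    confidential_data: bool       # does it touch confidential company data?
    personal_data: bool           # does it touch personal information?
    guardrails: list[str] = field(default_factory=list)
    approved: bool = False
    next_review: date | None = None

    def risk_level(self) -> str:
        """Very rough triage rule; a real program would use richer criteria."""
        if self.confidential_data or self.personal_data:
            return "high"
        return "low" if self.guardrails else "medium"

def needs_attention(register: list[AIUseCase]) -> list[AIUseCase]:
    """Surface high-risk or not-yet-approved uses for the governance team."""
    return [u for u in register if u.risk_level() == "high" or not u.approved]

register = [
    AIUseCase("Summarize support tickets", "Customer Service", "ChatGPT",
              "ticket text", confidential_data=True, personal_data=True),
]
for item in needs_attention(register):
    print(item.name, "->", item.risk_level())
```

The point is not the code but the habit it represents: every use case gets recorded, scored the same way, and surfaced for review on a schedule, which is what lets the governance program mature and eventually automate.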

 

Tips for Evaluating the Risk of Use of AI Tools by Vendors

1. Carefully map which vendors are using AI tools. It might not be readily apparent which of your vendors use AI tools in their products or services. Team up with your business units to question which vendors are or may be using AI tools to process company data. Then, evaluate what data is disclosed and used by those vendors and determine whether any guardrails need to be put in place with the vendor.

2. Implement a process with business units to question vendors upfront about their use of AI tools. Business units are closest to the vendors that provide services to them. Give the business units questions to ask when pursuing a business relationship with a vendor so you can evaluate the risk of the vendor’s AI tools at the start. The AI governance team can then evaluate the use before contract negotiations begin. (A toy sketch of this kind of triage appears after this list.)

3. Insert contractual language around the disclosure and use of company data and the use of AI tools. Companies may wish to consider developing an information security addendum (ISA) for any vendor with access to the company’s confidential data if they do not already have one in place. As the AI governance team evaluates the disclosure and use of company data with new vendors and the use of AI tools in processing company data, vendors should be questioned about the tools they use, the security measures that protect company data (including against unauthorized use or disclosure by AI tools), and contractual provisions on the use of AI. Contractual language should be clear and concise about the vendor’s obligations and the remedies for a breach of those obligations, including indemnification. This language can be inserted in the ISA or the main contract.

4. Evaluate and map existing vendors’ use of AI tools. There may be vendors you have already contracted with that are using AI tools to process confidential company information without your knowledge. Prioritize the vendors that pose the highest risk of processing confidential company data with AI tools and review the existing contracts. If applicable, request an amendment that puts appropriate contractual language in place addressing the processing of confidential company information with AI tools.

5. Add the evaluation of AI tools to your existing vendor-management program. If you have an existing vendor-management program in place, add the use of AI tools into the program going forward. If you don’t have an existing vendor-management program in place, it’s time to develop one.
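As a companion to these vendor tips, here is a small sketch, again in Python, of how the answers to upfront vendor questions could be recorded and used to prioritize contract reviews. The questions, the weights, and the vendor names are invented for illustration and should be replaced with whatever your AI governance team actually asks.

```python
# Hypothetical vendor AI-risk triage. The questions, weights, and example
# vendors are illustrative assumptions, not a recommended scoring model.
from dataclasses import dataclass

@dataclass
class VendorAIProfile:
    vendor: str
    uses_ai: bool                  # does the vendor embed AI in its product or service?
    processes_company_data: bool   # does it process our confidential data?
    trains_on_our_data: bool       # does it use our data to teach its models?
    ai_contract_language: bool     # is AI-specific language (or an ISA) already in place?

    def priority(self) -> int:
        """Higher number = review this vendor's contract sooner."""
        score = 0
        if self.uses_ai and self.processes_company_data:
            score += 2
        if self.trains_on_our_data:
            score += 2
        if not self.ai_contract_language:
            score += 1
        return score

vendors = [
    VendorAIProfile("Payroll Provider", uses_ai=True, processes_company_data=True,
                    trains_on_our_data=False, ai_contract_language=False),
    VendorAIProfile("Marketing Analytics", uses_ai=True, processes_company_data=True,
                    trains_on_our_data=True, ai_contract_language=False),
]
for v in sorted(vendors, key=lambda x: x.priority(), reverse=True):
    print(v.vendor, "-> priority", v.priority())
```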

 

Conclusion

Now is the time to implement a strategy and plan around the use of AI tools within your organization and externally by your vendors. It seems daunting, but the risk is clear and will be present until you address it. Hopefully, the tips in this article will help you start taking control of AI use in your organization and by your vendors and minimize the risk, so you can use AI to make your business more efficient and profitable.

 

Linn F. Freedman is a partner and chair of the Data Privacy + Cybersecurity Team at Robinson+Cole.

Law

A Brave New Year

By Lauren C. Ostberg, Esq. and Michael McAndrew, Esq.

 

Artificial intelligence — specifically, natural-language chatbots like ChatGPT, Bard, and Watson — has been making headlines over the past year, whether it’s college writing teachers’ attempts to avoid reading machine-generated essays, the boardroom drama of OpenAI, the SAG-AFTRA strike, or existential anxiety about the singularity.

On the frivolous end of the spectrum, one of the authors of this piece used ChatGPT to find celebrity lookalikes for various attorneys at their firm, and learned that ChatGPT defaults to the assumption that, irrespective of race or gender or facial features, most people (including Lauren Ostberg) look like Ryan Reynolds. On the more serious end, the legislatures of state governments, including those in Massachusetts and Connecticut, have labored over bills that will harness, regulate, and investigate the power of AI.

Lauren Ostberg

“The legislatures of state governments, including those in Massachusetts and Connecticut, have labored over bills that will harness, regulate, and investigate the power of AI.”

In Massachusetts, for example, the Legislature is considering two bills, one (H.1873) “To Prevent Dystopian Work Environments,” and another (S.31) titled “An Act Drafted with the Help of ChatGPT to Regulate Generative Artificial Intelligence Models Like ChatGPT.” The former would require employers using any automatic decision-making system to disclose the use of such systems to their employees and give employees the opportunity to review and correct the worker data on which those systems relied. The latter, sponsored by Hampden County’s state Sen. Adam Gomez, aims to regulate newly spawned AI models.

While the use of AI to draft S.31 is, in its own right, an interesting real-world application of the technology, it is not the most important part of the bill. S.31 proposes a regulatory regime whereby “large-scale generative artificial intelligence models” are required to register with the attorney general. In doing so, AI companies would be required to disclose detailed information to the attorney general, including “a description of the large-scale generative artificial intelligence model, including its capacity, training data, intended use, design process, and methodologies.”

In addition to requiring the registration of AI companies, S.31 (if passed) would require those companies to implement standards to prevent plagiarism and to protect individually identifiable information used as part of the training data. AI companies would also have to “obtain informed consent” before using individuals’ data. To ensure compliance, the bill gives the attorney general enforcement powers and the authority to promulgate regulations consistent with the bill.

While S.31 provides robust protections against using data garnered from citizens of the Commonwealth in programming AI models, it may fail because of the amount of disclosure required from AI companies. As part of a new and fast-moving field, AI companies may be hesitant to disclose their processes, as is required by S.31.

Michael McAndrew

“This proposed legislation is, of course, just the beginning of government’s attempts to grapple with the ‘responsible use’ (an Orwellian term, if ever there was one) of AI and technology.”

Though commendable in its effort to protect creators and citizens, S.31 may ultimately drive AI-based businesses out of the Commonwealth if they fear that their competitively sensitive AI processes will be disclosed as part of the public registry envisioned by S.31. However, the structure of the proposed registry of AI businesses is currently unclear; only time will tell how much information will be available to the public. Time will also tell if S.31 (or H.1873, referenced above) makes it out of committee and into law.

Meanwhile, in Connecticut

This past June, Connecticut passed a law, SB-1103, that recognizes the dystopian nature of a government using AI to make decisions about the treatment of its citizens. It requires that, on or before Dec. 31, 2023, Connecticut’s executive and judicial branches conduct and make available “an inventory of all their systems that employ artificial intelligence.” (That is, it asks the machinery of the state to reveal itself, in part.)

By Feb. 1, 2024, the executive and judicial branches must also conduct (and publicly disclose) an “impact assessment” to ensure that systems using AI “will not result in unlawful discrimination or a disparate impact against specified individuals.” ChatGPT’s presumption, noted above, that every person is a symmetrically faced white man would be much more serious in the context of an automated decision-making system that impacts the property, liberty, and quality of life of Connecticut residents.
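To make the idea of an “impact assessment” concrete: one common screening test for disparate impact is the federal “four-fifths rule,” under which a selection rate for one group that falls below 80% of the highest group’s rate is treated as evidence of adverse impact. SB-1103 does not prescribe this particular test; the short Python sketch below is offered only as an illustration of the kind of arithmetic such an assessment might involve.

```python
# Illustrative screening test for disparate impact (the "four-fifths rule").
# Shown only to make the idea of an impact assessment concrete; SB-1103 does
# not mandate this specific test.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def four_fifths_flag(group_rate: float, reference_rate: float) -> bool:
    """Flag possible adverse impact if a group's rate is under 80% of the reference rate."""
    return group_rate < 0.8 * reference_rate

# Example: an automated screen approves 60% of one group and 30% of another.
print(four_fifths_flag(selection_rate(30, 100), selection_rate(60, 100)))  # True: 0.30 < 0.48
```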

This proposed legislation is, of course, just the beginning of government’s attempts to grapple with the ‘responsible use’ (an Orwellian term, if ever there was one) of AI and technology. Massachusetts has proposed the creation of a commission to address the executive branch’s use of automated decision making; Connecticut’s new law has mandated a working group to consider an ‘AI Bill of Rights’ modeled after a federal blueprint for the same. The results — and the inventory, and the assessments — remain to be seen in the new year.

 

Lauren C. Ostberg is a partner, and Michael McAndrew an associate, at Bulkley Richardson, the largest law firm in Western Mass. Ostberg, a key member of the firm’s intellectual property and technology group, co-chairs the firm’s cybersecurity practice. McAndrew is a commercial litigator who seeks to understand the implications and risks of businesses adopting AI.

Technology

Protecting Yourself from IT Threats

By Charlie Christensen

 

As hackers, organized crime syndicates, and state-backed bad actors aggressively pursue ways to compromise the world’s data, business owners, leadership, and IT professionals continue to seek ways to counter these ever-growing threats to their information technology infrastructure. In this article, I will explore some of these threats, as well as the advancements in anti-virus/malware protection that are working to defend corporate and personal data every minute of every day.

Lastly, I will provide you with some key steps you should take to protect your business and data assets from attack.

Charlie Christensen

“The notion that you are just too small a company to worry about these threats, or that no one wants your data, is a fallacy. Criminals are targeting small companies every day because they are easy targets.”

As someone who understands the threats we as IT professionals see every day, it is my hope that I can use this opportunity to provide the average businessperson with a better understanding of what they should focus on most urgently in today’s technology environment, and how they can better protect their business from being compromised.

• Ransomware: This is every company’s worst nightmare and a topic to which we could dedicate an entire article. In short, ransomware is an extortion scheme that costs businesses billions of dollars per year. It most commonly spreads via malicious email attachments or links, software apps, infected external storage devices, and compromised websites.

Ransomware searches out every computer on the network and seeks to encrypt the data it finds. The only way to get the data back is to pay the extortion, usually via cryptocurrency, which is largely untraceable. Not content with simple extortion, cybercriminals are now adding an additional element to the ransomware scheme.

Attackers will now download your data prior to encryption, and if you refuse to pay, they will threaten to release your data into the public domain. If the thought of this doesn’t lead you to a few sleepless nights, it should.

• Phishing, spear phishing, and whaling attacks: I think by now we all understand phishing. An attacker uses social-engineering techniques, like an enticing-looking link, to get the end user to disclose some form of personal information, such as a Social Security number, credentials, or other sensitive data. Spear phishing, however, is a bit more focused and targeted. A spear-phishing message might seem like it came from someone you know or a familiar company like your bank or credit-card company, a shipping company, or a frequented retailer.

Whaling, on the other hand, goes after high-value targets such as C-level leadership or accounts payable. A whaling attack might look like an email from the CFO asking you to initiate a transfer to pay a large invoice. This is an incredibly common attack vector and one that relies on your team’s ability to identify it. Education and vigilance are your best defense.

• Advanced persistent threats: APTs occur when an intruder gains access to your systems and remains undetected for an extended period. They seek to quietly extract data such as credit-card numbers, Social Security numbers, banking information, and credentials. Detection relies on the ability to identify unusual activity, such as unusual outbound traffic, increased database activity, or network activity at odd times. APTs also frequently involve the creation of backdoors into your network. (A toy version of one such check appears after this list.)

• Insider threats: Although we are fixated on external threats, internal threats are more common and can be equally as damaging. Examples of intentional and unintentional threats include:

Intentional threats, such as employees stealing data by copying or sending sensitive or proprietary data outside the company. This may occur via email/FTP, USB drive, cloud drive (OneDrive, Dropbox, iCloud), or some other means. Often, these happen because someone fails to comply with security protocols that are perceived to be inconvenient or “overkill.”

Unintentional threats might include an employee clicking on a phishing email, responding to a pop-up asking for credentials, not using a strong password, or using the same password for everything. It could also be a system that was not patched, a port that was left open on a firewall, or a user account that was not locked after termination.

• Viruses and worms: Frequently considered ‘old school’ threats, these still exist and can cause tremendous damage. Their purpose is to damage an organization’s systems, data, or network, so users should be careful about clicking on ads, file-sharing sites, links in emails, etc. However, traditional anti-virus software is usually effective at controlling them.

• Botnets: Simply put, a botnet is a collection of devices that have access to the internet like PCs, servers, phones, cameras, time clocks, or other commonly found networked devices. These devices are then infected by malware that allows criminals to use them to launch attacks on other networks, generate spam, or create other malicious traffic.

• Drive-by attacks: These are infected graphics or code on a website that get injected into your computer without your knowledge. They can be used to steal personal information or to deliver trojans, exploit kits, and other forms of malware.
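As one toy illustration of the “unusual outbound traffic” signal mentioned under advanced persistent threats above, the Python sketch below flags a device whose outbound volume is far above its own recent baseline. Real EDR and XDR products combine many richer signals; the seven-day window and the three-sigma threshold here are simplifying assumptions.

```python
# Illustrative-only check for unusual outbound traffic, one of the signals an
# APT hunt might watch. The window and threshold are simplifying assumptions.
from statistics import mean, stdev

def unusual_outbound(history: list[float], today: float, sigma: float = 3.0) -> bool:
    """Flag today's outbound volume if it is far above this host's own baseline."""
    if len(history) < 7:               # not enough history to judge
        return False
    mu = mean(history)
    sd = stdev(history) or 1.0         # avoid a zero threshold on a flat history
    return today > mu + sigma * sd

# Example: a host that normally sends about 1 GB per day suddenly sends 9 GB.
baseline_mb = [1_000, 950, 1_100, 1_050, 980, 1_020, 990]   # MB per day
print(unusual_outbound(baseline_mb, today=9_000))            # True
```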

While this list might seem exhaustive, it represents only a few of the more common attack methods we see daily. It also helps explain the emergence of a new generation of security products and platforms. To better understand how we look at information security, let me borrow one of the examples I commonly use when speaking to businesspeople and groups about building an effective information security program.

Think of information security as an onion. Like an onion, an information security program is composed of layers of protection (firewall, backup, AV, email filtering, etc.) surrounding the core (your data). As we build an information security program, we need to put layers of protection between the threat and the asset we are trying to protect. While the details of an information security program are outside the scope of this article, for the purposes of this discussion you only need to understand that there is no single magic product that can protect you from all threats. Anti-virus, and even the new generation of endpoint detection and response (EDR) products, are but one layer of protection in an overarching strategy to protect your business from modern threats.

Antivirus (AV) products came onto the scene in the late 1980s, with familiar names like McAfee, Norton, and Avast. These early products relied on signature-based definitions: much as you look up a word in the dictionary, they could catch defined threats, but they would fail to prevent attacks that had not yet been discovered or, worse, threats for which they had not yet downloaded the update that would allow recognition. Traditional AV changed very little until several years ago, with the advent of next-generation antivirus (NGAV), which couples definitions with predictive analytics driven by machine learning to help identify undefined threats.
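The gap between signature-based AV and NGAV can be shown with a toy contrast: a signature match is a lookup against known-bad hashes, which by definition cannot flag a file it has never seen, while an NGAV-style approach scores suspicious traits so that unknown files can still be caught. The hash, the traits, and the weights below are invented for illustration; real NGAV models learn their features from large data sets rather than hand-written rules.

```python
# Toy contrast between signature matching and a trait-based score.
# The hash, traits, and weights are invented for illustration only.
import hashlib

KNOWN_BAD_SHA256 = {
    # placeholder value, not a real malware signature
    "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def signature_match(file_bytes: bytes) -> bool:
    """Classic AV: flag only files whose hash already appears in the definitions."""
    return hashlib.sha256(file_bytes).hexdigest() in KNOWN_BAD_SHA256

def heuristic_score(traits: dict[str, bool]) -> int:
    """NGAV-style idea: score suspicious traits so a never-seen file can still be flagged."""
    weights = {"packed": 2, "disables_backups": 3,
               "mass_file_encryption": 4, "contacts_unknown_host": 2}
    return sum(w for trait, w in weights.items() if traits.get(trait))

sample = b"never-before-seen payload"
print(signature_match(sample))                                               # False: no signature yet
print(heuristic_score({"packed": True, "mass_file_encryption": True}) >= 5)  # True: traits look bad
```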

The latest technology to hit the market is endpoint detection and response (EDR) or extended detection and response (XDR). These technologies continue to use traditional signature-based antivirus and NGAV, but they also introduce the use of artificial intelligence (AI).

AI is used to constantly analyze the behavior of devices so it can detect abnormal activities like high CPU usage, unusual disk activity, or an abnormal amount of outbound traffic. This new generation of software not only detects an attack and warns you that it is occurring, but it can also isolate the attack to the infected device(s) by automatically taking them off the network and protecting the rest of your network. Some EDR products, like SentinelOne, also have threat-hunting capabilities that can map the attack as it unfolds. This mapping aids IT professionals in identifying the devices involved in the attack, a process that can take days or weeks when performed manually. XDR goes a bit further in that it looks beyond the endpoint (PC, laptop, phone) and at the network holistically.
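To illustrate the detect-and-isolate behavior described above, here is a toy version of the decision an EDR agent might make: alert on every detection, and take a device off the network when its behavior score crosses a threshold. The scoring weights and the isolation step are placeholders and do not reflect SentinelOne or any other specific product.

```python
# Toy sketch of EDR-style detect-and-isolate logic. Weights, threshold, and the
# isolation action are placeholders, not any real product's behavior or API.

def behavior_score(cpu_pct: float, disk_iops: int, outbound_mb: float) -> float:
    """Combine a few behavioral signals into one anomaly score (illustrative weights)."""
    return 0.02 * cpu_pct + 0.001 * disk_iops + 0.05 * outbound_mb

def respond(device: str, score: float, threshold: float = 10.0) -> str:
    """Alert on every detection; isolate the device when the score is high enough."""
    if score >= threshold:
        # A real product would push a network-isolation policy to the agent here.
        return f"{device}: ISOLATED from network (score {score:.1f})"
    return f"{device}: alert only (score {score:.1f})"

print(respond("laptop-42", behavior_score(cpu_pct=95, disk_iops=8000, outbound_mb=40)))
print(respond("desktop-07", behavior_score(cpu_pct=12, disk_iops=300, outbound_mb=2)))
```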

A good example of how EDR systems are being used as a layer of protection is how SonicWall firewalls combine a physical firewall with a suite of security capabilities like content filtering, DPI-SSL scanning, geo-blocking, gateway antivirus, and more to filter traffic before it enters your network. Then, with the addition of their Capture Client product (a collaboration between SonicWall and SentinelOne), they integrate the power of SentinelOne EDR with the firewall’s rules. This allows you to extend protections beyond devices inside the network and include company devices outside the network as well. This helps to eliminate gaps in protection that can exist with remote users.

The notion that you are just too small a company to worry about these threats, or that no one wants your data, is a fallacy. Criminals are targeting small companies every day because they are easy targets. Large companies have armies of highly educated and well-paid people protecting their networks, and while a large company might represent a big score, hackers can spend years trying to penetrate a large network. They know smaller organizations have limited budgets and staff to protect their networks. This makes it far more lucrative to hit 50 or 100 small companies for $100,000 each than a single large company for, say, $2 million.

Investing in modern security products, building a sound information security program, and educating your team will pay off in the long run, as the question is not if you will be attacked, but when. The cost of the systems that protect you is far less than the frequently irreparable harm caused by a breach or infection.

Many people say, ‘I have cyber insurance,’ but fail to put the necessary precautions in place to protect their systems and data. Little do they know that, when they filled out the pre-insurance questionnaire and answered ‘yes’ to all the questions without having those measures in place, they gave the insurer the ability to deny a claim. If you do not have written policies, EDR (or at least NGAV), a training program, and multifactor authentication protecting user logins, you could be sealing your own fate. Insurers are no longer baffled by today’s technology and are aggressively investigating cyber claims. In fact, we are seeing significantly increasing numbers of denied claims.

There is little you can do after the fact to offset missing protections or lax enforcement of policies. By taking the appropriate steps to protect your network and systems, you can minimize the risk of falling victim to an attack and help ensure that your insurer will cover a claim. Insurance companies will go to great lengths, and at great cost, to cover legitimate claims; in fact, they can be their own worst enemy. In many ransomware attacks, insurers will simply pay the ransom because doing so is more expeditious than paying for the actual remediation. This, of course, only encourages the criminals while leading to higher premiums and greater risk to our technology infrastructure.

To close, I’d like to leave you with a few things that you can do to better protect your systems, data, and network.

• Take the time to understand what protections you have in place and engage a professional to help you identify any gaps in your information security strategy;

• Educate your staff on information security best practices and the threat spectrum. An educated workforce is one of your best protections. There are several great training tools that are inexpensive and easy to implement, such as KnowBe4;

• Implement a next-generation firewall that utilizes deep packet inspection and take the time to dial in the suite of security features that are designed to stop threats before they get into the network;

• Move to an EDR system rather than relying on a traditional signature-based antivirus;

• Be sure that all systems with access to your networks (computers, network equipment, servers, firewalls, IoT devices, cameras, etc.) are patched regularly to eliminate vulnerabilities that can be easily exploited;

• Do not run unsupported operating systems, equipment, or applications;

• Establish a set of written information security policies, and make sure everyone understands that they need to live by them; and

• Limit those with administrative credentials on your network. If an administrative account is compromised, you have given away the keys to the kingdom. Make sure users only have permission to get to the resources they need to do their job.

 

Charlie Christensen is president of East Longmeadow-based CMD Technology Group; http://www.new.cmdweb.com/; (413) 525-0023.