Features

It’s Not Going Away

By Linn Foster Freedman

Consumers have embraced artificial intelligence (AI) tools in their everyday lives since ChatGPT was released to the public last year. Employees are now using AI technology in their workplaces, which creates risk for companies. In addition, third-party vendors are embedding AI technology into their products and services, often without companies’ knowledge, and are using company data to train their AI tools.

This article provides practical tips on evaluating the use of AI tools within an organization and by third-party vendors, minimizing the associated risks, and approaching the use of AI tools as the technology advances.

Although AI technology has existed for decades, it became mainstream over the past year with the arrival of ChatGPT and its rapid adoption by consumers. When consumers embrace a technology before companies do, it is only a matter of time before they migrate that use into the workplace, whether it is approved or not.

Companies are struggling with how to introduce AI tools into their environment, as the risks associated with AI tools have been well-documented. These include copyright infringement, use and disclosure of personal information and company confidential data, bias and discrimination, hallucinations and misinformation, security risks, and legal and regulatory compliance risks.

These risks are real and compelling, especially when employees share company data with AI tools. Once employees upload company data to an AI tool, the developer may use that data to train its AI model, and the company’s confidential data may become publicly available. Further, many vendors are embedding AI into their products or services, and if you disclose confidential company data to those vendors, they may be using it to train their own AI tools or feeding it to other third-party AI tools.


The risk is daunting, but it is manageable with strategy and planning. Here are some tips on how to get your arms around your employees’ use of AI tools in your organization; tips for managing the risk of vendors’ use of AI tools follow in the next section.

 

Tips for Evaluating Your Organization’s Use of AI Tools

1. Don’t put your head in the sand. AI is here to stay, and your employees are already using it. They don’t understand the risk, but it seems cool, so they are using it and will continue to do so. They will use any tool that makes their jobs easier — that’s human nature. Embrace this fact and commit to addressing the risk sooner rather than later. Ignoring the issue will only make it worse.

2. Don’t prohibit the use of AI tools in your organization. AI tools can be used to increase efficiency in the workplace and to increase business output and profits. Prohibiting their use will put you behind your competition and is a failed strategy. Your employees will use AI tools to make their work lives more efficient, so getting ahead of the risk and communicating with your employees are essential to evaluating and developing the use of AI in your organization.

3. Find out who the entrepreneurs and AI users are in your organization. Encourage them to bring use cases to your attention, and evaluate whether those uses are safe and appropriate. Many uses of AI tools present little risk. Each use case should be evaluated, and proper governance and guardrails should be implemented to minimize risk.

4. Develop and implement an AI governance program. While AI tools are developing rapidly, it is essential to have a central program that governs their use, internally and externally. Assemble an AI governance team from different areas of the organization that will be responsible for keeping tabs on where and when AI is used; establishing a process for evaluating uses, tools, and risks; putting guardrails and contractual measures in place to reduce risk; and implementing processes to minimize the risks of bias, discrimination, regulatory noncompliance, and breaches of confidentiality (a minimal inventory sketch follows this list). The team will start slowly, but once processes are in place, it will mature and pivot as the technology develops.

5. Communicate with your employees often about the risks of using AI tools, the company’s AI governance program, and the guardrails you have put in place. Companies are better now than ever at communicating with employees about security risks, particularly email phishing schemes. Use the same techniques to educate your employees about the risks of using AI tools. They are using ChatGPT because they saw it on the news or a friend told them about it. Use your corporate communications to continually educate employees about using AI tools in the company and why it is important that they follow the governance program you have put in place. Many employees have no idea how AI tools work or that they could inadvertently disclose confidential company information when they use them. Help them understand the risks, make them part of the team, and guide them on how to use AI tools to improve their efficiency.

6. Keep the governance program flexible and nimble. No one likes another committee meeting or extra work to implement another process. Nonetheless, this one is important, so don’t let it get bogged down in bureaucracy. Start by mapping the uses of AI in the organization, evaluating those uses, and learning from each evaluation so the process becomes more efficient going forward. Put processes in place that can be replicated and eventually automated. The hardest and most important work will be setting up the program, but it will get more efficient as you learn from each evaluation. In that sense, the governance program is like a mini-AI tool itself.

7. Be forward-thinking. Technology develops rapidly, and business organizations can hardly keep up. This is an area on which to stay focused and forward-thinking. Start by making someone responsible for staying abreast of the articles, research, laws, and regulations that will be important in developing the governance program. Right now, a great place to start is the White House’s Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. It gives a forward-looking view of how regulation and compliance around AI are developing, and a preview of what’s to come for your governance program.

8. Evaluate the risk of the use of AI tools by vendors. The AI governance team should be intimately involved with evaluating vendors’ use of AI tools, which is discussed in more detail below.
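The mapping-and-evaluation process in tips 4 and 6 can start small. As a purely illustrative sketch (every field name and threshold here is a hypothetical choice, not a prescribed standard), an AI-use inventory might be one structured record per use case with an automated first-pass risk flag:

```python
from dataclasses import dataclass
from enum import Enum


class DataClass(Enum):
    PUBLIC = "public"
    INTERNAL = "internal"
    CONFIDENTIAL = "confidential"


@dataclass
class AIUseCase:
    """One entry in the governance team's AI-use inventory."""
    tool: str                    # e.g., "ChatGPT"
    business_unit: str           # who is using it
    purpose: str                 # what it is used for
    data_handled: DataClass      # most sensitive data the tool sees
    vendor_trains_on_data: bool  # does the provider train on inputs?
    approved: bool = False       # has the governance team signed off?


def needs_review(use: AIUseCase) -> bool:
    """First-pass flag: confidential data or vendor training triggers review."""
    risky = use.data_handled is DataClass.CONFIDENTIAL or use.vendor_trains_on_data
    return risky and not use.approved


inventory = [
    AIUseCase("ChatGPT", "Marketing", "draft blog posts",
              DataClass.PUBLIC, vendor_trains_on_data=True),
    AIUseCase("ChatGPT", "Legal", "summarize contracts",
              DataClass.CONFIDENTIAL, vendor_trains_on_data=True),
]

for use in inventory:
    if needs_review(use):
        print(f"Review: {use.business_unit} uses {use.tool} to {use.purpose}")
```

A shared spreadsheet works just as well at the start; the point is that every use case is captured with the same fields, so evaluations can be repeated and, as tip 6 suggests, eventually automated.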

 

Tips for Evaluating the Risk of Use of AI Tools by Vendors

1. Carefully map which vendors are using AI tools. It might not be readily apparent which of your vendors use AI tools in their products or services. Team up with your business units to determine which vendors are or may be using AI tools to process company data. Then evaluate what data is disclosed to and used by those vendors and determine whether any guardrails need to be put in place with each vendor (a simple triage sketch follows this list).

2. Implement a process with business units to question vendors upfront about their use of AI tools. Business units are closest to the vendors that provide services to them. Give the business units questions to ask when pursuing a relationship with a vendor so the risk of its AI tools can be assessed at the start. The AI governance team can then evaluate the use before contract negotiations begin.

3. Insert contractual language around the disclosure and use of company data and the use of AI tools. Companies may wish to consider developing an information security addendum (ISA) for any vendor with access to the company’s confidential data, if one is not already in place. As the AI governance team evaluates the disclosure of company data to new vendors and the use of AI tools in processing that data, vendors should be questioned about the tools they use, the security measures protecting company data (including against unauthorized use or disclosure by AI tools), and contractual provisions on the use of AI. Contractual language should be clear and concise about the vendor’s obligations and the remedies for a breach of those obligations, including indemnification. This language can be inserted in the ISA or the main contract.

4. Evaluate and map existing vendors’ use of AI tools. Some vendors you have already contracted with may be using AI tools to process confidential company information without your knowledge. Prioritize the vendors that pose the highest risk of processing confidential company data with AI tools and review the existing contracts. If applicable, request an amendment putting appropriate contractual language in place to address the processing of confidential company information with AI tools.

5. Add the evaluation of AI tools to your existing vendor-management program. If you have an existing vendor-management program in place, add the use of AI tools into the program going forward. If you don’t have an existing vendor-management program in place, it’s time to develop one.
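As an equally informal sketch (the vendor names and fields are invented for illustration), the mapping in tips 1 and 4 can feed a simple triage that orders contract reviews by risk:

```python
from dataclasses import dataclass


@dataclass
class Vendor:
    """One row in a vendor AI-use map; all fields are hypothetical."""
    name: str
    uses_ai: bool               # vendor embeds AI in its product or service
    handles_confidential: bool  # vendor processes confidential company data
    isa_in_place: bool          # information security addendum signed
    ai_terms_in_contract: bool  # contract addresses AI use of company data


def risk_tier(v: Vendor) -> str:
    """Crude triage for ordering contract reviews, highest risk first."""
    if v.uses_ai and v.handles_confidential and not v.ai_terms_in_contract:
        return "high"    # confidential data, AI in use, no contractual guardrails
    if v.uses_ai and not v.isa_in_place:
        return "medium"  # AI in use without a signed ISA
    return "low"


vendors = [
    Vendor("Acme Analytics", uses_ai=True, handles_confidential=True,
           isa_in_place=True, ai_terms_in_contract=False),
    Vendor("Print Shop Co.", uses_ai=False, handles_confidential=False,
           isa_in_place=False, ai_terms_in_contract=False),
]

order = {"high": 0, "medium": 1, "low": 2}
for v in sorted(vendors, key=lambda v: order[risk_tier(v)]):
    print(f"{v.name}: {risk_tier(v)} priority")
```

The tiers themselves matter less than having a repeatable rule, so the vendor-management program in tip 5 can apply the same screen to every new and existing vendor.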

 

Conclusion

Now is the time to implement a strategy and plan around the use of AI tools within your organization and externally by your vendors. It seems daunting, but the risk is clear and will persist until you address it. The tips in this article should help you start taking control of AI use within your organization and by your vendors, and minimize the risk, so you can use AI to make your business more efficient and profitable.

 

Linn F. Freedman is a partner and chair of the Data Privacy + Cybersecurity Team at Robinson+Cole.

Cover Story: Cybersecurity

Rise of the Machines

 

Twice a year, Tom Loper participates in a Cybersecurity Advisory Council meeting. The last one was … different.

“I would say there was a sense of concern that I hadn’t seen before at that council because of ChatGPT and the phishing potential,” said Loper, dean of the School of Arts, Sciences and Management at Bay Path University.

He explained that people can use ChatGPT, the AI chatbot that has drawn major worldwide attention since its unveiling last fall, to ingest information from any website, or emails from an organization, and generate a phishing message far more realistic, and far more likely to draw a response, than anything its target has ever received.

“These are people — from Facebook, from Fidelity, from the Hartford, from every major organization you can think of in our area and beyond — who were taken aback by the capabilities of ChatGPT,” Loper said.

“It really scares the hell out of all of us, because we know the biggest problem that we have in cybersecurity, the biggest challenge, comes between the brain and the keyboard,” he explained. “Human beings allow people in. The systems are very good at stopping people from breaching — flags go off, bells and whistles go off. But the biggest problem we have is the human intervention that has to take place. And human beings make mistakes. Especially when we’re connected to the outside world, we make mistakes that allow phishing to take place.”

Tom Loper says ChatGPT is already making work easier for students and professionals, but that raises issues ranging from plagiarism to how jobs might change.

And ChatGPT just made that challenge even more daunting.

But the impact of this and other AI tools extends far beyond cyberthreats.

“AI has the ability to be as impactful as the internet — possibly even as impactful as electricity — on the way business is conducted,” said Delcie Bean, president and CEO of Paragus Strategic IT in Hadley. “We all knew this day was coming for a long time, but now it’s here, and by the end of this decade, the only businesses that will still be in business are the ones that embrace the change.”

Bean explained that these tools allow enormous amounts of work previously done by humans to be completely automated, often in a fraction of the time and with much greater accuracy — and not just basic administrative work.

“We are also talking about highly complex work like computer coding, law, and even practicing medicine,” Bean related. “In a recent demonstration, AI correctly diagnosed 225 cancer cases within 18 minutes and at 85% accuracy, while human doctors took 50 minutes and only achieved a 64% accuracy rate with the same cases. Between now and the end of the decade, we are going to see dozens of new companies and technologies emerging, displacing a lot of legacy processes and technologies at a rapid pace.”

What does that mean for employers, the workforce, and job opportunities in the future? No one has all the answers to that question — although ChatGPT itself took a stab at it for us — but there is broad agreement that change is coming.

“This really challenges all forms of expertise because it’s drawing on this incredible domain of knowledge,” said James Wilson, professor of Business at Bay Path. “Now, the accuracy of it, the citing of it, all that is not there yet. But it will come.”

Wilson recently started teaching courses in a certificate program on digital transformation, which includes discussion of the impact of AI on the workplace.

“It started as a therapy session because the students were like, ‘what’s the future going to be? What skills do I need to have? What’s going to happen?’ It’s very uncertain. We have these things called human skills … which are presumably going to differentiate us from this artificial intelligence — that is, creative thinking, inquiry, critical thinking, collaboration. You’re not going to run your business on ChatGPT.”

Delcie Bean says AI promises to disrupt not only administrative and computer careers, but law, medicine, and a wide range of other fields.

But that’s not the only viewpoint. “The most pessimistic view of this is that AI will do for humans what the automobile did for horses,” Wilson said. “There are some dire projections.”

 

Future … Tense

Loper and Wilson have delved into the capabilities and possibilities of ChatGPT and other AI tools on the higher-education level, and they’ve been impressed.

“ChatGPT is the hot topic because it’s a breakthrough in the way it uses large language processing,” Loper said. “It’s never been made available to the public like this before. The exciting part is how it processes data at a phenomenal speed — it’s absolutely amazing.”

And it’s not just regurgitating knowledge, he stressed. Xiaoxia Liu, assistant professor and director of Applied Data Science at Bay Path, has been using ChatGPT in her classes, helping students finish programming projects.

A Brief ‘Interview’ with ChatGPT

We went right to the source and asked ChatGPT for its take on this article’s topic. These are the answers it provided within seconds.

How will ChatGPT and other AI tools threaten the job market in the coming years?

AI tools and automation have already impacted the job market in many ways, and it is likely that this trend will continue in the coming years. ChatGPT and other AI tools can automate repetitive or low-level tasks, leading to job displacement for those whose work can be automated.

However, it’s essential to note that AI tools and automation can also create new job opportunities, particularly in areas that require specialized skills or expertise in implementing and managing AI technology. Additionally, the use of AI tools can increase productivity and reduce costs, which can result in businesses expanding and creating new job positions.

Overall, the impact of AI on the job market is complex and multi-faceted. While some jobs may be replaced by automation, new opportunities may arise, and existing roles may evolve and require different skill sets. It is essential for individuals and businesses to adapt to these changes by acquiring new skills and embracing the opportunities presented by AI.

What are some jobs that ChatGPT and other AI tools will impact the most?

ChatGPT and other AI tools are likely to impact a range of jobs, particularly those that involve repetitive or low-level tasks that can be automated. Some examples of jobs that may be impacted the most include:

• Customer-service representatives: ChatGPT can automate many customer service tasks, such as answering common questions or directing customers to the appropriate department.

• Data-entry clerks: AI tools can be used to scan and input data, reducing the need for manual data-entry clerks.

• Factory workers: AI-powered robots can perform tasks such as assembly-line work, reducing the need for human workers.

• Drivers: The use of self-driving cars and trucks may impact the job market for drivers in the coming years.

• Financial analysts: AI tools can analyze financial data and identify patterns, potentially reducing the need for human financial analysts.

• Retail salespeople: The rise of e-commerce and automated checkout systems may reduce the need for human retail salespeople.

• Administrative assistants: AI tools can be used to automate tasks such as scheduling appointments, managing email, and organizing files, reducing the need for human administrative assistants.

It’s important to note that the impact of AI on these jobs may vary depending on the industry, location, and other factors. While AI may automate some tasks, it may also create new job opportunities in areas such as data analysis, AI development and maintenance, and other specialized roles.

Write a haiku about how AI might change the way people work.

Machines learn and grow.
Work becomes more efficient.
Humans thrive with ease.

“She gives them a start, and then they have to finish it; they use Python or whatever to finish it,” Loper explained. “Now, she’s letting them use ChatGPT, and they’re finishing it in seconds. And it’s very accurate, very fast. She’s excited about that. That’s an example where ChatGPT uses large language learning to help it finish something that you started without you giving it much guidance at all, other than the start of the program.”
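To make the classroom workflow Loper describes concrete, here is a hypothetical example (the assignment, the stub, and the completion are invented for illustration, not taken from Liu’s course): the instructor supplies the start of a Python program, and ChatGPT can typically supply a working finish in seconds.

```python
# The instructor's "start": a stub the student must complete.
def word_frequencies(path):
    """Return a dict mapping each word in the file at `path` to its count."""
    counts = {}
    # ... the student (or ChatGPT) takes it from here ...
    return counts


# A completion of the kind ChatGPT produces from that stub alone:
def word_frequencies(path):
    """Return a dict mapping each word in the file at `path` to its count."""
    counts = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            for word in line.lower().split():
                word = word.strip(".,;:!?\"'()")  # drop surrounding punctuation
                if word:
                    counts[word] = counts.get(word, 0) + 1
    return counts


if __name__ == "__main__":
    # "essay.txt" is a placeholder input file for this sketch.
    top = sorted(word_frequencies("essay.txt").items(), key=lambda kv: -kv[1])
    for word, n in top[:10]:
        print(word, n)
```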

Loper noted that ChatGPT also gives different answers to questions based on who’s asking and in what way.

“The algorithm, for whatever reason, is drawing on what it thinks is ideal, but your past references influence the way that it searches. The type of question you ask generates a certain type of format and answer. So if you’re asking a business question, you get an answer in a business format. If you’re asking a question for a literary magazine, you get a different format.”
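As a rough sketch of what Loper is describing, the same underlying question framed two ways tends to come back in two different registers. This uses the openai Python package’s pre-1.0 interface; the package version, model name, and prompts are assumptions for illustration, and an API key must be set in the OPENAI_API_KEY environment variable.

```python
import openai  # pre-1.0 interface; newer versions use a client object instead


def ask(prompt: str) -> str:
    # The library reads the API key from the OPENAI_API_KEY environment variable.
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content


# Business framing: expect a terse, memo-like answer.
print(ask("As a brief memo to executives: why should we adopt AI tools?"))

# Literary framing: expect flowing, essayistic prose.
print(ask("For a literary magazine essay: why might a business adopt AI tools?"))
```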

And that raises issues with academic plagiarism, Loper noted, because professors can no longer paste a chunk of a student’s work into Google to get a definitive answer as to whether something was lifted, verbatim, from another source.

He has experimented with generating presentations from ChatGPT based on a series of prompts, and recognizes the ramifications for students. “It was logically laid out and put in a format that, if a student gave it to me, I would say, ‘damn, that’s good. You really learned this material.’”

When it comes to cracking down on plagiarism, Wilson added, “we might have to abandon ship on that in a way, because it’s not so much about being original anymore as being creative in your inquiry and critical in your understanding of it.”

Wilson called up other AI tools as well during his talk with BusinessWest, from Butternut AI, which can build a website in 20 seconds, to Pictory AI, which generates videos, to Wondercraft AI, which asks for discussion prompts and will generate a full podcast, featuring multiple voices.

“I teach a business-analytics class, where it was all research, research, research. I don’t think it’s about research anymore,” he said of the way AI will affect academia. “I think it’s about asking the right questions. It’s about the right inquiry. It may not be about writing anymore. It may be about editing and getting a draft from the AI expert and then adjusting it. The amount of content that can be created is staggering.”

Even classroom lectures can benefit, he added. “I can put in a few prompts, and it generates an entire lecture. I can go in and change the text, which will then be re-narrated through AI. Suddenly, all my content is better organized.”

Amid all these implications is the compelling idea that AI will only get sharper.


“We’ve all gotten used to Siri, and we’ve all gotten used to Google, but now you’re going to have this super-intelligent, conversational assistant with you,” Wilson said.

Loper added that these discussions are no longer theoretical. He noted that speakers at the World Economic Forum in Davos, among others, have been thinking seriously about what types of work are going to be replaced by artificial intelligence and what careers will continue to be dominated by human beings, with their unique sensing and critical skills.

“Human beings aren’t going away any time soon, but we’re going to have a level of augmentation that we’ve never experienced, and we don’t know how to work with it yet. It’s so new,” he added. “James and I are playing with ChatGPT, and we’re kind of in awe of it, but we’re just skimming the surface compared to some of the ways people are using it. It’s just amazing.”

Added Wilson, “if you try to imagine this in a much smaller sense, it’s like when the smartphone came out — how did that change business? Texting and emailing and video chat reconfigured the way things are done, but in a smaller sense.”

Loper agreed. “This is much bigger than anything like that.”

 

Risk and Reward

Przemyslaw Grabowicz, a computer scientist in the College of Information and Computer Sciences at UMass Amherst, is heading up a research initiative called EQUATE (which stands for equity, accountability, trust, and explainability), which is currently developing a coordinated response to the Biden administration’s request for public comment on its AI Accountability Policy.

“As a computer scientist, I believe technology can make our lives better, maybe in some senses easier,” he told BusinessWest. “But I think there’s a risk that, if we step into new technologies too quickly, then society may develop a distrust for new technology that may, in the end, slow down developments.”

The National Telecommunications and Information Administration (NTIA), a Commerce Department agency that advises the White House on telecommunications and information policy, is studying whether measures could be implemented to assure that AI systems are “legal, effective, ethical, safe, and otherwise trustworthy.”

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” NTIA Administrator Alan Davidson told Reuters. “For these systems to reach their full potential, companies and consumers need to be able to trust them.”

In crafting accountability policies, Grabowicz said, leaders in all areas of life need to think carefully about the consequences of technology development and about ways to convert its gains into long-term societal benefit rather than short-term profit. If not, such technology may contribute to the growth of misinformation and polarization.

“As a society, nobody wants these kinds of consequences, but if corporations focus on short-term financial gain, they may not consider the potential harmful consequences of technology being used in a way that it wasn’t meant to when it was developed.”

Such questions, Bean noted, will be further accelerated by advances in other technologies, especially robotics. “We are rapidly approaching the day when there will be free-standing robots in our lives who are able to think, make decisions, and interact with the world around them.”

In terms of security, he went on, it is hard to quantify the threat. “With Microsoft’s new tool VALL-E, which can mimic a human voice with a sample size as small as three seconds; deepfakes being able to be produced in minutes by anyone with basic computer skills; and more and more data being available to be mined, we are going to need to rethink security.

“While it is possible to imagine how technology will respond to meet these threats, the risk to businesses is the gap that exists in between the threats coming online and the response being available and adopted,” he added. “A lot of businesses are likely to face real threats in that gap — not to mention physical security, things like hacking a moving vehicle or sending a robot to conduct a robbery.”

In short, Bean said, “while there is much to look forward to, there are certainly many threats that will need to be understood and addressed.”

Meanwhile, artificial intelligence continues to evolve — in ways we may not even see coming.