Law

A Brave New Year

By Lauren C. Ostberg, Esq. and Michael McAndrew, Esq.

 

Artificial intelligence — specifically, natural-language chatbots like ChatGPT, Bard, and Watson — has been making headlines over the past year, whether it’s college writing teachers’ attempts to avoid reading machine-generated essays, the boardroom drama of OpenAI, the SAG-AFTRA strike, or existential anxiety about the singularity.

On the frivolous end of the spectrum, one of the authors of this piece used ChatGPT to find celebrity lookalikes for various attorneys at their firm, and learned that ChatGPT defaults to the assumption that, irrespective of race, gender, or facial features, most people (including Lauren Ostberg) look like Ryan Reynolds. On the more serious end, the legislatures of state governments, including those in Massachusetts and Connecticut, have labored over bills that would harness, regulate, and investigate the power of AI.

In Massachusetts, for example, the Legislature is considering two bills: one (H.1873) “To Prevent Dystopian Work Environments,” and another (S.31) titled “An Act Drafted with the Help of ChatGPT to Regulate Generative Artificial Intelligence Models Like ChatGPT.” The former would require employers using any automatic decision-making system to disclose the use of such systems to their employees and give employees the opportunity to review and correct the worker data on which those systems relied. The latter, sponsored by Hampden County’s state Sen. Adam Gomez, aims to regulate newly spawned AI models.

While the use of AI to draft S.31 is, in its own right, an interesting real-world application of the technology, the bill’s substance matters more: it proposes a regulatory regime whereby “large-scale generative artificial intelligence models” would be required to register with the attorney general. In registering, AI companies would have to disclose detailed information, including “a description of the large-scale generative artificial intelligence model, including its capacity, training data, intended use, design process, and methodologies.”

In addition to requiring the registration of AI companies, S.31 (if passed) would also require them to implement standards to prevent plagiarism and to protect individually identifiable information used as part of their training data. AI companies would have to “obtain informed consent” before using individuals’ data. To ensure compliance, the bill gives the attorney general enforcement powers and the authority to propound regulations consistent with the bill.

While S.31 provides robust protections against using data gathered from citizens of the Commonwealth to train AI models, it may fail because of the sheer amount of disclosure it requires. Operating in a new and fast-moving field, AI companies may be hesitant to reveal their processes to the extent the bill demands.

Though commendable in its effort to protect creators and citizens, S.31 may ultimately drive AI-based businesses out of the Commonwealth if they fear that competitively sensitive details of their AI processes will be exposed through the public registry the bill envisions. The structure of that registry is currently unclear, however; only time will tell how much information would be available to the public, and whether S.31 (or H.1873, referenced above) makes it out of committee and into law.

Meanwhile, in Connecticut

This past June, Connecticut passed a law, SB-1103, that recognizes the dystopian potential of government using AI to make decisions about the treatment of its citizens. It requires that, on or before Dec. 31, 2023, Connecticut’s executive and judicial branches conduct and make available “an inventory of all their systems that employ artificial intelligence.” (That is, it asks the machinery of the state to reveal itself, in part.)

By Feb. 1, 2024, the executive and judicial branches must also conduct (and publicly disclose) an “impact assessment” to ensure that systems using AI “will not result in unlawful discrimination or a disparate impact against specified individuals.” ChatGPT’s presumption, noted above, that every person is a symmetrically faced white man would be much more serious in the context of an automated decision-making system that impacts the property, liberty, and quality of life of Connecticut residents.

This proposed legislation is, of course, just the beginning of government’s attempts to grapple with the ‘responsible use’ (an Orwellian term, if ever there was one) of AI and technology. Massachusetts has proposed the creation of a commission to address the executive branch’s use of automated decision making; Connecticut’s new law has mandated a working group to consider an ‘AI Bill of Rights’ modeled after a federal blueprint for the same. The results — and the inventory, and the assessments — remain to be seen in the new year.

 

Lauren C. Ostberg is a partner, and Michael McAndrew an associate, at Bulkley Richardson, the largest law firm in Western Mass. Ostberg, a key member of the firm’s intellectual property and technology group, co-chairs the firm’s cybersecurity practice. McAndrew is a commercial litigator who seeks to understand the implications and risks of businesses adopting AI.

Cover Story: Cybersecurity

Rise of the Machines

 

Twice a year, Tom Loper participates in a Cybersecurity Advisory Council meeting. The last one was … different.

“I would say there was a sense of concern that I hadn’t seen before at that council because of ChatGPT and the phishing potential,” said Loper, dean of the School of Arts, Sciences and Management at Bay Path University.

He explained that people can feed ChatGPT, the AI chatbot that has drawn major worldwide attention since its unveiling last fall, information from any website, or emails from an organization, and generate a phishing message far more realistic, and far more likely to draw a response, than anything its target has received before.

“These are people — from Facebook, from Fidelity, from the Hartford, from every major organization you can think of in our area and beyond — who were taken aback by the capabilities of ChatGPT,” Loper said.

“It really scares the hell out of all of us, because we know the biggest problem that we have in cybersecurity, the biggest challenge, comes between the brain and the keyboard,” he explained. “Human beings allow people in. The systems are very good at stopping people from breaching — flags go off, bells and whistles go off. But the biggest problem we have is the human intervention that has to take place. And human beings make mistakes. Especially when we’re connected to the outside world, we make mistakes that allow phishing to take place.”

Tom Loper says ChatGPT is already making work easier for students and professionals, but that raises issues ranging from plagiarism to how jobs might change.

And ChatGPT just made that challenge even more daunting.

But the impact of this and other AI tools extends far beyond cyberthreats.

“AI has the ability to be as impactful as the internet — possibly even as impactful as electricity — on the way business is conducted,” said Delcie Bean, president and CEO of Paragus Strategic IT in Hadley. “We all knew this day was coming for a long time, but now it’s here, and by the end of this decade, the only businesses that will still be in business are the ones that embrace the change.”

Bean explained that these tools allow enormous amounts of work previously done by humans to be completely automated, often in a fraction of the time and with much greater accuracy — and not just basic administrative work.

“We are also talking about highly complex work like computer coding, law, and even practicing medicine,” Bean related. “In a recent demonstration, AI correctly diagnosed 225 cancer cases within 18 minutes and at 85% accuracy, while human doctors took 50 minutes and only achieved a 64% accuracy rate with the same cases. Between now and the end of the decade, we are going to see dozens of new companies and technologies emerging, displacing a lot of legacy processes and technologies at a rapid pace.”

What does that mean for employers, the workforce, and job opportunities in the future? No one has all the answers to that question — although ChatGPT itself took a stab at it for us in the sidebar below — but there is broad agreement that change is coming.

“This really challenges all forms of expertise because it’s drawing on this incredible domain of knowledge,” said James Wilson, professor of Business at Bay Path. “Now, the accuracy of it, the citing of it, all that is not there yet. But it will come.”

Wilson recently started teaching courses in a certificate program on digital transformation, which includes discussion of the impact of AI on the workplace.

“It started as a therapy session because the students were like, ‘what’s the future going to be? What skills do I need to have? What’s going to happen?’ It’s very uncertain. We have these things called human skills … which are presumably going to differentiate us from this artificial intelligence — that is, creative thinking, inquiry, critical thinking, collaboration. You’re not going to run your business on ChatGPT.”

Delcie Bean says AI promises to disrupt not only administrative and computer careers, but law, medicine, and a wide range of other fields.

But that’s not the only viewpoint. “The most pessimistic view of this is that AI will do for humans what the automobile did for horses,” Wilson said. “There are some dire projections.”

 

Future … Tense

Loper and Wilson have delved into the capabilities and possibilities of ChatGPT and other AI tools on the higher-education level, and they’ve been impressed.

“ChatGPT is the hot topic because it’s a breakthrough in the way it uses large language processing,” Loper said. “It’s never been made available to the public like this before. The exciting part is how it processes data at a phenomenal speed — it’s absolutely amazing.”

And it’s not just regurgitating knowledge, he stressed. Xiaoxia Liu, assistant professor and director of Applied Data Science at Bay Path, has been using ChatGPT in her classes, helping students finish programming projects.

A Brief ‘Interview’ with ChatGPT

We went right to the source and asked ChatGPT for its take on this article’s topic. These are the answers it provided within seconds.

How will ChatGPT and other AI tools threaten the job market in the coming years?

AI tools and automation have already impacted the job market in many ways, and it is likely that this trend will continue in the coming years. ChatGPT and other AI tools can automate repetitive or low-level tasks, leading to job displacement for those whose work can be automated.

However, it’s essential to note that AI tools and automation can also create new job opportunities, particularly in areas that require specialized skills or expertise in implementing and managing AI technology. Additionally, the use of AI tools can increase productivity and reduce costs, which can result in businesses expanding and creating new job positions.

Overall, the impact of AI on the job market is complex and multi-faceted. While some jobs may be replaced by automation, new opportunities may arise, and existing roles may evolve and require different skill sets. It is essential for individuals and businesses to adapt to these changes by acquiring new skills and embracing the opportunities presented by AI.

What are some jobs that ChatGPT and other AI tools will impact the most?

ChatGPT and other AI tools are likely to impact a range of jobs, particularly those that involve repetitive or low-level tasks that can be automated. Some examples of jobs that may be impacted the most include:

• Customer-service representatives: ChatGPT can automate many customer service tasks, such as answering common questions or directing customers to the appropriate department.

• Data-entry clerks: AI tools can be used to scan and input data, reducing the need for manual data-entry clerks.

• Factory workers: AI-powered robots can perform tasks such as assembly-line work, reducing the need for human workers.

• Drivers: The use of self-driving cars and trucks may impact the job market for drivers in the coming years.

• Financial analysts: AI tools can analyze financial data and identify patterns, potentially reducing the need for human financial analysts.

• Retail salespeople: The rise of e-commerce and automated checkout systems may reduce the need for human retail salespeople.

• Administrative assistants: AI tools can be used to automate tasks such as scheduling appointments, managing email, and organizing files, reducing the need for human administrative assistants.

It’s important to note that the impact of AI on these jobs may vary depending on the industry, location, and other factors. While AI may automate some tasks, it may also create new job opportunities in areas such as data analysis, AI development and maintenance, and other specialized roles.

Write a haiku about how AI might change the way people work.

Machines learn and grow.
Work becomes more efficient.
Humans thrive with ease.

“She gives them a start, and then they have to finish it; they use Python or whatever to finish it,” Loper explained. “Now, she’s letting them use ChatGPT, and they’re finishing it in seconds. And it’s very accurate, very fast. She’s excited about that. That’s an example where ChatGPT uses large language learning to help it finish something that you started without you giving it much guidance at all, other than the start of the program.”
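For readers curious what that hand-off looks like in practice, here is a minimal sketch of the pattern Loper describes, assuming the official OpenAI Python client; the homework stub and prompt are hypothetical illustrations, not Liu’s actual assignment.

# A minimal sketch of the classroom workflow described above: take the
# professor-provided "start" of a program and ask ChatGPT to finish it.
# Assumes the official OpenAI Python client; the stub is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable

stub = '''def moving_average(values, window):
    """Return the average of each consecutive `window`-sized slice of `values`."""
    # TODO: finish this function
'''

response = client.chat.completions.create(
    model="gpt-4",  # any chat-capable model works here
    messages=[
        {"role": "user", "content": "Complete this Python function:\n\n" + stub},
    ],
)

print(response.choices[0].message.content)  # the model's finished program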

Loper noted that ChatGPT also gives different answers to questions based on who’s asking and in what way.

“The algorithm, for whatever reason, is drawing on what it thinks is ideal, but your past references influence the way that it searches. The type of question you ask generates a certain type of format and answer. So if you’re asking a business question, you get an answer in a business format. If you’re asking a question for a literary magazine, you get a different format.”

And that raises issues with academic plagiarism, Loper noted, because professors can no longer throw a chunk of a student’s work into Google to get a definite take on whether something was lifted, verbatim, from another source.

He has experimented with generating presentations from ChatGPT based on a series of prompts, and recognizes the ramifications for students. “It was logically laid out and put in a format that, if a student gave it to me, I would say, ‘damn, that’s good. You really learned this material.’”

When it comes to cracking down on plagiarism, Wilson added, “we might have to abandon ship on that in a way, because it’s not so much about being original anymore as being creative in your inquiry and critical in your understanding of it.”

Wilson called up other AI tools as well during his talk with BusinessWest, from Butternut AI, which can build a website in 20 seconds, to Pictory AI, which generates videos, to Wondercraft AI, which asks for discussion prompts and will generate a full podcast, featuring multiple voices.

“I teach a business-analytics class, where it was all research, research, research. I don’t think it’s about research anymore,” he said of the way AI will affect academia. “I think it’s about asking the right questions. It’s about the right inquiry. It may not be about writing anymore. It may be about editing and getting a draft from the AI expert and then adjusting it. The amount of content that can be created is staggering.”

Even classroom lectures can benefit, he added. “I can put in a few prompts, and it generates an entire lecture. I can go in and change the text, which will then be re-narrated through AI. Suddenly, all my content is better organized.”

Amid all these implications is the compelling idea that AI will only get sharper.

“We’ve all gotten used to Siri, and we’ve all gotten used to Google, but now you’re going to have this super-intelligent, conversational assistant with you,” Wilson said.

Loper added that these discussions are no longer theoretical. He noted that speakers at the World Economic Forum in Davos, among others, have been thinking seriously about what types of work are going to be replaced by artificial intelligence and what careers will continue to be dominated by human beings, with their unique sensing and critical skills.

“Human beings aren’t going away any time soon, but we’re going to have a level of augmentation that we’ve never experienced, and we don’t know how to work with it yet. It’s so new,” he added. “James and I are playing with ChatGPT, and we’re kind of in awe of it, but we’re just skimming the surface compared to some of the ways people are using it. It’s just amazing.”

Added Wilson, “if you try to imagine this in a much smaller sense, it’s like when the smartphone came out — how did that change business? Texting and emailing and video chat reconfigured the way things are done, but in a smaller sense.”

Loper agreed. “This is much bigger than anything like that.”

 

Risk and Reward

Przemyslaw Grabowicz, a computer scientist in the College of Information and Computer Sciences at UMass Amherst, is heading up a research initiative called EQUATE (which stands for equity, accountability, trust, and explainability), which is currently developing a coordinated response to the Biden administration’s request for public comment on its AI Accountability Policy.

“As a computer scientist, I believe technology can make our lives better, maybe in some senses easier,” he told BusinessWest. “But I think there’s a risk that, if we step into new technologies too quickly, then society may develop a distrust for new technology that may, in the end, slow down developments.”

The National Telecommunications and Information Administration (NTIA), a Commerce Department agency that advises the White House on telecommunications and information policy, is studying whether there are measures that could be implemented to assure that AI systems are “legal, effective, ethical, safe, and otherwise trustworthy.”

“Responsible AI systems could bring enormous benefits, but only if we address their potential consequences and harms,” NTIA Administrator Alan Davidson told Reuters. “For these systems to reach their full potential, companies and consumers need to be able to trust them.”

In crafting accountability policies, Grabowicz said, leaders in all areas of life need to think carefully about the consequences of technology development and ways in which profits from this development will be converted into long-term societal gain rather than short-term profits. If not, such technology may contribute to the growth of misinformation and polarization.

“As a society, nobody wants these kinds of consequences, but if corporations focus on short-term financial gain, they may not consider the potential harmful consequences of technology being used in a way that it wasn’t meant to when it was developed.”

Such questions, Bean noted, will be further accelerated by advances in other technologies, especially robotics. “We are rapidly approaching the day when there will be free-standing robots in our lives who are able to think, make decisions, and interact with the world around them.”

In terms of security, he went on, it is hard to quantify the threat. “With Microsoft’s new tool VALL-E, which can mimic a human voice with a sample size as small as three seconds; deepfakes being able to be produced in minutes by anyone with basic computer skills; and more and more data being available to be mined, we are going to need to rethink security.

“While it is possible to imagine how technology will respond to meet these threats, the risk to businesses is the gap that exists in between the threats coming online and the response being available and adopted,” he added. “A lot of businesses are likely to face real threats in that gap — not to mention physical security, things like hacking a moving vehicle or sending a robot to conduct a robbery.”

In short, Bean said, “while there is much to look forward to, there are certainly many threats that will need to be understood and addressed.”

Meanwhile, artificial intelligence continues to evolve — in ways we may not even see coming.