Balancing the Rewards and Risks of AI Tools
- May 17, 2024
AI’s promise of time and money saved has captivated employees and business leaders alike. But the real question is… is it too good to be true? As enticing as these rewards may be, the risks of this new technology must also be seriously considered.
The Rewards
While it’s talked about plenty, it is worth noting that generative AI has genuinely changed the game when it comes to making AI available to the masses. Large language models (LLMs) like the ones behind ChatGPT have captivated everyday workers and consumers in a way that earlier AI and automation technologies never did. What’s more, almost anyone can use the technology without needing to know anything about coding or statistics.
Generative AI is set to revolutionize various industries by allowing individuals to conduct research, develop software, create content, and perform other tasks more efficiently. According to McKinsey, this technology could potentially add as much as $4.4 trillion to the global economy each year, significantly boosting workforce productivity across different fields. The implications of fully leveraging such technology are vast.
The Risks
However, AI does come with a set of risks -- and depending on your business, the outcomes you want to see, and the kinds of data you use, it's worth carefully considering if the risks outweigh the benefits.
First and foremost, there’s the issue of data. For AI to act on a set of information, it must have visibility into that origin data -- in other words, you can’t ask ChatGPT to write a blog summarizing the last six product updates unless it has access to information detailing those updates. This information, however it is amassed, should be examined for accuracy, relevance, and, most importantly, confidentiality -- which becomes extremely important when dealing with public LLMs. If a user uploads private information to a public LLM, that information may become part of the LLM -- meaning that confidential company information could end up in the public domain. And thanks to End User License Agreements that people often accept but barely read, that data is now at the mercy of the LLM provider and will likely be used to train the model in the future. Identifying areas where internal firewalls and permissions are lacking is imperative if organizations are to avoid data leakage and the loss of proprietary information.
However, it’s not just the data going into the LLM that companies must worry about -- it’s also the information they get back. No one should take the answers produced by generative AI tools at face value. Answers could be biased, inaccurate, or simply made up. It’s important to understand where the model is getting its information and to read its output with a healthy degree of skepticism before using or promoting it in any significant way. And remember, anything produced by generative AI may not qualify for copyright protection and could draw on others’ copyrighted work -- be wary about putting your name on anything written by generative AI, as you could end up in a copyright nightmare.
How to Decide if You’re Ready
To guide what steps your organization should take, and if you’re ready to make the AI leap, consider asking these questions:
- Why do we want to adopt this technology? What results do we want to see from it?
- What use cases would be best suited to seeing these results?
- What generative AI tools will allow us to reach these goals?
- How would our customers and partners feel about us using this technology?
- What could go wrong and what would it mean for the business?
How you answer the questions above will likely inform your next step. A small bank holding lots of personal information might want to avoid using generative AI until it determines how to better safeguard its data within the tool. A law firm, however, might find it helpful for summarizing legal research needed for a case. It depends on the use case, the company, and ultimately, the users.
Policy Setting and Employee Education
If an organization does decide to take the plunge and invest in AI, policy setting and employee education are crucial steps to mitigating risk. A company should develop and share an AI policy outlining acceptable tools, acceptable use, what information can and cannot be entered into an LLM, a summary of the relevant End User License Agreements, and how violations of the policy will be handled. Employees should not only be familiar with the policies, but also receive training on the tools and have a channel for asking questions.
Generative AI adoption is going to look different for every company -- and it might not be an all-or-nothing scenario. A business might decide that employees can use these tools for research, but not for writing, for example. Understanding the risks and rewards is the first step in any new technology deployment. By striking the right balance, developing policies, and mitigating risk, organizations can start to actually reap the benefits of AI while also ensuring its responsible use.