By Paul Coble, Chair of Rose Law Group Technology, AI and IP departments
pcoble@roselawgroup.com
Artificial Intelligence tools are quickly becoming a part of businesses in every industry. AI tools can dramatically increase efficiency and elicit new business insights from existing company data streams. But using the wrong AI tool for the wrong business purpose can introduce unnecessary risk and potentially dedicate proprietary assets to the public domain. Companies in all industries should implement an AI policy to promote the responsible adoption of AI products while simultaneously placing guardrails around how employees use AI for business.
The release of mass-market generative AI tools like ChatGPT has led many executives, managers, and workers to consider ways that the technology could be used to make their jobs easier and their organizations more efficient. Others have been enamored with the potential to quickly generate “original” images, videos, and volumes of text from just about any prompt the user can imagine. But how many are considering how those products were created, what happens with the information the employee enters as a prompt, or who owns the intellectual property in what the AI creates? Failing to take these issues—and many more—into account prior to adopting AI tools in the workplace can lead to severe unintended consequences.
To be effective, a corporate AI policy must be product-specific and tailored to each category of job responsibilities within the organization. A staffer generating a form letter with AI is very different from a copywriter using AI to write ad copy or a developer handing over their coding responsibilities to an algorithm. Moreover, the policy must provide each job category with pragmatic, clear guidance as to which specific products employees can use and for what purposes. If employees cannot quickly understand the bounds of acceptable use, they are not likely to adhere to the policy very closely. If done properly, a comprehensive and pragmatic AI policy can improve:
Risk Management: AI systems can be powerful productivity tools but can also introduce unnecessary risk if not managed correctly. An AI policy helps in identifying, assessing, and mitigating risks associated with AI, including risks related to owning creative assets, exposing sensitive data, and breaching obligations of confidentiality to third parties.
Innovation and Efficiency: A clear policy can streamline the process to develop or implement AI tools, reduce redundancies, and encourage innovation within ethical and legal boundaries. It ensures that AI is used to enhance productivity and efficiency without compromising ethical standards.
Regulatory Compliance: As more governments begin to regulate AI, companies with a well-defined and considered AI policy are better positioned to ensure compliance with this ever-shifting legal landscape.
Professional and Ethical Obligations: An AI policy helps set ethical standards for AI development and usage. This is particularly important as AI systems can have far-reaching impacts on society, including potential biases in decision-making, privacy concerns, and ethical implications in areas like surveillance and data handling.
Brand Reputation and Quality Control: Despite all its promise, many types of generative AI have a trustworthiness problem. Some carry undisclosed biases from their training data; others simply invent facts. Overconfidence in the reliability of AI’s capabilities can irreparably damage a company’s reputation and public trust.
Strategic Alignment: An AI policy helps ensure that AI initiatives are aligned with the company’s overall values and strategic goals.
Crafting a well-thought-out AI policy, however, is only half the job—the human component cannot be ignored. The policy design must start with the workers on the front line to map out which kinds of AI products would have the biggest impact and how those products would fit into the workflow. Once the policy is developed, companies must also effectively communicate the policy to employees and train managers on how to implement it pragmatically. And finally, forward-thinking companies will recognize that defining the role of AI at the company will never be finished. AI is constantly evolving, and so must an effective AI policy. A truly evergreen AI policy will include a feedback mechanism that lets employees seek clarification of the policy or request that new AI tools be added.
With the right AI policy in place, companies can confidently embrace the AI revolution and find new ways to improve their business.
More questions? Contact: pcoble@roselawgroup.com