
Artificial intelligence (AI) is everywhere, from playful applications such as generating pictures or videos to more serious uses such as drafting emails, sifting through piles of data or handling routine business and customer service functions.
This rapidly evolving field is simplifying life for many an overloaded worker. But it's no easy button; there's a catch. Using some AI tools, or allowing employees to use them without a clear policy, can expose businesses to privacy and other risks.
How does AI work?
AI uses the data people feed into it to make itself smarter: it collects and sorts huge amounts of data, analyzes it for patterns, and uses what it finds to make decisions and assess outcomes.
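To make that concrete, here is a toy sketch of learning from data, assuming Python with the scikit-learn library installed; all of the example emails and categories are invented. The little classifier sorts text into word counts, learns which patterns go with which label and then assesses an input it has never seen.

```python
# A toy sketch of "learning from data," assuming Python with
# scikit-learn installed. All emails and labels are invented.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = [
    "Your invoice is overdue, please remit payment",
    "Thanks for your order, it ships tomorrow",
    "Final notice: your account is past due",
    "Your shipment is on the way",
]
labels = ["billing", "shipping", "billing", "shipping"]

# Collect and sort the data: turn each email into counts of the words it uses.
vectorizer = CountVectorizer()
features = vectorizer.fit_transform(emails)

# Analyze it: learn which word patterns go with which category.
model = MultinomialNB()
model.fit(features, labels)

# Assess an outcome on text the model has never seen before.
new_email = ["Payment has not been received for your account"]
print(model.predict(vectorizer.transform(new_email)))  # ['billing']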
Consider a concrete example. One popular use of AI is asking it to write an email, perhaps to customers explaining a price increase. To do that, a human feeds information about the customers and the policy into the AI agent and prompts it to generate the email. Dropping that information into the tool, which stores it and then uses it to learn, could expose the business to liability by putting customer and other proprietary data at risk.
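One mitigation worth sketching, as an illustration rather than a prescription, is to redact identifying details before a prompt ever leaves the company. Everything below is hypothetical: the redact() patterns are deliberately simplistic, and send_to_ai_tool() is a stand-in for whatever service a policy might approve, not a real API.

```python
# A minimal sketch of redacting customer details before a prompt leaves
# the building. The patterns and send_to_ai_tool() are hypothetical
# placeholders, not a real service's API.
import re

def redact(text: str) -> str:
    """Replace common identifiers so the prompt carries no customer data."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.-]+", "[EMAIL]", text)           # email addresses
    text = re.sub(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b", "[PHONE]", text)  # US phone numbers
    text = re.sub(r"\b\d{4,}\b", "[ACCOUNT]", text)                       # long digit runs
    return text

def send_to_ai_tool(prompt: str) -> None:
    # Stand-in for whatever AI service the company's policy approves.
    print("Prompt actually sent:", prompt)

draft_request = (
    "Write an email to jane.doe@example.com (account 1048222, "
    "phone 555-867-5309) explaining our upcoming price increase."
)
send_to_ai_tool(redact(draft_request))
# Prompt actually sent: Write an email to [EMAIL] (account [ACCOUNT],
# phone [PHONE]) explaining our upcoming price increase.
```

Pattern-based redaction is only a first line of defense; a policy still has to spell out what data may be shared with outside tools at all.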
This is especially true if employees are taking advantage of free AI tools, such as the no-cost versions of ChatGPT, Google's Gemini and others. Remember: if the technology is free, you are the product.
How do employees feel about AI?
A 2023 Workday survey found that 73% of employees wanted their company to implement a policy on the use of AI tools, and a majority (61%) thought they could benefit from training. Many (69%) were concerned about personal data, and about half (53%) worried AI would put them out of a job.
Earlier this year, a TPS survey found that around half of respondents use AI constantly throughout the day. The top tasks they use the technology for include data analysis (62%), communication (54%), parts operations (46%), and accounting and finances (46%).
Strategic, transparent policy discussion can help ease employees' minds. According to a 2024 Gallup survey on using AI in the workplace, only 6% of employees felt prepared to use AI in their roles. A third (32%) said they are very uncomfortable using AI, and researchers note that the share of employees who said they were very prepared to work with AI dropped six percentage points from 2023 to 2024, which may signal employee fears about bringing AI into the workplace. Furthermore, even though employees are more likely to use AI, and to feel prepared to use it, when a policy is in place, 70% said their organization lacks guidance for using AI at work.
“This lack of guidance limits employee use of AI technology and creates security risks,” Gallup says. “By establishing clear guidelines for usage, organizations can empower their workforce to use these technologies effectively and securely, ensuring AI serves as a tool for furthering innovation and efficiency, rather than a liability.”
Policies provide protection
A clear, cohesive AI policy not only gives employees guidance on using these tools; it can also prompt open conversations about using the technology to improve business practices and pave the way for brainstorming ways to integrate the tools into company workflows.
So what should go into a policy?
The Corporate Governance Institute says the first step is to establish a working group to oversee the policy's creation, gather expertise and ensure representation. The group should include people from across the company, and those members, along with any other relevant leaders, should be educated on AI and its ethical implications.
The group should then define the primary objectives in adopting AI, the institute says, assessing the ethical principles and values of using this technology. Consider fairness, transparency, accountability and human well-being in the discussions. Don’t forget legal and regulatory compliance, including with any state laws that may apply to AI use.
Next, identify potential AI use cases and the risks each one carries. Develop guidelines and best practices to mitigate those risks, establish accountability, define responsibilities and ensure ethical decision-making when using AI. Don't limit the discussion to current uses, either; be imaginative about the ways the organization can use AI now and in the future.
Establish transparency and responsibility throughout the process and ensure stakeholders can understand the basis of AI decisions and appropriately raise concerns. Encourage continuous monitoring and evaluation, looking at performance of the technology and its impact on workflows. Evaluate policies and procedures regularly to keep them effective and make any necessary adjustments.
Once the policy is set, communicate it to the workforce. Use clear, accessible language and provide practical guidance for employees at all levels of the organization. Rely on a variety of communication tools, including emails, meetings and training sessions, to make sure everyone understands the appropriate use of AI in the workplace.