How to Create Guidelines for Using AI at Work

We’ve compiled a set of useful guidelines for implementing AI applications. The first step is to establish AI usage policies and agree on the rules with your staff. The following principles will help you get started.

Guidelines for Employees

Approved Tools

The landscape of language-model-based tools is expanding rapidly, and new tools are constantly being praised on social media. The temptation to try the latest ones on work tasks is real. However, use only the tools your company has approved: this ensures the applications are reliable, secure, and compliant with data protection policies. Since the tool landscape is constantly evolving, it is also necessary to establish a process for evaluating and approving new tools for use.

Confidential Information

Do not provide personal data or other sensitive information, such as trade secrets, to AI applications unless you know how and where the data will be processed. AI applications analyze, modify, or generate new content based on the information you provide, and you may not know where or how that data is processed, stored, or shared. This is especially important under EU data protection law, which requires personal data to be handled and stored only to the extent necessary and restricts transfers of data outside the EU, where data protection may be less stringent.
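One practical safeguard is to mask obvious personal identifiers before text ever leaves your environment. A minimal sketch in Python; the `redact` helper and its regex patterns are illustrative only and would need to be extended for real use — regexes alone cannot catch every kind of personal data.

```python
import re

def redact(text: str) -> str:
    """Mask common personal identifiers before sending text to an
    external AI service. Patterns are illustrative, not exhaustive."""
    # Email addresses
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    # Phone numbers (simple international/local digit runs)
    text = re.sub(r"\+?\d[\d\s-]{7,}\d", "[PHONE]", text)
    return text

print(redact("Contact Jane at jane.doe@example.com or +358 40 123 4567."))
# -> Contact Jane at [EMAIL] or [PHONE].
```

A pass like this is a complement to, not a substitute for, using only approved tools whose data handling is known.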

AI’s Limitations

Remember, AI applications are not flawless. Even the best tools can produce inaccurate or fabricated information. Always critically review the output of AI tools. You can improve AI performance with well-crafted prompts: provide precise input, sufficient context, and use iteration. Familiarize yourself with the concept of prompt engineering. Here’s an example prompt you can test with a chatbot to learn more: "Explain the term 'prompt engineering' to someone unfamiliar with the topic. Discuss why it improves AI responses and cover techniques such as role definition, step-by-step instructions, response formatting, audience targeting, question-based inquiry, clear precision, and providing examples."
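The techniques listed above can also be combined programmatically when prompts are built in code. A minimal sketch in Python; the `build_prompt` helper and the example values are hypothetical, and the resulting string would be sent to whichever approved chatbot or API your company uses.

```python
def build_prompt(role: str, task: str, steps: list[str], audience: str,
                 response_format: str, example: str) -> str:
    """Assemble a prompt from common prompt-engineering techniques:
    role definition, step-by-step instructions, audience targeting,
    response formatting, and providing an example."""
    numbered = "\n".join(f"{i}. {s}" for i, s in enumerate(steps, 1))
    return (
        f"You are {role}.\n"                          # role definition
        f"Task: {task}\n"                             # clear, precise task
        f"Follow these steps:\n{numbered}\n"          # step-by-step instructions
        f"Write for {audience}.\n"                    # audience targeting
        f"Format the answer as {response_format}.\n"  # response formatting
        f"Example of the desired style:\n{example}"   # providing examples
    )

prompt = build_prompt(
    role="an experienced technical writer",
    task="summarize our AI usage policy",
    steps=["List the approved tools", "Explain the data handling rules"],
    audience="new employees with no AI background",
    response_format="a bulleted list",
    example="- Use only company-approved tools.",
)
print(prompt)
```

Keeping prompt structure explicit like this makes it easier to iterate: change one element, compare the output, and keep what works.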


Terms of Use

Understand the service's terms of use, intellectual property rights, and licensing agreements. When entering content into an AI system, be aware of the potential intellectual property implications, and ensure the material you use complies with its usage rights. If AI generates content based on your input, questions arise about who owns the derivative work and whether you have the right to use it; risks such as copyright infringement can follow. AI services often include terms and license restrictions defining permissible use cases. For example, AI-generated content may not be eligible for commercial use without a separate license.

For IT Managers and Decision-Makers

Incorporate AI policies into the annual plan. Designate a responsible person or team to oversee and regularly update AI policies. Encourage employees to provide feedback to improve policies and address concerns. Clear guidelines and continuous dialogue help manage the rapidly evolving AI tool landscape. This approach ensures potential conflicts are addressed transparently, constructively, and swiftly while fostering solutions through mutual understanding.

Provide training for employees on using AI tools. Offer guidance and support where needed to ensure everyone understands the policies and tools in place.

Preparing for the EU AI Act

These guidelines should also be reviewed in light of the upcoming EU AI Act, which aims to regulate the development, deployment, and use of AI systems within the EU. The act categorizes AI applications into four risk levels: minimal, limited, high, and unacceptable. Applications in higher risk categories face stricter requirements.

The act affects:

  1. AI providers and users within the EU.

  2. Providers and users outside the EU, if the AI system's outputs are used within the EU.

  3. Anyone importing or distributing AI systems within the EU.

The regulation imposes obligations not only on AI system providers but also on users, with each actor's responsibilities based on their area of control. Providers carry the broader obligations, such as establishing risk management systems, designing AI to enable human oversight, and training AI according to specific principles. Users, in turn, must monitor their use of AI systems and report significant issues, malfunctions, or risks arising from that use to the provider.

These responsibilities primarily apply to high-risk AI systems, but there are also specific obligations for other AI systems designed for interaction with natural persons, emotion recognition, biometric classification, and systems used for generating or manipulating images, audio, or video.

Founded in 1999, Wapice is a Finnish software company recognized internationally for its excellence. In 2018, we identified AI and analytics as one of our key focus areas. Guided by ISO9001, ISO14001, and ISO27001 certifications, we offer expertise in developing AI strategies and designing and implementing AI solutions tailored to your needs. Let us help you find and build the right solutions for your business together!