Generative artificial intelligence (“GenAI”) tools have become an increasingly prominent feature of modern working environments. The speed and efficiency gains offered by GenAI tools have led many organizations to integrate them into various stages of their business processes.
Against this background, the Turkish Personal Data Protection Authority (“Authority”) published guidelines titled “Use of Generative Artificial Intelligence Tools in the Workplace” on 5 March 2026 (“Guidelines”). The Guidelines aim to provide a general framework for the use of publicly accessible GenAI tools offered by third parties in the workplace, to raise awareness among companies, institutions, and organizations, to draw attention to potential risks, and to promote informed use.
We assess the key themes of the Guidelines below within the framework of Turkish data protection law.
I. Definition of GenAI and Use Cases
The Guidelines define GenAI as “artificial intelligence systems trained on large-scale datasets capable of generating original content in various formats, including text, images, video, audio, or software code, in response to user prompts”. The Guidelines further note that tools built on these systems are transforming the way business processes are carried out across a wide range of sectors, including customer services, marketing and advertising, education, healthcare, law, and software development.
Moreover, it is highlighted that GenAI tools may offer considerable efficiency gains, particularly by automating repetitive tasks and enabling employees to focus on higher value-added activities. At the same time, the use of such tools is no longer confined to specific industries or professional groups and continues to expand across different fields.
The Guidelines also observe that the use of GenAI tools is often driven by individual employee initiative rather than by a clearly defined corporate strategy or governance framework.
II. Uncontrolled Use in the Workplace (Shadow AI)
The Guidelines address the concept of Shadow Artificial Intelligence (“Shadow AI”) and the risks associated with it. Shadow AI refers to the use of GenAI tools in workplace processes by employees without the knowledge, approval, or oversight of the relevant organization.
For many organizations, Shadow AI has moved beyond being a theoretical risk and has become a reality encountered within everyday workflows and digital working environments. As GenAI tools become more widespread, employees are increasingly inclined to incorporate them into business processes; in doing so, they may share meeting notes, internal correspondence, report drafts, information about employees or customers, and various non-public organizational data with third-party GenAI tools.
Considering the ease of access to GenAI tools, the Guidelines indicate that employee adoption may increase rapidly in the near future. Widespread use of Shadow AI may therefore significantly complicate organizational risk management in terms of (i) auditability and accountability, (ii) decision quality and accuracy, (iii) protection of intellectual property and trade secrets, (iv) corporate reputation and trust, (v) information security and cybersecurity, and (vi) personal data protection, which in turn increases the need for further governance within organizations.
III. Awareness and Recommendations Regarding Use of GenAI
As GenAI tools are increasingly relied upon in business processes, organizational approaches to their use need to be revisited. In this context, the Guidelines note that a complete prohibition on the use of GenAI tools in business processes is unlikely to produce effective results in practice, and that restrictive approaches may instead drive employees toward using such tools outside organizational visibility. Rather than prohibitive approaches, the Guidelines recommend adopting guidance-, balance-, and awareness-based approaches to the use of GenAI tools in the workplace.
The Guidelines also note that some organizations’ policies or guidelines may permit the use of publicly accessible GenAI tools for idea generation, linguistic review of texts, or summarizing publicly available content, provided that personal data, trade secrets, or organizationally sensitive information are not shared. Conversely, use cases involving the sharing of information such as customer files, human resources data, or internal correspondence may be deemed impermissible under such policies. Accordingly, there is no single uniform approach applicable to all organizations regarding policies or guidelines on the use of publicly accessible GenAI tools offered by third parties, and the scope and limits of permissible use cases may vary.
In this context, the Guidelines also highlight the following precautions regarding the use of GenAI in the workplace:
a. Establishing a clear and accessible AI policy that defines the limits of appropriate GenAI use. Such a policy should typically address:
- The GenAI tools that may be used within the organization; the activities and conditions under which they may be used; the types of information that may be provided as inputs; principles governing the use of outputs; and data confidentiality and security considerations.
- Categorization of organizationally sensitive information and personal data, as sharing such information with third-party GenAI tools may result in that information being processed outside organizational controls. Particular caution is warranted when using GenAI tools in connection with sensitive categories of information such as health data, financial information, and information relating to legal proceedings.
- The terms of controlled access to GenAI tools designated by the organization, as well as possible role-based approaches to determine which employee groups may use these tools.
- Additional measures such as implementing network-level access controls for external platforms, determining which devices may be used to access GenAI tools (e.g., only corporate devices).
Businesses are also recommended to take into consideration the guide prepared by the Authority titled “Generative Artificial Intelligence and the Protection of Personal Data Guide (15 Questions)” with respect to data privacy aspects when drafting their internal regulations.
b. Prioritizing employee adoption and training on the safe use of GenAI tools. Clear communication and guidance on the organization’s GenAI policies, supported by periodic updates as those policies evolve, is critical, as standalone policies may not be sufficient to raise awareness within the organization. Regular training should be conducted to raise awareness of the technical and legal risks that may arise from the use of these tools, and feedback mechanisms should be established to allow employees to share their experiences and raise concerns.
c. Employees should also be made aware of the risk of over-reliance on GenAI-generated outputs (automation bias), as well as the possibility of outputs that appear convincing but do not correspond to reality (hallucinations), given that users may tend to accept AI-generated content without adequate scrutiny.
Overall, as also detailed under the Guidelines, the approach to the use of GenAI tools in the workplace should be addressed with awareness of the risks that may arise and in light of applicable legal obligations. A holistic approach will support the predictable and responsible use of GenAI tools in the workplace, while also helping to prevent risks arising from uncontrolled use.
The full text of the Guidelines is available here. (Only available in Turkish.)