AI introduces new security risks, including data leakage through public AI tools like ChatGPT, "shadow AI" use by employees without IT oversight, and AI-powered social engineering attacks. The Mimecast State of Human Risk Report found that 81% of organisations are concerned about sensitive data leaking via generative AI tools. Businesses need an AI governance policy that defines acceptable AI use, prevents sensitive data from being entered into AI systems, and mandates staff training on AI-specific risks. Mercury IT provides AI governance consulting, helping organisations develop policies and technical controls that enable AI innovation while maintaining security and privacy compliance.
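One technical control of the kind described above is a pre-submission filter that scans text for sensitive patterns before it is sent to an external AI service. The sketch below is illustrative only, not Mercury IT's implementation: the pattern list, the assumed IRD-number format, and the placeholder style are all hypothetical, and a production control would use a proper data loss prevention tool rather than a few regexes.

```python
import re

# Illustrative patterns only -- a real deployment would cover far more
# data types (names, addresses, API keys) with vetted detectors.
PATTERNS = {
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "credit_card": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "ird_number": re.compile(r"\b\d{2,3}-\d{3}-\d{3}\b"),  # assumed NZ IRD format
}

def redact_sensitive(text: str) -> tuple[str, list[str]]:
    """Replace sensitive matches with placeholders before the text
    leaves the organisation; return the redacted text and the labels
    of the pattern types that were found."""
    hits = []
    for label, pattern in PATTERNS.items():
        if pattern.search(text):
            hits.append(label)
            text = pattern.sub(f"[REDACTED:{label.upper()}]", text)
    return text, hits

redacted, found = redact_sensitive(
    "Contact jane.doe@example.com, card 4111 1111 1111 1111."
)
```

In practice a filter like this would sit in a proxy or browser extension between staff and the AI tool, logging hits so the security team can see what almost leaked, which supports both the policy enforcement and the staff-training goals mentioned above.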