Gartner research vice-president Dennis Xu has half-jokingly suggested banning use of Microsoft's Copilot AI on Friday afternoons — because workers may be too lazy to properly check its potentially problematic output.
Xu offered the advice at the end of a talk titled 'Mitigating the Top 5 Microsoft 365 Copilot Security Risks' at Gartner's Security & Risk Management Summit in Sydney.
The five risks he cataloged include:
Toxic outputs — Copilot producing content that is factually correct but culturally unacceptable in the workplace or among customers.
Oversharing — Copilot surfacing confidential documents in shared contexts without proper access controls.
Prompt injection — Malicious content in shared documents that can manipulate Copilot's behavior.
Remote execution risks — External inputs influencing Copilot's actions in unintended ways.
Data exposure — Sensitive information appearing in AI-generated summaries.
Xu recommended enabling filters, restricting SaaS connections, and training users. But the Friday afternoon ban got the headlines — his point being that workers at that time of week just want to get the job done and won't bother to check for errors.
The recommendation has sparked real debate in enterprise IT. Some argue it's impractical — if AI is integrated into workflows, you can't just turn it off one day a week. Others see it as a reasonable risk mitigation strategy, similar to how companies restrict code deployments on Fridays.
The broader point resonates: AI tools are only as safe as the humans reviewing their output. When attention drops, risk rises.