Business Insights: Global Markets, Strategy & Economic Trends
- Employees are quietly using personal ChatGPT and Claude accounts without telling IT or compliance.
- Shadow usage creates security, privacy, and regulatory risks for sensitive data.
- Staff run consumer AI on personal laptops alongside locked corporate PCs, exposing unmet internal AI demand.
- Organizations need controlled access, clear policies, and robust governance to safely harness employee-driven AI use.
Many corporate gen AI programs fail, yielding clunky tools, slow rollouts, and unimpressive results. Meanwhile, a hidden revolution is taking place inside most large organizations. Employees, frustrated by cumbersome corporate tools or a lack of access, are quietly using personal ChatGPT, Claude, and other consumer AI models on the side, often without telling IT or compliance. An official at a large central bank reported to one of us that when employees work on their secure, no-AI, bank-issued PCs, they often have their personal laptops open to the home page of their favorite large language model.