When companies fail to create the right incentives around artificial intelligence, employees respond rationally: they hide their AI use. Ethan Mollick, associate professor at the Wharton School, identifies this as one of the most consequential failures of current corporate AI strategy.
Mollick described the phenomenon in The Economist, "Hvylya" reports, warning that the resulting information gap blinds managers to AI's true impact inside their own organizations.
The motives for hiding vary. Some employees fear punishment for using tools their company has not officially approved. Others do not trust that productivity gains will be shared with them rather than captured entirely by the firm. And some, Mollick wrote, "quietly work 90% less and see no reason to volunteer that information."
The result is an enormous blind spot at the managerial level. Leaders cannot measure what they cannot see, and invisible AI usage makes it nearly impossible to develop a coherent strategy. Companies end up relying on vendor promises and industry hype instead of data from their own workforce.
Mollick tied this problem directly to the broader corporate impulse to normalize AI. When organizations treat the technology as just another software rollout - with compliance targets and IT oversight - they create exactly the conditions that drive usage underground. Employees game metrics by producing what Mollick called "workslop": meeting transcripts and unnecessary memos that satisfy usage requirements without generating real value.
