The Paradox of AI Approval: Exploring Why 2/3 of Approved Tools Do Not Benefit Employees

Saturday, October 25th 2025

A recent study conducted by Cybernews has revealed concerning statistics about the use of Artificial Intelligence (AI) tools in the workplace. Despite the widespread adoption of AI, with 78% of organizations utilizing it in at least one business function, only 33% of employees find the approved tools to be adequate for their work needs. This has resulted in a surge of shadow AI usage, with 59% of employees using unapproved tools, exposing the gap between IT approval and real-world usability.

According to McKinsey's 2025 State of AI report, the use of generative AI in organizations has risen to 71%. The Cybernews study, however, exposes a failure in execution: while 52% of employers have approved or provided AI tools for their workforce, only 33% of employees using those approved tools say they fully meet their work needs.

This discrepancy has led to a dangerous trend known as “shadow AI,” where employees use unauthorized software and platforms to get their work done, creating a massive blind spot for IT and security leaders. Despite widespread awareness of the dangers of using unapproved tools, 75% of employees share sensitive company and customer data with these unauthorized applications, putting organizations at risk of security breaches.

Žilvinas Girėnas, head of product at nexos.ai, a secure all-in-one AI platform for enterprises, explains why the approval paradox continues despite widespread awareness of security risks. “This isn’t a user problem but a procurement and implementation crisis. We’re approving AI tools on promises and checklists, not on how well they fit work practices. Insufficient tools lead employees to bypass approval, risking customer data on unknown platforms,” he says.

The failure of approved tools to meet employees' needs is creating a conflict between productivity and risk: the company's need for security and control clashes with employees' interest in working efficiently. Employees are rarely acting with malicious intent. They are simply seeking the convenience, speed, and features of AI tools that actually allow them to do their jobs better and faster.

Leadership then faces a dilemma: either block the use of unapproved tools and risk losing a critical productivity edge, or permit their use and lose control over the company's most sensitive data. The potential risks are significant, as 75% of employees admit to sharing potentially sensitive information, including customer data, internal documents, financial records, and proprietary code, when using unapproved AI tools.

Girėnas points out that the “gray zone” exists because having a policy on paper doesn’t mean it is an effective one. Many organizations implement AI policies with just a simple “I acknowledge” checkbox, without providing training, approved tools that work, or ongoing communication on how to apply the rules practically. This lack of understanding and support from leadership leads employees to make their own decisions, resulting in sensitive data being shared on unapproved platforms.

The disconnect between high adoption rates and low employee satisfaction is the direct result of a flawed, top-down implementation strategy common in many organizations. The problem often isn’t the technology itself, but an absence of planning and user involvement. Companies rush to participate in the AI boom and make procurement decisions without a clear understanding of their teams’ day-to-day workflows.

Girėnas outlines four non-negotiables for organizations to build a secure and productive AI ecosystem. These include mapping employee workflows before selecting tools, offering a secure sandbox, implementing a “living” policy with ongoing feedback, and identifying internal AI champions.

Founded in 2024 by Tomas Okmanas and Eimantas Sabaliauskas, who also co-founded the $3B cybersecurity unicorn Nord Security, Oxylabs, and several other bootstrapped global ventures, nexos.ai addresses the urgent enterprise need to efficiently deploy, manage, and optimize AI models within organizations. The company raised its first investment of €8M in early 2025 from Index Ventures, Creandum, Dig Ventures, and a number of prominent angel investors.
