Unauthorized AI Tools Pose Risks for Businesses Worldwide

Editorial

A growing number of employees are using unauthorized artificial intelligence (AI) tools to boost their productivity, a trend that is raising serious concerns among businesses about data security and intellectual property. The use of these unapproved platforms, known as "shadow AI," can inadvertently expose sensitive company information and open the door to cyberattacks.

According to Kareem Sadek, a partner in the consulting practice at KPMG in Canada, businesses often lag in adopting the latest technologies. This delay prompts employees to seek third-party AI solutions for convenience and efficiency. “Shadow AI often seeps in when users are looking for convenience, speed, and intuitiveness,” Sadek said.

The implications of using unauthorized AI tools can be severe. Robert Falzon, head of engineering at Check Point Software Technologies Ltd., highlighted the risk of disclosing confidential data. For instance, employees may inadvertently share proprietary research or financial statements on unapproved chatbots, leading to potentially serious consequences. “There’s a chance that the AI might dig back into its resources and training and find that piece of information about your company that talks about the results… and just nonchalantly provide that to that person,” Falzon explained.

A report from IBM and the Ponemon Institute in July found that 20 percent of companies surveyed experienced data breaches due to incidents involving shadow AI. This figure is notably higher than the 13 percent reporting breaches linked to sanctioned AI tools. The average cost of a data breach in Canada rose to $6.98 million between March 2024 and February 2025, a significant increase from $6.32 million the previous year.

To address these challenges, experts advocate for establishing governance around AI usage in the workplace. Sadek emphasized the importance of developing an AI committee that includes members from various departments, such as legal and marketing, to oversee tool usage and encourage safe adoption. Implementing a framework that aligns with company ethics can help address concerns about data integrity and security.

One approach discussed by Falzon is the adoption of a zero-trust mindset, which involves not trusting any devices or applications that are not explicitly sanctioned by the company. This strategy can significantly mitigate risks associated with unauthorized tools. For example, Check Point restricts employees from inputting sensitive research and development data into chatbots. “That’s going to help make sure that customers are both educated and understand what risks they take,” Falzon noted.
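The kind of restriction Falzon describes is often enforced with an outbound-prompt screen that blocks requests containing sensitive material before they reach an external chatbot. The sketch below is purely illustrative, not Check Point's actual implementation; the pattern list and function name are hypothetical, and a real deployment would use a proper data-loss-prevention system rather than a handful of regexes.

```python
import re

# Hypothetical examples of markers a company might treat as sensitive.
# A real policy would be far more extensive and centrally managed.
SENSITIVE_PATTERNS = [
    re.compile(r"\bconfidential\b", re.IGNORECASE),
    re.compile(r"\b(?:R&D|research and development)\b", re.IGNORECASE),
    re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),  # SSN-like identifiers
]

def screen_prompt(prompt: str) -> bool:
    """Return True if the prompt appears safe to forward to an
    external AI tool, False if it matches a sensitive pattern."""
    return not any(p.search(prompt) for p in SENSITIVE_PATTERNS)

print(screen_prompt("What is the capital of France?"))         # True
print(screen_prompt("Summarize our confidential Q3 results"))  # False
```

In a zero-trust setup, a check like this would sit in a gateway between employee devices and any unsanctioned endpoint, so that nothing leaves the network without passing policy.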

Raising employee awareness of the risks posed by unauthorized AI tools is equally crucial. Sadek suggested that hands-on training sessions can help employees understand those risks and foster accountability. "It significantly reduces the use or holds the users or employees accountable," he stated.

Some companies are taking proactive measures by deploying their own internal chatbots. Sadek described this as a smart strategy to counter unauthorized AI usage, ensuring that data is kept secure and within the organization’s established guidelines. Nevertheless, the deployment of internal tools does not completely eliminate cybersecurity risks.

Researcher Ali Dehghantanha conducted a cybersecurity audit for a Fortune 500 company, where he was able to breach their internal chatbot and access sensitive client information in a mere 47 minutes. “Because of its nature, it had access to quite a number of company internal documents,” Dehghantanha said. He noted that many organizations, including major banks and law firms, rely heavily on internal chatbots for communication and advice, yet many lack adequate security measures.

As organizations look to adopt AI technology, they must consider the total cost of ownership, which includes security measures. “For any technology, always consider the total cost of ownership,” Dehghantanha advised. “One part of that cost of ownership is how to secure and protect it.”

As the use of AI tools becomes more prevalent, businesses face a dual challenge: empowering employees with the necessary resources while safeguarding against potential data leaks. Falzon remarked that companies can no longer prevent staff from using AI; instead, they need to provide approved tools that minimize data leakage risks while maximizing productivity.

This evolving landscape requires a careful balance between innovation and security, as organizations strive to navigate the complexities introduced by shadow AI.
