Shadow AI in the ‘dark corners’ of work is becoming a big problem for companies
  • When employees move information in and out of an organization with new AI tools, IT gatekeepers often don't see the unsanctioned data flows until it's too late.
  • This is known as shadow AI, and it poses threats that information leaders are trying desperately to rein in.
  • But outright bans on AI tools aren't the answer either, experts say. A better approach combines guardrails and education.

Amid the growing hype and usage of artificial intelligence, uncontrolled use that falls outside the jurisdiction of IT departments is something information leaders are trying desperately to rein in.

Known as shadow AI, this is the AI usage within a company that occurs "in dark corners," said Jay Upchurch, CIO of data analytics platform SAS. "They inevitably pop up in terms of either importance because they were successful or pop up in terms of importance because there was a security issue."

Shadow IT is nothing new, and shadow AI is the latest iteration of the phenomenon. "We have this human nature of autonomy and authority," said Tim Morris, chief security advisor at cybersecurity firm Tanium, who has years of experience in offensive security and incident response. "Any time you grow an organization, different people will create their fiefdoms."

The problem is that shadow AI is more complex, and more dangerous, than shadow IT was in the past.

Inflated risks, sensitive information leaks

Governance and security are major concerns with shadow AI, raising questions such as whether confidential IP is leaving your controls for a publicly available large language model, whether you're infringing on copyright, and whether you're giving away personally identifiable information about your customers.

Another risk: Software developers can unintentionally help hackers create malware based on the very code they have entered into AI tools. "When you're a smaller company, the risks are greater," said Ameer Karim, executive vice president and general manager of cybersecurity and data protection at ConnectWise. He added that these organizations must also worry about AI hallucinations and inaccuracies, as most are using the free version of ChatGPT, based on GPT-3.5, or a similar tool, which only includes data trained through January 2022.

Companies including Samsung have experienced sensitive information leaks, and Microsoft has had temporary security issues, as a result of generative AI deployment. While allowing time for creative tinkering has been shown to be an effective way to increase innovation within an organization, experts and anecdotes alike suggest that allowing free rein isn't the solution.

On the other end of the spectrum, Morris said, "Prohibition never works, by the way." Not only do people fail to adhere to prohibition, he says, but it's a surefire way to alienate good talent. "If you want to keep good talent, all you have to do is set the boundaries," he said.

Morris's experience managing offensive cybersecurity teams has shown him the lengths to which creative people will go to do what they want to do. "It's like managing the cast of Ocean's 11," he said.

One way Morris enables creativity in a controlled environment is through an annual competition in which competitors pitch and demo their creations.

As for why employees use unsanctioned AI services, even when guardrails have been clearly set, Mike Scott, CISO of Immuta, said, "Most shadow AI violations will not be malicious in intent."

Remote users and cloud-based concerns

Education on the risks of shadow AI and on the right channels for getting tools approved helps, but it only goes so far. "An endpoint security tool is the most feasible and scalable answer to the problem," said Scott. He says the threat of shadow AI is greatest with remote users and cloud-based AI platforms, and that technologies like cloud access security brokers can address both concerns.
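Real endpoint and CASB products implement this kind of policy at enterprise scale, but the underlying check is simple to illustrate. Below is a minimal Python sketch that flags outbound requests to known public AI services while allowing a sanctioned company endpoint; the domain lists, log format, and function names are hypothetical placeholders, not any vendor's API.

```python
# Sketch of a CASB/endpoint-style check: compare outbound destinations
# against sanctioned vs. known public AI services. Domains are placeholders.
from urllib.parse import urlparse

SANCTIONED_AI = {"mycompany.openai.azure.com"}  # company-approved endpoint (hypothetical)
PUBLIC_AI_SERVICES = {"chat.openai.com", "api.openai.com", "gemini.google.com", "claude.ai"}

def classify_request(url: str, user: str) -> str:
    """Label an outbound request as allowed, shadow AI, or ordinary traffic."""
    host = urlparse(url).hostname or ""
    if host in SANCTIONED_AI:
        return "allowed"
    if host in PUBLIC_AI_SERVICES or any(host.endswith("." + d) for d in PUBLIC_AI_SERVICES):
        return f"shadow-ai: {user} -> {host}"  # surface an alert rather than silently block
    return "ordinary"

# Example: a proxy log entry from a remote user
print(classify_request("https://chat.openai.com/backend/conversation", "jdoe"))
```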

Karim recommends adopting tools with built-in privacy and security features, such as Microsoft's Azure OpenAI Service, which lets companies control what data is uploaded and what stays private.
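To make that concrete, here is a minimal sketch using the official openai Python package pointed at a private Azure OpenAI deployment rather than the public ChatGPT service; the endpoint URL, deployment name, and environment variable are assumptions for illustration, not values from the article.

```python
# Routing employee AI use through a company-controlled Azure OpenAI
# deployment. Endpoint, deployment name, and env var are placeholders.
import os
from openai import AzureOpenAI  # openai package, v1+

client = AzureOpenAI(
    api_key=os.environ["AZURE_OPENAI_API_KEY"],
    api_version="2024-02-01",
    azure_endpoint="https://mycompany.openai.azure.com",  # your private resource
)

# Prompts sent here stay within the company's Azure tenant, under its data
# and retention policies, instead of a consumer service's terms.
response = client.chat.completions.create(
    model="gpt-35-turbo",  # name of your deployed model (not the base model)
    messages=[{"role": "user", "content": "Summarize this internal memo..."}],
)
print(response.choices[0].message.content)
```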

Upchurch also recommends keeping tabs on where and how much data is flowing within your organization. "You're going to use your normal security fencing to detect and see that's an abnormal amount of data, or that's going to a place that you don't necessarily trust or want it to go to," he said.
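As a toy illustration of that kind of volume check, the sketch below flags a day whose outbound data volume sits far above a user's historical baseline; the numbers are synthetic stand-ins for real egress logs, and production tools (DLP, SIEM) use much richer signals than a single threshold.

```python
# Toy "security fencing": alert when egress volume is abnormally high.
import statistics

daily_egress_mb = [12, 15, 9, 14, 11, 13, 10]  # past week, per user (synthetic)
today_mb = 480  # e.g., a codebase pasted into an external AI tool

mean = statistics.mean(daily_egress_mb)
stdev = statistics.stdev(daily_egress_mb)
z = (today_mb - mean) / stdev  # standard deviations above baseline

if z > 3:
    print(f"ALERT: {today_mb} MB egress is {z:.1f} sigma above normal; review destination")
```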

While most organizations fall into the category of controlled allowance, Upchurch says there are exceptions. Highly sensitive operations, such as the defense contractors among his information-leader peers, typically fare better with an outright ban, he says. "At that point, you can't really trust your employees because of the sensitivity of what you're dealing with," Upchurch added.

But that's a small sliver of the industry, and the vast majority will fare better with a combination of policies, education, and a balance of offensive and defensive security strategies. In the meantime, it's not hard to see why individual AI solutions are exciting for engineers. "I used to spend a week perfecting a script, which an AI can do in three minutes," Morris said.

Ultimately, Upchurch emphasizes that while shadow AI is very real, so is AI itself. If you don't embrace it, he said, "Your neighbor is going to come in as a competitive threat and take your lunch money."
