Overcoming the Top AI Challenges in Cybersecurity

AI in Cybersecurity

The rise of AI has made waves in the technology industry and the world as a whole. All of the big tech companies have pivoted their efforts toward AI and are creating tools and platforms to enable businesses to utilize generative AI in their daily operations. Many economists project AI to be the next driver of significant economic growth. Tools like ChatGPT are now considered mainstream and are used by individuals on a daily basis. The field of cybersecurity is not immune to the AI wave; in fact, it stands to experience one of the largest transformations due to AI. Unfortunately, if you’ve been in the cybersecurity field for a significant period of time, you already know that AI will present significant challenges going forward.

We are already seeing the effects of AI in cybersecurity. Vendors have branded all of their existing offerings with the letters “AI” in an attempt to keep up with the trend and appear more forward-thinking. Any new cybersecurity offerings will no doubt incorporate AI in some form. Employees in every organization are also already using AI tools to streamline their workflows. While innovation is great, we as cybersecurity professionals are now expected to understand, implement, and secure AI within our respective organizations. Unfortunately, what makes this more challenging than other new technologies is that we’ve effectively been told to adopt it overnight.

The cybersecurity field will undoubtedly adapt to AI. NIST has already released its AI Risk Management Framework, which outlines how AI works and how to best secure it. Cybersecurity and AI leaders are identifying the various risks and threats that AI can pose and determining how to best handle them. Cybersecurity tools will become more capable of analyzing large amounts of data and recommending how to best mitigate active threats. Regardless of the technological advancements, governance, risk, and compliance (GRC) teams will have their hands full determining how to best secure AI within their own workplaces. Likewise, cybersecurity professionals of all specialties, blue team and red team alike, will have to come up to speed on how to best utilize and secure it. Here are some of the top challenges of AI in cybersecurity:

Fake AI Solutions

One of the biggest challenges of AI in cybersecurity, if not the biggest challenge for technology teams in general, will be finding solutions that actually use AI. This may sound silly, but not all AI is created equal. Artificial intelligence can be defined as “the theory and development of computer systems able to perform tasks that normally require human intelligence”. However, depending on who you ask, it can also be defined as “any system that appears to make intelligent decisions”. The second definition allows far more flexibility in how the term “artificial intelligence” can be applied. Technically, any Python script with an if/else statement can appear to be making intelligent decisions, which would classify it as AI. The same logic applies to effectively any software offering, as they all technically “make intelligent decisions” in some sense. Knowing this, weeding through the many “AI” solutions will be a tough challenge for cybersecurity teams. Here are some questions to ask vendors who advertise that their offerings utilize artificial intelligence:

  1. Can you elaborate on how this offering utilizes artificial intelligence?

  2. Can you describe how the models that this offering uses are trained?

  3. Do the models use basic statistical algorithms or do they use deep learning algorithms?

  4. How do we feed the offering data?

  5. Do the models get better as they’re used over time?
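To see how easy it is to dress up plain logic as AI, consider this hypothetical “AI-powered” alert triager: it is nothing more than the if/else chain described above. The field names and thresholds are invented for illustration, not taken from any real product.

```python
# A hypothetical "AI-powered" alert triager that is really just an
# if/else chain -- exactly the kind of logic that can be rebranded as AI.

SUSPICIOUS_COUNTRIES = {"XX"}  # placeholder set, invented for this sketch

def ai_triage(alert: dict) -> str:
    """Return a severity label for a security alert."""
    if alert.get("failed_logins", 0) > 100:
        return "critical"  # looks like a brute-force pattern
    elif alert.get("source_country") in SUSPICIOUS_COUNTRIES:
        return "high"
    elif alert.get("after_hours", False):
        return "medium"
    else:
        return "low"  # no model, no training data, no learning

print(ai_triage({"failed_logins": 500}))  # -> critical
print(ai_triage({}))                      # -> low
```

Nothing here is trained, nothing improves over time, and nothing generalizes beyond its hard-coded rules, yet a demo of this script could plausibly be marketed as “intelligent decision-making.” The questions above are designed to surface exactly this distinction.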

Asking these kinds of questions will at the very least force vendors to substantiate their claims about how their offerings utilize AI. Unfortunately, you can still be tricked into purchasing a product that doesn’t perform as advertised. This leads to the next AI challenge in cybersecurity:

Lack of Expertise

As previously mentioned, AI’s arrival in cybersecurity effectively happened overnight. One day we’re worried about the latest APT group and ransomware attacks, and the next we’re seeing how AI can help us with all things cybersecurity. QFunction is a proponent of AI and believes it will make our jobs and lives easier as we become more comfortable with it, but it’s no secret that there’s a big learning curve when it comes to artificial intelligence in general. From a cybersecurity point of view, we know that AI can supposedly decrease incident response times and enhance threat detection capabilities, but the means by which it does so are unknown to many. In fact, we traditionally tend to depend on our tools to do all of that for us, which is why it’s easy to fall victim to fake AI systems that don’t do what they’re supposed to do. Cybersecurity personnel need to get up to speed on AI to combat this kind of misinformation, which raises the following question: what is the most efficient way to teach AI to cybersecurity professionals?

The bad news is that there’s no shortcut to learning AI. A true understanding of it involves programming and math, which may not be every cybersecurity professional’s cup of tea. Additionally, certifications and degrees in AI can cost an arm and a leg depending on the program. The good news is that as AI tooling continues to advance, you won’t need to know its internal workings to be effective, and AI’s use in cybersecurity will be no different. However, QFunction believes that cybersecurity professionals who understand the theory behind AI will stand head and shoulders above those who don’t. It’s similar to the difference between those who depend on a user interface to do their work and those who can program custom solutions to their problems. Knowing the theory will simply make you better at your job and allow you to call out misinformation when you see it. Therefore, if you have the means to get a legitimate certification in AI from a respected institution, you should definitely do it.
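To make “the theory” less abstract: a surprising amount of what gets marketed as AI-driven threat detection reduces to statistics you can learn. Here is a minimal sketch, using made-up daily failed-login counts, that flags anomalies with a simple z-score test; real products use far more sophisticated models, but the underlying idea of “flag what deviates from a learned baseline” is the same.

```python
import statistics

# Hypothetical daily failed-login counts for one account (illustrative only).
daily_counts = [3, 5, 4, 6, 2, 4, 5, 3, 4, 97]

# Build a baseline from the observed data.
mean = statistics.mean(daily_counts)    # 13.3
stdev = statistics.stdev(daily_counts)

# Flag any day more than 2 standard deviations from the mean.
anomalies = [c for c in daily_counts if abs(c - mean) / stdev > 2]
print(anomalies)  # -> [97]
```

Understanding even this much lets you ask a vendor pointed questions: what is your baseline, how is it updated, and what makes your model more than a threshold check?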

Assuming you can get the AI education, you can then serve as the subject matter expert on your cybersecurity team and propel your team toward more advanced cybersecurity use cases. That said, you’ll still face hurdles in deploying AI within your organization, which leads to one of the bigger challenges of AI in cybersecurity:

Regulatory Compliance

Having knowledge of AI will put you ahead of your peers in cybersecurity going forward. However, it’s one thing to know how to use it; it’s another to know how to regulate and secure it in your organization. AI is already in heavy use within some organizations, and many businesses even have workflows centered around it. This inherently introduces risk, as you may not know what happens to the data supplied to AI tools. Depending on your business, you may even create your own AI offerings, which comes with its own set of risks: how the training data was obtained, how the AI was audited for bias and fairness, whether the predictions the AI makes are explainable, and so on. This is just a sample of the issues that can arise around regulatory compliance; more can be found in the NIST AI Risk Management Framework.

How to Overcome AI Challenges in Cybersecurity

Overcoming these AI challenges in cybersecurity will take time, as AI is still new and rapidly evolving. Here are some recommendations for cybersecurity teams looking to better understand how to secure AI in their organizations:

  1. Upskill existing cybersecurity teams with AI education and training

  2. Work directly with other teams in the organization that are utilizing AI to better understand how it’s being used

  3. If within budget, recommend that the business hire a Chief Artificial Intelligence Officer who can set the direction of AI in the organization

  4. Ensure that all teams utilizing AI (not just cybersecurity teams) are familiar with the NIST AI Risk Management Framework

Following these recommendations should put your team in a better position as AI continues to evolve. If you’re interested in seeing how AI can help your cybersecurity team perform better threat detection without investing in entirely new tools, check out QFunction’s AI-based threat hunting! If you’re interested in better securing specific users or systems in your environment, check out QFunction’s targeted user behavior analytics solutions! And if you’re curious how AI can find anomalies in your logs, check out our post on threat hunting on Linux systems!
