The tech industry is predicting, and in some cases all but promising, that Artificial Intelligence (AI) will take over the world and change every aspect of our lives. With AI so prevalent, it's important for those of us in the security community to understand what it can and cannot do. In this article we will cover: 

  • AI of Today Versus AI of the Past 
  • What AI Can Do 
  • Current AI Limitations 
  • Threats That Come with AI and How to Control Them 

AI of Today Versus AI of the Past 

For decades, AI was the name we gave to machines with reasoning abilities (like Star Trek's Lieutenant Commander Data or Portal's GLaDOS). While they were purely fictional at the time, we imagined that kind of intelligence might someday be possible. Fast forward to today, and we see AI at the forefront of technology. But what exactly does AI mean in practical terms? 

The AI of today (more accurately defined as Machine Learning (ML)) has roots dating back to 1913, when mathematician Andrey Markov applied statistical models to predict sequences of text. When the general population says "AI," they are referring to programs that "learn" from data to calculate predictions. AI doesn't actually think or reason the way humans do; it operates on probabilities. Despite marketing and media making it sound like AI has human capabilities, AI remains a tool with distinct limitations. It can, for instance, help you write an email, but a single AI system won't be cleaning your house while doing your taxes anytime soon! 

What AI Can Do 

There are several AI tools in the marketplace right now helping us solve problems in record time. For example, these tools can: 

  • Generate: Text, images, video, audio, etc. 
  • Analyze: Statistical analysis, anomaly detection, fraud detection, behavior trends, etc. 
  • Predict: Identify what is likely to happen through forecasting 
  • Summarize or Translate: Document summarization, language translation, sentiment analysis, etc. 
  • Monitor: Monitor systems, automatically respond to attacks, etc. 
  • Develop: Code authoring, review, optimization, debugging, etc. 
  • Converse: Chat with customers, prospects, employees, etc. 

If AI can do all that, what can it not do? 

Current AI Limitations 

While powerful, AI does have its limitations. 

Limited Attention: Large language models (LLMs) can only process a certain number of tokens (chunks of words and symbols) at a time. Exceeding this limit can lead to errors or poor-quality responses. 
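As a rough illustration of working within a token budget, here is a minimal sketch that trims a prompt to fit a context window. It assumes a naive whitespace tokenizer; real LLMs count tokens with model-specific tokenizers (such as a BPE tokenizer), so actual counts will differ:

```python
def trim_to_token_budget(text: str, max_tokens: int) -> str:
    """Keep only the last max_tokens tokens of a prompt.

    Whitespace splitting stands in for a real tokenizer here,
    which would count tokens differently.
    """
    tokens = text.split()
    if len(tokens) <= max_tokens:
        return text
    # Keep the most recent tokens; older context is dropped.
    return " ".join(tokens[-max_tokens:])

prompt = "word " * 5000  # far beyond a small model's context window
trimmed = trim_to_token_budget(prompt, max_tokens=4096)
print(len(trimmed.split()))  # 4096
```

Dropping the oldest content first is only one strategy; summarizing older context before trimming is another common approach.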

Limited Speed: AI hardware's ability to perform trillions of operations per second (TOPS) is impressive, but neural networks require a lot of processing power, and the power needed can limit the outputs possible. Check out this article discussing how it takes about one bottle of water to cool the data center hardware that generates a single 100-word email! 

Limited Creativity: AI models depend on clean initial datasets, and some models need human supervision to ensure new learning is not inaccurate or harmful. Models themselves don't inherently know whom to trust; they simply generate output based on their programming and the data they've been given. It's essential for someone to guide AI systems, teaching them what is right and wrong. Generative AI relies on us mere humans to keep improving it! 

Ultimately, AI simulates human capabilities (like learning, reading, and problem-solving), but it doesn't replicate them. AI still needs human input and direction to function effectively. And as history has shown us, new technology doesn't eliminate jobs; it shifts them. Consider how we once thought the typewriter, and later the computer, would give us so much free time. AI will certainly take some jobs, but it won't take all of them. Humans will continue to adapt and create jobs that integrate with the new technology.

Threats That Come with AI and How to Control Them 

As AI becomes more accessible, criminals are quickly finding ways to exploit its capabilities. Here are some key threats posed by AI and the controls to combat them. 


Faster / Smarter Social Engineering

This is where generative AI can research specific targets, such as key individuals in an organization, and customize phishing messages to make them hyper-personalized. Generative AI can then be used to automate attacks at scale. 

User Training 

To combat this faster, smarter social engineering, train your employees on what to look for. Send them phishing simulations and focus your training on verification steps and the prevalence of AI-generated content. Make them aware of the tactics criminals will use. 

Faster / Smarter Malware 

This is where attackers can use AI to identify vulnerabilities by scanning network configurations. AI can auto-generate attack vectors to exploit specific vulnerabilities. Additionally, since AI can make rapid, real-time changes, it enables attackers to deploy evasive tactics that avoid detection (e.g., mimicking legitimate network activity, adapting movement patterns to blend in, etc.). 

Monitoring 

To combat faster and smarter malware, the first step is to know what is normal on your systems. Establish baseline behavior and ensure your security operations center (SOC) is watching for deviations from that baseline. Automate security tasks such as implementing patches. Finally, put your setup to the test: conduct regular IT audits and penetration tests. 
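As a sketch of the baseline idea, the snippet below flags a metric that drifts several standard deviations from its historical norm. The metric (hourly failed-login counts), data, and threshold are all illustrative assumptions; a real SOC tool would use far richer signals:

```python
import statistics

def is_anomalous(baseline: list, observed: float, threshold: float = 3.0) -> bool:
    """Flag an observation that deviates more than `threshold`
    standard deviations from the historical baseline."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    if stdev == 0:
        return observed != mean
    return abs(observed - mean) / stdev > threshold

# Hourly failed-login counts over a quiet week (illustrative data)
baseline = [4, 6, 5, 7, 5, 6, 4, 5, 6, 5]

print(is_anomalous(baseline, 6))   # False: within normal range
print(is_anomalous(baseline, 60))  # True: likely worth an alert
```

The point is not the arithmetic but the workflow: you cannot flag "abnormal" until you have measured "normal."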

Adversarial AI/ML

This is where attackers aim to make model output inaccurate and/or harmful. Poisoning attacks alter your training data, evasion attacks alter your input data, and model tampering attacks alter your model structures.

In-House Testing and Maintenance OR Outsourced Vendor Management 

To combat adversarial AI/ML, if you are running in-house systems using AI, you will need to do testing and maintenance. In most cases, you are using AI tools through vendors, which means it's time for vendor management. Check out Tandem's Artificial Intelligence (AI) Vendor Review Checklist for what to consider when reviewing these vendors. 

Privacy/Confidentiality 

This is a concern with the collection and storage of proprietary information. Employees may (intentionally or unintentionally) upload company information, which could include confidential or private data, to AI systems. 

Policies and Procedures 

Combat this threat with policies and procedures designed to ensure proprietary information is not disclosed to AI tools. Policy considerations may include input limits, data anonymization, and user training. 
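As one sketch of what data anonymization can look like in practice, the snippet below redacts a few common sensitive patterns before text leaves the organization. The patterns and placeholder labels are illustrative assumptions; production data-loss-prevention tools cover many more formats:

```python
import re

# Illustrative patterns only; real tools recognize many more formats.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "CARD": re.compile(r"\b(?:\d{4}[ -]?){3}\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace sensitive values with placeholders before the text
    is pasted into an external AI tool."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label} REDACTED]", text)
    return text

msg = "Contact jane.doe@example.com, SSN 123-45-6789."
print(redact(msg))
```

Automated redaction is a backstop, not a substitute for the user training and input limits described above.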

In addition to these specific controls, institutions must continue to rely on their Incident Response Programs to manage the inevitable incidents. 

Conclusion 

As AI continues to evolve, it presents both opportunities and challenges. While it's not a perfect system and won't likely replace human intelligence, it has the potential to enhance many areas of our lives. With that power comes the responsibility to manage risks, maintain security, and ensure AI is used ethically. 

In the end, AI is a tool, and like any tool, its impact will depend on how we choose to use it. 

To learn more about managing AI third-party relationships, check out Tandem Vendor Management. Tandem offers a simplified and streamlined interface, designed to organize your vendor management program. Learn more at Tandem.App/Vendor-Management-Software