Exploring the Potential and Limitations of Artificial Intelligence in Cybersecurity
Artificial intelligence (AI) has rapidly emerged as a powerful tool in the field of cybersecurity, with potential applications ranging from advanced threat detection to the automation of repetitive tasks. While the benefits of AI in cybersecurity are clear, it is important to carefully consider the limitations and weaknesses of this technology in order to maximize its potential while minimizing any negative impacts.
In this article, we will explore the potential and limitations of AI in cybersecurity, highlighting important questions such as: What does it have to offer? What are its current weaknesses? Is AI here to replace the current cybersecurity framework? By examining these questions, we can gain a better understanding of how to integrate and innovate with AI in cybersecurity in a responsible and effective manner.
What does it have to offer?
Big tech companies like Facebook and Instagram are most likely using AI to fight spam and phishing more effectively, or are at least looking for ways to implement the technology. The need is evident in the very common attacks targeting public accounts.
For example, a frequent case affecting many users is the creation of fake social media profiles impersonating men and women indiscriminately. The attacker creates a fake page impersonating someone in order to run phishing attacks against all of the target's followers. These attacks can be incredibly harmful: victims can wake up to discover a fake profile with an OnlyFans link, complete with fake images of them. Trust me, not a fun experience.
Let's take Instagram's current policy on this case as an example. If you report a fake profile to Instagram, or what its public help articles call "hacked" accounts, you will find that the process is not fully automated. The instructions are to report it through this page and submit a picture of a government ID to be reviewed at some point. This means you will not have an immediate solution, and it is difficult to estimate how long resolution will take, since we don't know Instagram's team size, internal policy, or the current volume of such cases they are dealing with.
This is further supported by the Ponemon Institute study, "The Cost of Phishing," which found that the average annual cost of phishing for a large organization is $14.8 million.
Loss of employee productivity represents a significant component of the cost of phishing. Employee productivity losses are among the costliest to organizations and have increased significantly, from an average of $1.8 million in 2015 to $3.2 million in 2021. Employees are spending more time dealing with the consequences of phishing scams.
“We estimate the productivity losses based on hours spent each year by employees/users viewing and possibly responding to phishing emails averages 7 hours annually, an increase from 4 hours in 2015.” Extracted from The Cost of Phishing.
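The study's arithmetic is simple to reproduce: productivity loss is headcount times hours lost times hourly cost. The sketch below uses a hypothetical headcount and hourly rate (neither figure is from the study) chosen to land near the reported $3.2 million average, with the 7 hours per year taken from the quote above:

```python
def phishing_productivity_loss(employees: int, hours_per_year: float,
                               hourly_rate: float) -> float:
    """Annual productivity loss: time all employees spend dealing with phishing."""
    return employees * hours_per_year * hourly_rate

# Hypothetical example: 10,000 employees, 7 hours/year each (per the study),
# at an assumed fully loaded labor cost of $45.90/hour.
loss = phishing_productivity_loss(10_000, 7, 45.90)
print(f"${loss:,.0f}")  # → $3,213,000
```

The headcount and rate are illustrative only; the point is how quickly a few hours per employee compounds into millions of dollars.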
By implementing AI-powered tools, companies can significantly reduce the cost of phishing attacks and improve the speed and efficiency of detecting and resolving them. The paper "Phishing Attacks Detection: A Machine Learning-Based Approach" by Fatima Salahdine, Zakaria El Mrabet, and Naima Kaabouch demonstrates the potential of machine learning algorithms to accurately detect phishing attacks: the proposed approach achieved an accuracy score of 98.5% in identifying phishing URLs, outperforming traditional anti-phishing solutions.
These techniques can be used to detect and prevent phishing attacks in a variety of ways, including identifying malicious websites and emails, filtering out spam and other unwanted messages, and creating user interfaces that are resistant to phishing attempts. As highlighted in the references, the development of AI-based anti-phishing solutions is an important area of research that holds significant promise in the fight against cybercrime.
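As a toy illustration of the idea (not the paper's actual model), a URL-based detector scores lexical features that commonly appear in phishing links. The features and weights below are hand-picked assumptions; a real machine learning approach would learn them from a labeled corpus of phishing and benign URLs:

```python
import re
from urllib.parse import urlparse

def phishing_score(url: str) -> int:
    """Score a URL on simple lexical red flags; higher means more suspicious."""
    host = urlparse(url).netloc
    score = 0
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", host):   # raw IP instead of a domain
        score += 2
    if "@" in url:                                     # '@' can hide the real destination
        score += 2
    if host.count(".") > 3:                            # unusually deep subdomain nesting
        score += 1
    if len(url) > 75:                                  # abnormally long URL
        score += 1
    if any(w in url.lower() for w in ("login", "verify", "update", "secure")):
        score += 1                                     # bait keywords
    return score

def looks_like_phishing(url: str, threshold: int = 3) -> bool:
    return phishing_score(url) >= threshold

print(looks_like_phishing("http://192.168.4.2/secure-login/verify"))  # True
print(looks_like_phishing("https://example.com/about"))               # False
```

Hard-coded rules like these are exactly what attackers learn to evade, which is why trained classifiers that generalize from data outperform them.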
In addition, AI in cybersecurity motivates current professionals to upgrade their skills and stay current with the latest advancements in the field. The continuous development of new technologies creates an environment of learning and growth.
New talent entering the field brings fresh ideas to the table; it is a win-win scenario for all stakeholders in the industry and for the future of cybersecurity. The infusion of AI in cybersecurity not only eliminates repetitive tasks and slow workflows, but also frees professionals to tackle higher-level, strategic issues, which, if implemented correctly, will drive innovation in our tools and in the way we do our jobs.
The benefits of AI in cybersecurity are numerous, as it has already made a significant impact in the field, including advanced threat detection, automation of repetitive tasks, and improved incident response. Microsoft's investment in OpenAI and exclusive license to GPT-3 have sparked competition among tech giants, including Google, which will accelerate advancements in AI technology in the coming years. This competition will benefit not only the companies but also the individuals and organizations that rely on the security of their digital systems and data.
What are its current weaknesses?
Regardless of the potential for AI to help the cybersecurity space, it is crucial to understand its limitations and weaknesses in order to approach its integration into the field with caution and prudent consideration.
The widespread use of the term "AI" in various industries, including cybersecurity, has led to an influx of startups claiming that their AI product or technology will revolutionize the industry, much like the FOMO effect we see in cryptocurrency. To fully realize the potential of AI in cybersecurity, it is crucial to approach these claims with a critical perspective, evaluating the technology and its limitations. This will ensure that its integration into the industry is done in a responsible and sustainable manner, avoiding new operational, compliance, or legal problems that could arise from exploiting its popularity for profit or cost savings.
There is growing skepticism among security professionals toward AI's capabilities due to the limitations of current AI technologies. Many view AI as a mere advanced binary decision tree that has been fed a large amount of data, rather than a truly "intelligent" system that can operate independently without human guidance.
The accuracy and effectiveness of AI models such as Large Language Models (LLMs), like ChatGPT, are heavily dependent on the quality of the input they receive. To write a good prompt for an AI model, one must have a strong understanding of the subject matter and knowledge of the strengths and limitations of the model being prompted. Without this combination of knowledge, the AI tool may not be able to provide accurate and helpful responses.
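The contrast between a vague prompt and one written by someone who knows the domain can be sketched as plain prompt construction. The helper function, field names, and alert format below are all hypothetical, purely to illustrate how domain expertise shapes the input:

```python
# Hypothetical helper: a SOC analyst encodes the context an LLM would need
# to triage a security alert, instead of asking an open-ended question.
def build_triage_prompt(alert: dict) -> str:
    return (
        "You are assisting a SOC analyst. Given the alert below, classify it as "
        "benign, suspicious, or malicious, and justify your answer in two sentences.\n"
        f"- Source IP: {alert['src_ip']}\n"
        f"- Rule triggered: {alert['rule']}\n"
        f"- Failed logins in last hour: {alert['failed_logins']}\n"
    )

vague_prompt = "Is this alert bad?"  # gives the model almost nothing to work with
expert_prompt = build_triage_prompt(
    {"src_ip": "203.0.113.7", "rule": "SSH brute force", "failed_logins": 42}
)
print(expert_prompt)
```

The expert version constrains the task, supplies the relevant evidence, and specifies the output format, which is exactly the subject-matter knowledge the paragraph above describes.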
There are a number of limitations that need to be considered. One that really affects our field is the need to feed AI technology with vast amounts of data, including sensitive information, in order to make it functional. This raises questions about how to protect users' privacy while still obtaining the data needed to power AI usage.
In the European Union, the General Data Protection Regulation (GDPR) provides a broad data-protection framework, and the European Commission's "Communication on Artificial Intelligence for Europe," published in April 2018, reflects the legal and ethical framework needed to create an environment of trust and accountability and to ensure Europe develops and uses AI in accordance with its values. The proposed Artificial Intelligence Act aims to balance addressing risks with fostering innovation, creating a safe space for the developers of these tools.
However, the US does not yet have a comprehensive federal privacy law regulating AI usage. While the California Consumer Privacy Act (CCPA) addresses data protection and privacy, it does not specifically address AI. Recently, the White House released a Blueprint for an AI Bill of Rights, but as Alexandra Reeve Givens, President and CEO of the Center for Democracy and Technology, noted:
"Today's agency actions are valuable, but they would be even more effective if they were built on a foundation set up by a comprehensive federal privacy law"
This means that the ordinary person in the United States will probably not have the necessary resources to implement this technology correctly without turning it into a vulnerability.
AI in cybersecurity is neither easy nor cheap, and it requires advanced technical skills. Take the example of the CNIL, which provides a reminder of the principles that must be followed to comply with regulations such as the French Data Protection Act and the GDPR.
It is essential for organizations to conduct deep research and due diligence to ensure the security and privacy of their data before entering into any arrangements with AI product vendors.
The Blueprint for an AI Bill of Rights is a positive step toward addressing these concerns, but it remains to be seen whether it will be translated into legally binding regulations that protect organizations from exploitation by third-party AI vendors.
The current regulatory landscape surrounding the use of AI in cybersecurity leaves organizations susceptible to exploitation by third-party AI vendors, as well as to an increased risk of attack by hackers seeking to exploit vulnerabilities in AI systems. A recent example is the incident involving ChatGPT's alter ego "DAN," in which users coerced the AI into breaking its own rules through threats of harm.
Is AI here to replace the current Cybersecurity framework?
AI has been implemented in various forms to enhance the current cybersecurity framework and it has shown tremendous potential in detecting and preventing cyberattacks, detecting fraud, and automating security processes.
For instance, AI can be used to detect fraudulent activities in credit card transactions by analyzing data from multiple sources, such as spending patterns and account information.
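A minimal sketch of the underlying idea: flag transactions that deviate sharply from a cardholder's historical spending pattern. Production fraud systems use far richer features (merchant, geography, device, timing) and learned models, but the core anomaly test can be as simple as a z-score:

```python
from statistics import mean, stdev

def is_anomalous(history: list[float], amount: float,
                 z_threshold: float = 3.0) -> bool:
    """Flag a transaction whose amount lies more than z_threshold standard
    deviations from the cardholder's historical mean."""
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return amount != mu
    return abs(amount - mu) / sigma > z_threshold

history = [12.50, 8.99, 23.40, 15.00, 9.75, 18.20]  # typical small purchases
print(is_anomalous(history, 14.00))   # False: in line with past spending
print(is_anomalous(history, 950.00))  # True: far outside the usual range
```

The z-threshold here is an arbitrary illustrative choice; real systems tune the decision boundary to balance fraud caught against legitimate transactions declined.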
AI can also automate various security processes, such as vulnerability scanning, log analysis, and incident response. This can help security teams save time and resources and respond more quickly to potential threats.
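For instance, automated log analysis can be as simple as scanning authentication logs for repeated failures from one source. The sshd-style log lines below are fabricated for illustration; a real deployment would stream events from syslog or a SIEM:

```python
import re
from collections import Counter

# Hypothetical sshd-style log excerpt (documentation IP addresses).
LOGS = """\
Feb 11 10:01:02 host sshd[311]: Failed password for root from 198.51.100.9 port 52311
Feb 11 10:01:04 host sshd[311]: Failed password for root from 198.51.100.9 port 52313
Feb 11 10:01:05 host sshd[312]: Accepted password for alice from 192.0.2.10 port 40022
Feb 11 10:01:07 host sshd[311]: Failed password for admin from 198.51.100.9 port 52317
"""

def brute_force_suspects(logs: str, threshold: int = 3) -> list[str]:
    """Return source IPs with at least `threshold` failed login attempts."""
    failures = Counter(
        m.group(1)
        for m in re.finditer(r"Failed password for \S+ from (\S+)", logs)
    )
    return [ip for ip, n in failures.items() if n >= threshold]

print(brute_force_suspects(LOGS))  # ['198.51.100.9']
```

Rules like this are what AI-driven tooling generalizes: instead of one regex per attack pattern, a model learns what "normal" log activity looks like and surfaces deviations for the security team.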
However, it is worth noting that despite the potential benefits of AI in cybersecurity, the technology is still in its early stages when it comes to effectively responding to cyber threats. In fact, according to a report by Capgemini, less than 18% of organizations currently make significant use of AI for cyber threat response.
In fact, many organizations face challenges when it comes to implementing AI solutions. According to a survey conducted by Capgemini, the number-one challenge organizations face in adopting AI in cybersecurity is a lack of understanding of how to scale use cases from proof of concept to full-scale deployment, with 69% of respondents admitting to struggling in this area. As Cole Sinkford, former CISO at GE Renewable Energy, points out:
"There are so many basic things that are the building blocks of cybersecurity that you need to have in place before you start talking about really advanced things like AI" (as cited in Capgemini's AI in Cybersecurity report, 2020).
It is important to note that while AI can automate certain tasks and detect potential threats more quickly, it is not likely to replace cybersecurity professionals altogether. AI algorithms still require human oversight to ensure that the right decisions are being made, and to monitor the AI's effectiveness and the security framework's response to new security threats.
As Rohit Chauhan, Executive Vice President of Artificial Intelligence at Mastercard, explains:
"AI will not take over your job. But people who embrace AI and run with it will be much better positioned than people who resist AI."
The biggest limitation of AI is going to be our own imagination, and organizations that effectively implement AI in their cybersecurity strategies will be better positioned to keep up with evolving threats.
In conclusion, the integration of artificial intelligence in the field of cybersecurity can bring numerous benefits, such as advanced threat detection, automation of repetitive tasks, and improved incident response. However, it is crucial to approach AI integration with caution and a critical perspective, evaluating the technology and its limitations to ensure responsible and sustainable adoption and to avoid new operational, compliance, or legal problems. AI can also motivate current professionals to upgrade their skills and stay current with the latest advancements, resulting in a win-win scenario for all stakeholders in the industry. The future of AI in cybersecurity is promising, and it will be exciting to see what new innovations and advancements emerge in the field.
Don't keep your opinions to yourself! Let us know what you think about this topic in the comments section.