Security Awarness background

How Cybersecurity Is Changing and What to Expect in 2025

Written by Hornetsecurity / 23.12.2024 /
Home » Blog » How Cybersecurity Is Changing and What to Expect in 2025

Cybersecurity is always changing, with new threats and “innovations” arising every year. In the coming years, it’s important to understand these shifts to stay ahead of potential risks. Looking back at past predictions and current trends helps us prepare for challenges like the rise of ransomware, growing use of AI, and evolving regulations. Stay with Hornetsecurity to discover more about the predictions for 2025. 

Did We Get Last Year’s Predictions Right? 

Looking back on our various predictions in the previous editions of the Cybersecurity Report is an interesting exercise, foretelling the future is always challenging, but we definitely got some things right, and a few things didn’t pan out as we expected. 

There are more ransomware groups in 2024 than in 2023, and more posts on leak sites, indicating that ransomware is still going strong with more businesses being compromised than last year. The approximate amount of ransoms paid in 2023 was $1.1 billion, whereas the statistics for the first half of 2024 is $459 million USD, although the prediction is that 2024 will be a more “fruitful” year than 2023. This is in part due to larger payments for more severe breaches, with the largest known ransom ever being $75 million (by an unknown Fortune 50 company). 

We expected MFA fatigue attacks and MFA bypass attacks to increase, and this has certainly been the case. The number and proliferation of both open source and “commercial” kits for both crafting the email lures and setting up the proxy services that pretend to be a real login site has exploded, in response to more widespread adoption of push notification MFA options. To combat this in your organization look for phishing resistant MFA, such as Windows Hello for Business, FIDO2 hardware keys or Passkeys, which uses a smartphone as a FIDO2 key, obviating the need for additional hardware purchases. These technologies are “locked” to the legitimate sign in page, thus even if the user is tricked into visiting a fake site, the sign in tech won’t work, hence they’re called phishing resistant. Our recommendations for password-less security in the 2024 Cybersecurity Report still stand today, with the one addition of Passkeys, which are also password-less, as well as being phishing resistant. 

We saw some risks with the old Microsoft Teams client being built on the electron platform, fortunately it’s now been replaced with the new Teams client which seems not to have as many vulnerabilities. Teams is still an attack vector for phishing lures, although since Microsoft changed the default options for accepting communications from external parties and displaying warnings when a new contact is trying to reach you, this hasn’t exploded in popularity. 

Spyware and malware on smartphones are ongoing issues with both the EU and US taking steps to contain the proliferation of vendors and their use in democratic societies as we predicted. 

As we mentioned attacks against Application Programming Interfaces (APIs) increased in 2024, compared to 2023 (various sources estimate between 20 – 29%). This is often a “hidden” attack vector, and thus popular with criminals, as the monitoring and alerting on APIs aren’t as robust as for other systems. If your organization publishes APIs for your web applications publicly, make sure you have a robust security model for access, and monitor for malicious use, including DDOS attacks. 

The task of managing Microsoft 365 tenants cybersecurity posture continues to be a challenge, as we predicted, although we do want to point to a new tool, currently in public preview which is available to all M365 tenants – Exposure Management. This gives you insight into your tenant’s security configuration and posture, plus initiatives to focus on to improve in particular areas such as defending against BEC or ransomware. 

Time-to-Exploit (the time between a vulnerability becoming publicly known and a working exploit for it being available) went from 63 days in 2018/2010, 32 days in 2021/2022, down to five days in 2023. While we haven’t seen the statistics for 2024 yet, we have seen several successful attacks within days of a vulnerability disclosure. This is putting further strain on defenders as patching is a never-ending job, and you can’t patch everything, everywhere all at once, and so will need prioritization, trying to make sure internet exposed devices are kept up to date. 

We looked at IoT devices as a vector for attacks in enterprise networks and in the first five months of 2024 they surged 107% compared to the same period in 2023. 

While we have certainly seen some convincing deep fakes in 2024, even with the support of AI tools for generating images, audio and video, we haven’t yet seen major breaches caused by them. We still expect that as these tools become easier to use and more capable, we’ll see more attacks, and general disinformation campaigns relying on them. 

The Security Lab’s Predictions 

Every year, as part of this report, the Security Lab team at Hornetsecurity looks at the state of the industry, our data, attack trends, and more to make a series of predictions for the coming year. This serves to inform businesses what potential threats they may face in the coming year, along with how the industry may change. The following are the Security Lab predictions for 2025. 

It should come as no surprise that many of our predictions in this report involve AI. While some of these predictions can easily be grouped together, others are more specific. We’ve broken out these predictions as needed throughout this section. 

LLMs in attacker’s hands 

Last year we looked at the rise of ChatGPT and other Large Language Models (LLMs) and their impact on cybersecurity, both for attackers and defenders. The original fears of LLMs writing flawless malware code haven’t materialized and arguably, the inclusion of AI chat interfaces and other automation into security solutions has been more successful in helping defenders. 

We have seen some actual data on LLM usage by attackers from Microsoft, where Forest Blizzard, a Russian state sponsored threat actor used them for researching satellite and radar technologies, probably to support the Ukraine war, as well as assistance with scripting tasks, including file manipulation. Emerald Sleet from North Korea on the other hand extensively uses phishing to lure their targets and used LLMs to understand known vulnerabilities as well as improve the language and tone in phishing messages. Finally Crimson Sandstorm (Iran, connected to the Islamic Revolutionary Guard Corps) used LLMs for social engineering assistance, troubleshooting errors and .NET development assistance. Notably nearly all these use cases could have been fulfilled using ordinary search engine queries which would not have enabled Microsoft to gather these insights, so arguably as an attacker you’ve failed in your operational security (OpSec) if you’re using a public LLM to do your research. 

Attacks on LLMs themselves continue to proliferate and MITRE has created ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) to track the different types, in a similar way to the Enterprise ATT&CK matrix

With all this in mind, we’re likely to see AI / LLMs in the Cybersecurity discussions in the coming year for a number of reasons: 

  • AI will be increasingly used for reconnaissance and information gathering;
  • AI will be used to help attackers understand the best time to launch attacks based on data provided;
  • AI will continue to be used to improve nearly every attack vector for threat actors, including email, voice, social engineering…etc.;
  • AI will increasingly be used to quickly identify easily exploited objects in weak infrastructure;
  • AI-enabled tools will continue to evolve to assist defenders.

AI-enabled deepfakes used for spear-phishing and to influence the public

The use of deepfake technology in spear-phishing attacks is a growing concern and we’re likely to see this combination in 2025. Deepfakes can create highly realistic videos and audio recordings that mimic the appearance and voice of real individuals. This technology can be used to create convincing phishing messages that trick recipients into revealing sensitive information or performing actions that compromise security. 

The rise of advanced deepfake technology will also pose a potential threat to public opinion and trust. Deepfakes can create highly realistic videos and audio recordings that are difficult to distinguish from genuine content. This technology has already been used to spread misinformation and will continue to see increased use by threat actors. This will ultimately lead to an erosion of trust in digital media. 

We’ll start to see noteworthy attacks on LLM-products 

Large language models (LLMs) are becoming increasingly popular, but they’re also vulnerable to various types of attacks themselves. These include injection attacks, data exfiltration, and jailbreaks, where malicious actors manipulate the input data to deceive the model or extract sensitive information. These vulnerabilities can compromise the integrity, security, and ultimately the trust of LLM-based systems. With increased reliance on these systems, threat-actors (especially nation states) would love nothing more than to use a popular LLM to their advantage. Whether that’s disinformation, the dissemination of malicious links or something else remains to be seen. 

This has been discussed at length since ChatGPT first made waves in the market. The question of legalities, copyright, and ownership have underpinned AI generated content at nearly every stage of evolution. That said, we’re likely reaching a point where we’re going to see more frequent and impactful litigation as a result of the use of LLMs. 

We’re also likely to see some form of government regulation on the use of AI by major nation states as a result. This is likely to be centered around data privacy, especially in places like the EU, who is already leading the way with their AI Act . These new regulations will not only require attention on the side of LLM creators themselves, but also by organizations that are looking to use generative AI in their own organizations. 

New regulatory frameworks and challenges 

Speaking of regulation, the introduction of new regulatory frameworks such as NIS2, DORA, CRA, and KRITIS (Germany only) will present significant challenges for organizations. These new frameworks aim to enhance cybersecurity and data protection and are sorely needed, but complying with them will be difficult and resource-intensive for many organizations. In addition to this, the place of compliance officer within many organizations will continue to evolve and become increasingly important. 

On a side note, the number of organizations requiring a certain type of compliance adherence in order to conduct business with them will increase as well. Supply chain attacks are becoming more prevalent and damaging, and rather than explicitly trust partner organizations like the old days, many organizations are requiring that their customer and / or suppliers conform with some of the same regulatory frameworks that they themselves must as well. 

Corruption of the open source community 

For many years free and open-source software (FOSS) was seen as something of an oasis in a perceived security poor software ecosystem. With the XZ Utils incident we discussed earlier in this report, along with several other high-profile security vulnerabilities, this sentiment is no longer the case. The XZ Utils situation saw a very determined threat actor try to take a very popular open-source package and attempt to use it to create a widespread supply chain attack. With that (near) level of success, attackers are likely to attempt something similar with other industry critical open-source packages. There has already been a noted increase in the amount of malicious open source packages, and what has been recently been happening with the PyPi software repository is likely only a taste of the things to come. 

Continued predictions for quantum computing 

In past reports we’ve spoken about a threat that’s not imminent but on the horizon; Quantum Computing. While we’re still some years away from a cryptographically relevant quantum computer (CRQC), some experts estimate 2037, minus 5 to plus 20 years, and development is progressing rapidly. The day these computers arrive is known as Q-Day. And if your business is storing sensitive data in encrypted form today that you expect to still need access to in 10 years’ time, you need to look at this now. That’s because the NSA, and presumably their counterparts in other countries are capturing vast amounts of data that they can’t decrypt today but may be able to in the future. 

NIST in the US agrees and has standardized three post-quantum encryption algorithms

  • ML-KEM (Module-Lattice-Based Key-Encapsulation Mechanism);
  • ML-DSA (Module-Lattice-Based Digital Signature Algorithm);
  • SLH-DSA (Stateless Hash-Based Digital Signature Algorithm).

There’s a fourth standard coming as well. The old kyber crystal inspired names were definitely nerdier. The new names indicate which area of cryptography they should be used in. 

Microsoft is also taking this upcoming threat seriously through the Quantum Safe Program and recently announced that their open-source core cryptographic library SymCrypt which is used in Windows 10 & 11, Windows Server, Azure and Microsoft 365 now support ML-KEM with ML-DSA and SLH-DSA support coming soon. 

The challenge with quantum computers is scaling them up, both the number of physical qubits (a CRQC will need many thousands) and the error correction required to produce a reliable logical qubit to program against. We still recommend that if your organization holds sensitive data, that you expect / have a regulatory requirement to keep for 10+ years, figure out how to re-encrypt it with a quantum safe algorithm, particularly now that the standards have been 
ratified. 

Increased adoption of “memory safe” languages 

Software has long been plagued by security issues that are the result of memory management issues. This includes things like buffer overflows and use-after-free errors. As a result, the industry has started to move towards “memory safe” languages like Rust and / or Swift. These languages have built-in protections against many common memory-related vulnerabilities, and it eases the burden on software developers when it comes to writing secure code. With the increasing outlook of pending regulation on the software industry, developers are likely to increase adoption of these languages to not only make their software more secure but also prepare for the aforementioned regulations ahead of time. 


Transform Your Employees into Cybersecurity Champions 

95% of cybersecurity incidents are caused by human error – ensure your workforce is equipped to prevent attacks. With Hornetsecurity’s Security Awareness Service, you’ll strengthen your defense with AI-powered, tailored training and realistic spear phishing simulations. 

  • Fully automated, personalized e-training; 
  • Industry-leading Employee Security Index (ESI®) for continuous improvement; 
  • Realistic spear phishing simulations to prepare your team for real-world threats; 
  • Effortless monitoring and reporting through our Awareness Dashboard. 

Request your demo today and start building a culture of cybersecurity! 


Conclusion 

As attackers leverage advanced technologies like AI and (eventually) quantum computing, defenders must stay ahead with innovative solutions and strong resilience. Organizations are advised to focus on proactive strategies, including regulatory compliance, enhanced API monitoring, and phishing-resistant MFA. Whether you are a business or an individual, learning from past predictions and adapting to emerging trends is recommended. Hornetsecurity is committed to creating a safer digital world.