Incorporating artificial intelligence technologies into defense systems is critical to staying ahead of threat actors as the AI landscape rapidly changes and new cyber-threat trends emerge, according to Microsoft's new Digital Defense Report.
As we enter “one of the most transformative technological eras in modern history,” industry and government cyber defenders can expect increasingly sophisticated attacks aimed at high-value individuals and targets, said Microsoft.
Personal and home-use products are also likely to be targeted as the Internet of Things market – a network of connected devices with embedded sensors, software, and connectivity – grows at a rapid 42 percent each year, the report also noted.
“As defenders, particularly governments, are considering the threats associated with the abuse of AI, it is important to keep in mind that many of the future victims will not have the benefit of automated systems and programs to defend them,” reads the report. “Many ecosystem threats will have an immediate impact on the most vulnerable targets—humans.”
Around 600 million identity attacks occur each day – 99 percent of them password-based – and as multi-factor authentication becomes more widely adopted, threat actors have been shifting toward attacking infrastructure, bypassing authentication, and exploiting applications, said Microsoft.
Hesitance to incorporate AI into defensive strategies will allow threat actors to use AI tools to identify and exploit security gaps more rapidly and efficiently – and without identity-revealing mistakes – the report warned, urging defenders to adopt AI systems to protect against evolving tactics, techniques, and procedures.
Other emerging social engineering attacks include AI-enabled spear phishing and whaling coupled with malware; highly tailored deepfakes; and resume swarming, in which threat actors scrape keywords and qualifications from job postings to fabricate seemingly perfect candidates and place them inside organizations to steal sensitive information.
Many of these methods employ video, audio, text, and image capabilities, with China and Russia frequently using AI-generated images, audio, and video in their attacks.
As nations use AI to commit cyberattacks, countries vary in how they incorporate defensive AI, Microsoft said, noting that these variations are “not surprising; they reflect the core values of the governments’ leadership, the countries’ legal and constitutional frameworks, and the state of the technology industry and its potential for future growth.”
Implementing international standards for AI security is now essential, the report said, as security vulnerabilities put humans at risk.
“International standards can mitigate fragmentation and ensure more consistency, good practice, controls, and even conformity assessment, especially where supply chains, threat actors, and applications are of a global nature,” said Microsoft.
Other threat trends that government and industry should be aware of include impersonation, content production, nefarious knowledge acquisition, cyber threat amplification, and direct and indirect social attacks, Microsoft said.