Hackers Use AI to Create Terrifying Malware Targeting Sandboxes

Did you know that 42% of businesses were affected by cyberattacks in 2020? That figure is only going to rise as cybercriminals use AI to attack businesses more efficiently.

Artificial intelligence technology has led to some tremendous advances that have changed the state of cybersecurity. Cybersecurity professionals are leveraging AI technology to fight hackers. AI-driven solutions include smart firewalls for intrusion detection and prevention, new malware prevention tools, and risk-scoring algorithms that identify possible phishing attacks.

Unfortunately, cybersecurity professionals aren’t the only ones with access to AI technology. Hackers and malware creators are also using artificial intelligence in much more horrifying ways.

Hackers have developed malware with sophisticated AI algorithms that can evade sandboxes and escalate beyond them. This is the newest threat in the realm of cybersecurity technology.

AI-Powered Malware Is the Biggest Threat to Sandboxes in 2022

Sandboxes have long been used in software development workflows to run tests in a presumably safe environment. Today, they are also embedded in most cybersecurity solutions, such as endpoint detection and response (EDR) and intrusion prevention systems (IPS), and are available as standalone products.

However, sandboxes are also common entry points for cyber attackers. Over the years, adversaries have developed AI-assisted techniques to inject malware that can remain undetected in sandbox environments and even escalate privileges to higher levels of the infected network.

What’s even more alarming is that sandbox-evading techniques keep evolving with advances in machine learning, posing a growing threat to organizations on a global scale. Let’s review the most widely used sandbox-evading malware as of the beginning of 2022.

Recognizing Humans

Typically, sandboxes are used only occasionally, for example, when there is a need to test untrusted software. Attackers have therefore used machine learning to develop new strains of malware that track user interactions and activate only when they see signs of a real user rather than an automated test environment.

Of course, there are ways to emulate user actions with AI, such as scripted responses to dialog boxes and simulated mouse clicks. File-based sandboxes run automatically, without human engineers doing anything, but it's difficult to fake the meaningful actions a real user would perform. The most recent sandbox-evading malware can distinguish real user interaction from fake interaction and, what's more, trigger only after certain real-user behavior has been observed.

For instance, Trojan.APT.BaneChant stays dormant while mouse clicks arrive abnormally fast, as they do under automated emulation. It activates only after tracking a certain number of slower clicks, for example, three left-mouse clicks at a moderate pace, which are more likely to come from a real user. Some malware also treats scrolling as a human signal, activating only after a user has scrolled a document to the second page. Detecting such malware is especially tricky, which is why the more agile SOC teams set up a continuous renewal process for threat detection rules by implementing solutions like SOC Prime's Detection as Code platform, where they can find the most accurate and up-to-date content. For example, there are cross-vendor detection rules for DevilsTongue malware, which can typically execute kernel code without being captured by sandboxes.
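To make the idea concrete, here is a minimal sketch of such a click-cadence check in Python. It is not BaneChant's actual code; the 0.3-second interval and the count of three clicks are illustrative assumptions.

```python
# Minimal click-cadence sketch, assuming illustrative thresholds: stay
# dormant while clicks arrive at machine speed, and treat a short run of
# moderately paced clicks as a signal that a real user is present.

MIN_HUMAN_INTERVAL = 0.3   # seconds; anything faster looks automated (assumed)
REQUIRED_SLOW_CLICKS = 3   # e.g., three left-clicks at a moderate pace

def looks_human(click_timestamps):
    """Return True if the last few clicks were spaced like a real user's."""
    if len(click_timestamps) < REQUIRED_SLOW_CLICKS:
        return False
    recent = click_timestamps[-REQUIRED_SLOW_CLICKS:]
    intervals = [b - a for a, b in zip(recent, recent[1:])]
    return all(i >= MIN_HUMAN_INTERVAL for i in intervals)

print(looks_human([0.0, 0.5, 1.1]))    # True: moderate pace, likely human
print(looks_human([0.0, 0.01, 0.02]))  # False: machine-speed burst
```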

Knowing Where They Are

By scanning details like device IDs and MAC addresses, malware can fingerprint virtualization, running those values against a blocklist of known virtualization vendors. It then checks the number of available CPU cores, the amount of installed memory, and the hard drive size: inside VMs, those values are typically lower than on physical systems. As a result, the malware can stay inactive and hidden before the sandbox owners run a dynamic analysis, although some sandbox vendors are able to mask their system specifications so that the malware can't read them.
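As a rough illustration, a fingerprinting routine along these lines might look like the following sketch. The MAC prefixes are well-known OUIs registered to virtualization vendors; the two-core and 4 GB thresholds, and the Linux-only /proc/meminfo read, are assumptions made for the example.

```python
import os
import uuid

# Hardware-fingerprinting sketch: check the MAC prefix against known
# virtualization vendors, then look for the small CPU/RAM footprints
# that sandbox VMs are often provisioned with.

VM_MAC_PREFIXES = {
    "00:05:69", "00:0c:29", "00:50:56",  # VMware
    "08:00:27",                          # VirtualBox
    "00:15:5d",                          # Hyper-V
}

def mac_address():
    """Format this host's primary MAC address as aa:bb:cc:dd:ee:ff."""
    raw = uuid.getnode()
    return ":".join(f"{(raw >> s) & 0xff:02x}" for s in range(40, -8, -8))

def looks_like_vm():
    if mac_address()[:8] in VM_MAC_PREFIXES:
        return True
    if (os.cpu_count() or 1) < 2:           # assumed threshold
        return True
    try:
        with open("/proc/meminfo") as f:     # Linux-only check
            mem_kb = int(f.readline().split()[1])
        if mem_kb < 4 * 1024 * 1024:         # under 4 GB RAM (assumed)
            return True
    except OSError:
        pass
    return False

print("virtual environment suspected:", looks_like_vm())
```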

Speaking of sandbox analysis tools, some malware types, like CHOPSTICK, can recognize whether they are in a sandbox by scanning for an analysis environment. Such an environment is considered too risky for attackers, so most malicious programs don't activate if they recognize one. Another way to infiltrate is to send a smaller payload first and thereby test the victim's system before executing the full-fledged attack.

As you might guess, malware can potentially scan for all sorts of system features with AI tools trained to recognize the underlying digital infrastructure. For example, it can inspect digital signatures to learn about the computer's configuration, or enumerate the operating system's active processes to see whether any antivirus is running.
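A process-scanning check of that kind can be sketched in a few lines. This version walks /proc on Linux and compares process names against a short, purely illustrative list of analysis and antivirus tools.

```python
import os

# Illustrative sample only; real malware carries far longer tool lists.
ANALYSIS_TOOLS = {"wireshark", "tcpdump", "strace", "procmon", "clamd"}

def running_process_names():
    """Collect the names of running processes from /proc (Linux-only)."""
    names = set()
    for pid in filter(str.isdigit, os.listdir("/proc")):
        try:
            with open(f"/proc/{pid}/comm") as f:
                names.add(f.read().strip().lower())
        except OSError:
            continue  # process exited between listing and reading
    return names

def analysis_environment_detected():
    return not ANALYSIS_TOOLS.isdisjoint(running_process_names())

print("analysis tools running:", analysis_environment_detected())
```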

If the malware is programmed to detect system reboots, it will activate only after that event has taken place. Reboot triggers can also distinguish a real reboot from an emulated one, so VMs typically can't trick such malware into exposing itself with a fake reboot.
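One way such a reboot trigger could work is sketched below: derive the kernel boot time from /proc/uptime and compare it with a value recorded on first run. A genuine reboot yields a new, later boot time, while a sandbox that merely signals a reboot without restarting the kernel leaves it unchanged. The state-file path, the 60-second slack, and the overall logic are assumptions for illustration.

```python
import time

STATE_FILE = "/tmp/.boot_marker"  # hypothetical location for the recorded value

def current_boot_time():
    """Compute kernel boot time (epoch seconds) from /proc/uptime (Linux)."""
    with open("/proc/uptime") as f:
        uptime = float(f.read().split()[0])
    return time.time() - uptime

def rebooted_since_first_run():
    try:
        with open(STATE_FILE) as f:
            recorded = float(f.read())
    except OSError:
        with open(STATE_FILE, "w") as f:
            f.write(str(current_boot_time()))
        return False  # first run: remember the boot time, don't trigger yet
    # 60 seconds of slack absorbs clock drift between the two measurements.
    return current_boot_time() > recorded + 60

print("reboot observed:", rebooted_since_first_run())
```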

Planning Perfect Timing

AI has also made malware more dangerous by perfecting the timing of attacks. Timing-based techniques are among the most common in sandbox evasion. Sandboxes usually don't run around the clock, so there is only a limited window during which they scan for threats. Attackers abuse this by seeding malware that lies dormant while the sandbox is active and executes the attack once it's turned off. For example, malware like FatDuke can run a delaying algorithm that burns free CPU cycles and waits until the sandbox goes offline. Only then does it activate the actual payload.
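FatDuke's delaying algorithm is described only at a high level here; a closely related, well-documented timing trick is to verify that sleep calls are actually honored, since many sandboxes fast-forward them to speed up analysis. The sketch below stalls until that check passes; the 60-second delay and the 0.9 tolerance are illustrative assumptions.

```python
import time

def sleep_was_honored(seconds=60):
    """Sleep, then confirm with a monotonic clock that the time really passed."""
    start = time.monotonic()
    time.sleep(seconds)
    elapsed = time.monotonic() - start
    return elapsed >= seconds * 0.9  # tolerance is an assumption

# Stall until sleeps stop being fast-forwarded, i.e. until the code is
# probably no longer under an analyzer that accelerates time.
while not sleep_was_honored():
    pass
# ...the actual payload would only run past this point...
```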

Less sophisticated malware simply has preset timing requirements before the code detonates. For example, GoldenSpy activates after two hours inside the system. Similarly, the "logic bomb" technique executes malicious code at a specific date and time. Logic bombs typically activate only on end users' devices, and to confirm that they are on one, they include built-in checks for system reboots and human interaction.
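A bare-bones logic bomb can be sketched as follows. The two-hour delay mirrors the GoldenSpy figure reported above; the detonation date is a hypothetical placeholder, and a real implant would persist its install time rather than keep it in memory.

```python
import datetime

INSTALL_TIME = datetime.datetime.now()  # in reality this would be persisted
DETONATION_DATE = datetime.datetime(2022, 4, 1, 9, 0)  # hypothetical date

def should_detonate():
    now = datetime.datetime.now()
    waited_long_enough = now - INSTALL_TIME >= datetime.timedelta(hours=2)
    date_reached = now >= DETONATION_DATE
    return waited_long_enough and date_reached

print("detonate:", should_detonate())
```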

Hiding the Trace

Once malware infects the target system, it needs to hide the evidence of its presence. Researchers have observed a variety of techniques that help adversaries do exactly that. AI has made it easier for malware to modify its own code to stay under the radar of malware protection software and manual threat screening.

One of cybercriminals' primary goals is to protect communication with their command-and-control (C&C) servers so they can install further payloads through small backdoors. To keep that channel alive, they frequently rotate attack artifacts like domains and IP addresses using domain generation algorithms (DGAs); Dridex, Pykspa, and the Angler exploit kit are examples. Another is the Smoke Loader malware, which changed roughly 100 IP addresses in less than two weeks. With a DGA, there is no need for hard-coded domain names, which are easily detected and blocked. Any access to a victim's system counts, even if it's a sandbox.
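The core idea of a DGA fits in a few lines: both the malware and the attacker derive the same disposable domains from the current date and a shared seed, so nothing has to be hard-coded. This sketch is generic; the seed, domain count, and TLD are illustrative, and real families like Dridex or Pykspa use their own, more elaborate schemes.

```python
import datetime
import hashlib

SEED = "example-campaign-seed"  # hypothetical shared secret

def domains_for(date, count=5):
    """Derive today's rendezvous domains from the date and a shared seed."""
    domains = []
    for i in range(count):
        digest = hashlib.sha256(f"{SEED}:{date.isoformat()}:{i}".encode())
        domains.append(digest.hexdigest()[:12] + ".com")
    return domains

print(domains_for(datetime.date.today()))
```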

Most DGAs come with increased maintenance costs, so not all attackers can afford them. That's why some have developed methods that don't require a DGA. For example, the DNSChanger malware alters a user's DNS settings so the system connects to a rogue DNS server instead of the one configured by the Internet service provider.

Another way for malware to stay undetected in a sandbox is to encrypt its data in formats that are unreadable in that particular environment. Some Trojans, like Dridex, use encrypted API calls. The Andromeda botnet and the Ebowla framework encrypt data with multiple, environment-dependent keys so that the payload can't be decrypted and analyzed outside the target system. The Gauss cyber-espionage toolkit uses a specific path-and-folder combination to generate an embedded hash and bypass detection.
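The environmental-keying idea behind tools like Ebowla, and behind Gauss's path-and-folder hash, can be illustrated as follows: the decryption key is derived from properties of the intended target, so the payload decrypts to garbage anywhere else, including inside a sandbox. The hostname-plus-path key material and the XOR cipher here are deliberate simplifications.

```python
import hashlib
import socket

EXPECTED_PATH = r"C:\Program Files\TargetApp"  # hypothetical expected folder

def environment_key():
    """Derive a key from target-specific properties (simplified)."""
    material = socket.gethostname() + "|" + EXPECTED_PATH
    return hashlib.sha256(material.encode()).digest()

def xor_decrypt(blob, key):
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

# Unless hostname and path match what the attacker keyed the payload to at
# build time, the result is meaningless bytes and analysis stalls.
encrypted_payload = b"\x00" * 16  # placeholder ciphertext
plaintext = xor_decrypt(encrypted_payload, environment_key())
```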

Hackers Will Keep Using AI to Create More Devastating Malware to Attack Sandboxes

AI technology has become a terrifying tool in the hands of savvy hackers, who are using it to defeat the sandboxes built into all kinds of applications.

For a long time, sandboxes seemed like a good idea: what could be better than an isolated environment where you can safely test untrusted software? However, it turns out that they are not as isolated as developers would like them to be, and hackers using AI can craft increasingly effective attacks against them. Pauses in sandbox operation, telltale markers of virtual environments, and other typical features open a window of opportunity for attackers to build their malware algorithms around the sandboxes' blind spots.

SOC engineers need to make sure that not only their key assets are regularly scanned for malware but also the sandboxes used in their organization, especially during the times when those sandboxes are inactive. To maintain a solid security posture and minimize the chances of intrusion, security teams should continuously enrich their detection base with new rules and update the existing stack to identify constantly mutating malware. Organizations tend to look for solutions that can save up to hundreds of hours per month on researching and developing content from scratch, as well as for ways to optimize content creation. This can be achieved by choosing generic languages that make it fast to develop, modify, and translate rules, like Sigma. Moreover, leveraging free online translation tools like Uncoder.IO can help teams save significant time by instantly converting the latest Sigma detections into a variety of SIEM, EDR, and XDR formats.

