
Admiration Tech News


Tag: vulnerability

Sticky Werewolf targets the aviation industry in Russia and Belarus

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

Morphisec researchers observed a threat actor, tracked as Sticky Werewolf, targeting entities in Russia and Belarus.

Sticky Werewolf is a threat actor that was first spotted in April 2023, initially targeting public organizations in Russia and Belarus. The group has expanded its operations to various sectors, including a pharmaceutical company and a Russian research institute specializing in microbiology and vaccine development.

In their latest campaign, Sticky Werewolf targeted the aviation industry with emails supposedly from the First Deputy General Director of AO OKB Kristall, a Moscow-based company involved in aircraft and spacecraft production and maintenance. Previously, the group used phishing emails with links to malicious files. In the latest campaign, the threat actor used archive files containing LNK files that pointed to a payload stored on WebDAV servers.

After executing the binary hosted on a WebDAV server, an obfuscated Windows batch script is launched. The script runs an AutoIt script that ultimately injects the final payload.

“In previous campaigns, the infection chain began with phishing emails containing a link to download a malicious file from platforms like gofile.io. However, in their latest campaign, the infection method has changed.” reads the analysis published by Morphisec. “The initial email includes an archive attachment; when the recipient extracts the archive, they find LNK and decoy files. These LNK files point to an executable hosted on a WebDAV server. Once executed, this initiates a Batch script, which then launches an AutoIt script that ultimately injects the final payload.”

The archive includes a decoy PDF file and two LNK files masquerading as DOCX documents, named Повестка совещания.docx.lnk (Meeting agenda) and Список рассылки.docx.lnk (Mailing list) respectively.
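Defenders can screen incoming archives for this double-extension trick before a user ever extracts them. The sketch below is illustrative only and not from the Morphisec report; the extension list and function name are assumptions:

```python
# Extensions the decoy half of a double extension commonly imitates
# (assumed list for illustration).
DOCUMENT_EXTENSIONS = (".docx", ".doc", ".pdf", ".xlsx")

def suspicious_members(names):
    """Flag archive member names such as 'agenda.docx.lnk' that
    masquerade as documents but are really Windows shortcut (LNK) files."""
    flagged = []
    for name in names:
        lower = name.lower()
        # A shortcut whose stem still ends in a document extension is
        # the pattern used in this campaign.
        if lower.endswith(".lnk") and lower[:-4].endswith(DOCUMENT_EXTENSIONS):
            flagged.append(name)
    return flagged
```

Scanning attachment archives with a rule like this would have flagged both LNK files in the campaign described above.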

Sticky Werewolf

The threat actor used phishing messages allegedly sent by the First Deputy General Director and Executive Director of AO OKB Kristall. The recipients are individuals from the aerospace and defense sector who are invited to a video conference on future cooperation. The messages use a password-protected archive containing a malicious payload.

The payloads employed by the threat actors include commodity RATs or stealers. Recently, Sticky Werewolf was spotted using Rhadamanthys Stealer and Ozone RAT in their campaigns. In previous attacks the group also deployed MetaStealer, DarkTrack, and NetWire.

“These malwares enable extensive espionage and data exfiltration. While there is no definitive evidence of Sticky Werewolf’s national origin, the geopolitical context suggests possible links to a pro-Ukrainian cyberespionage group or hacktivists, though this attribution remains uncertain.” concludes the report that also includes Indicators of Compromise (IoCs).

Posted in Cyber Attacks. Tagged: Cyber Attacks, Data Security, malware, Ransomware, Spyware, vulnerability.

China-linked Velvet Ant uses F5 BIG-IP malware in cyber espionage campaign

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

Chinese cyberespionage group Velvet Ant was spotted using custom malware to target F5 BIG-IP appliances to breach target networks.

In late 2023, Sygnia researchers responded to an incident suffered by a large organization that they attributed to a China-linked threat actor tracked as ‘Velvet Ant.’

The cyberspies deployed custom malware on F5 BIG-IP appliances to gain persistent access to the internal network of the target organization and steal sensitive data.

The investigation revealed that the threat actor had been present in the organization’s on-premises network for about three years, aiming to maintain access for espionage purposes. They achieved persistence by establishing multiple footholds within the company’s environment. One method used was exploiting a legacy F5 BIG-IP appliance exposed to the internet, which served as an internal Command and Control (C&C). When one foothold was discovered and remediated, the threat actor quickly adapted and pivoted to another. This demonstrated their agility and deep understanding of the target’s network infrastructure.


“The compromised organization had two F5 BIG-IP appliances which provided services such as firewall, WAF, load balancing and local traffic management. These appliances were directly exposed to the internet, and both of which were compromised. Both F5 appliances were running an outdated, vulnerable, operating system. The threat actor may have leveraged one of the vulnerabilities to gain remote access to the appliances.” reads the analysis published by Sygnia. “As a result, a backdoor hidden within the F5 appliance can evade detection from traditional log monitoring solutions.”

Once the attackers had compromised the F5 BIG-IP appliances, they gained access to internal file servers and deployed the PlugX RAT. The PlugX RAT was used by multiple Chinese APT groups in cyberespionage campaigns over the years.

Forensic analysis of the F5 appliances revealed that the Velvet Ant group also used the following malware in their attacks:

  1. VELVETSTING – a tool that connects to the threat actor’s C&C once an hour, searching for commands to execute. Received commands were executed via ‘csh’ (Unix C shell).
  2. VELVETTAP – a tool with the ability to capture network packets.
  3. SAMRID – identified as ‘EarthWorm’, an open-source SOCKS proxy tunneller available on GitHub. The tool was utilized in the past by multiple China-linked APT groups, including ‘Volt Typhoon’, ‘APT27’ and ‘Gelsemium’.
  4. ESRDE – a tool with similar capabilities to that of ‘VELVETSTING’, but with minor differences, such as using bash instead of ‘csh’.
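The once-an-hour polling behavior of VELVETSTING suggests a simple hunting heuristic over connection logs. The sketch below is a hedged illustration, not part of Sygnia's tooling; the interval tolerance and function name are assumptions:

```python
from datetime import datetime, timedelta

def looks_like_hourly_beacon(timestamps, period_s=3600, tolerance_s=120, min_events=4):
    """Heuristic for VELVETSTING-style polling: True when connections to a
    single destination recur at roughly fixed hourly intervals."""
    if len(timestamps) < min_events:
        return False
    ts = sorted(timestamps)
    # Compute the gap between each consecutive pair of connections.
    gaps = [(b - a).total_seconds() for a, b in zip(ts, ts[1:])]
    # Every gap must sit close to the expected polling period.
    return all(abs(g - period_s) <= tolerance_s for g in gaps)
```

Applied per host/destination pair, this kind of check surfaces metronomic C&C polling that blends into normal traffic volumes.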
Velvet Ant

Researchers provided the following recommendations for organizations to mitigate attacks of groups like Velvet Ant:

  • Limit outbound internet traffic.
  • Limit lateral movement throughout the network.
  • Enhance security hardening of legacy servers.
  • Mitigate credential harvesting.
  • Protect public-facing devices.

The report also includes indicators of compromise for the attack analyzed by the researchers.


China-linked spies target Asian Telcos since at least 2021

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

A China-linked cyber espionage group has compromised telecom operators in an Asian country since at least 2021.

The Symantec Threat Hunter Team reported that an alleged China-linked APT group has infiltrated several telecom operators in a single unnamed Asian country since at least 2021.

The threat actors used tools associated with Chinese espionage groups and planted multiple backdoors on the networks of targeted companies to steal credentials.

“The attacks have been underway since at least 2021, with evidence to suggest that some of this activity may even date as far back as 2020. Virtually all of the organizations targeted were telecoms operators, with the addition of a services company that serves the telecoms sector and a university in another Asian country.” reads the report published by Broadcom Symantec Threat Hunter Team.

Evidence collected by the experts suggests that this activity cluster may have been active since 2020.

In a recent espionage campaign, the attackers employed custom malware associated with several Chinese APT groups. Some of the malware used by the threat actors are:

  • Coolclient: A backdoor linked to the Fireant group (also known as Mustang Panda or Earth Preta). It logs keystrokes, manages files, and communicates with a C2 server. This campaign used a version of VLC Media Player (disguised as googleupdate.exe) to sideload a Coolclient loader, which then reads and executes encrypted payloads.
  • Quickheal: A backdoor associated with the Needleminer group (also known as RedFoxtrot or Nomad Panda). The variant used by the attackers in recent attacks was a 32-bit DLL that communicated with a hardcoded C&C server using a custom protocol mimicking SSL traffic.
  • Rainyday: A backdoor, linked to the Firefly group (also known as Naikon), was used in a recent espionage campaign.

In addition to utilizing custom backdoors, the cyber espionage group also employed a range of tactics, techniques, and procedures (TTPs) to compromise its targets. These included custom keylogging malware, port scanning tools, credential theft through the dumping of registry hives, the publicly available Responder tool, which acts as a Link-Local Multicast Name Resolution (LLMNR), NetBIOS Name Service (NBT-NS), and multicast DNS (mDNS) poisoner, and the enabling of RDP.
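Because a poisoner such as Responder answers whichever name is queried, one hunting heuristic is to look for a single host responding to an unusually broad set of name-resolution queries. This is a sketch under that assumption, not a tool referenced in the report:

```python
from collections import defaultdict

def likely_llmnr_poisoners(responses, distinct_names=5):
    """Given (queried_name, responder_ip) pairs from LLMNR/NBT-NS traffic,
    flag responder IPs that answer many distinct names -- legitimate hosts
    answer only for their own name, a poisoner answers for everything."""
    names_by_ip = defaultdict(set)
    for queried_name, responder_ip in responses:
        names_by_ip[responder_ip].add(queried_name.lower())
    return {ip for ip, names in names_by_ip.items() if len(names) >= distinct_names}
```

The threshold is arbitrary; in practice it would be tuned against a baseline of the environment's normal multicast name resolution.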

“Tools used in this campaign have strong associations with multiple Chinese groups and at least three of the custom backdoors deployed are believed to be used exclusively by Chinese espionage actors,” concludes the report. “The nature of the link between the actors involved in the current campaign remains unclear. Possibilities include, but are not limited to:

  • Attacks by multiple actors, acting independently of one another.
  • A single actor using tools and/or personnel acquired from or shared by other groups.
  • Multiple actors collaborating in a single campaign.

The ultimate motive of the intrusion campaign remains unclear.”


Russia-linked APT Nobelium targets French diplomatic entities

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

French information security agency ANSSI reported that Russia-linked threat actor Nobelium is behind a series of cyber attacks that targeted French diplomatic entities.

The French information security agency ANSSI reported that Russia-linked APT Nobelium targeted French diplomatic entities. Although the French agency linked the attacks to the cyberespionage group Nobelium (aka APT29, SVR group, Cozy Bear, Midnight Blizzard, BlueBravo, and The Dukes), ANSSI differentiates these groups into separate threat clusters, including a group named Dark Halo, which was responsible for the 2020 SolarWinds attack.

Nobelium has been active since October 2020 and is used against high-value targets, most likely for espionage purposes. Western diplomatic entities, such as embassies and Ministries of Foreign Affairs, account for the majority of known victims of Nobelium. However, several IT companies have also reported being targeted by Nobelium’s operators in late 2023 and 2024.

The report published by ANSSI is based upon elements collected by the French agency, evidence shared by its national partners (known as C4 members), and publicly available reports. The document warns of phishing campaigns conducted by Nobelium against French public and diplomatic entities aimed at gathering strategic intelligence.

“Nobelium is characterized by the use of specific codes, tactics, technics and procedures. Most of Nobelium campaigns against diplomatic entities use compromised legitimate email accounts belonging to diplomatic staff, and conduct phishing campaigns against diplomatic institutions, embassies and consulates.” reads the report published by ANSSI. “These activities are also publicly described as a campaign called “Diplomatic Orbiter”.”

Attackers forge lure documents to target diplomatic staff, attempting to deliver custom loaders that drop public post-exploitation tools such as Cobalt Strike or Brute Ratel C4. These tools allow attackers to access the victim’s network, perform lateral movements, drop additional payloads, maintain persistence, and exfiltrate valuable intelligence.

The agency confirmed that several IT companies have also reported being targeted by Nobelium in late 2023 and 2024.

ANSSI warns of Nobelium attacks

“French public organisations have been targeted several times by phishing emails sent from foreign institutions previously compromised by Nobelium’s operators.” continues the report. “From February to May 2021, Nobelium operators conducted several phishing campaigns exploiting compromised email accounts belonging to the French Ministry of Culture and the National Agency for Territorial Cohesion (ANCT), sending an attachment called “Strategic Review”.”

In March 2022, a European embassy in South Africa received a phishing email that impersonated a French embassy, announcing its closure after a terrorist attack. The attackers sent the email from a compromised account of a French diplomat. In April and May 2022, Nobelium phishing messages reached dozens of email addresses from the French Ministry of Foreign Affairs. Threat actors used themes like the closure of a Ukrainian embassy or a meeting with a Portuguese ambassador.

In May 2023, Nobelium targeted several European embassies in Kyiv, including the French embassy, with a phishing campaign involving an email about a “Diplomatic car for sale.” The ANSSI also reported a failed attempt to compromise the French Embassy in Romania.

“ANSSI has observed a high level of activities linked to Nobelium against the recent backdrop of geopolitical tensions, especially in Europe, in relation to Russia’s aggression against Ukraine. Nobelium’s activities against government and diplomatic entities represent a national security concern and endanger French and European diplomatic interests. The targeting of IT and cybersecurity entities for espionage purposes by Nobelium operators potentially strengthens their offensive capabilities and the threat they represent.” concludes the report that also provides indicators of compromise. “Nobelium’s techniques, tactics, and procedures remain mainly constant over time.”


Russia-linked group APT29 likely breached TeamViewer’s corporate network

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

A Russia-linked APT group, reportedly APT29, is suspected to be behind a hack of TeamViewer’s corporate network.

TeamViewer discovered that a threat actor has breached its corporate network and some reports attribute the intrusion to the Russia-linked APT group APT29 (aka SVR group, BlueBravo, Cozy Bear, Nobelium, Midnight Blizzard, and The Dukes).

The unauthorized access to the IT infrastructure of the company occurred on June 26; threat actors used the credentials of a standard employee account within its IT environment.

Upon detecting the suspicious activity by this account, the company immediately initiated incident response measures.

“A comprehensive taskforce consisting of TeamViewer’s security team together with globally leading cyber security experts has worked 24/7 on investigating the incident with all means available. We are in constant exchange with additional threat intelligence providers and relevant authorities to inform the investigation.” reads the statement published by the company.

“Current findings of the investigation point to an attack on Wednesday, June 26, tied to credentials of a standard employee account within our Corporate IT environment. Based on continuous security monitoring, our teams identified suspicious behavior of this account and immediately put incident response measures into action. Together with our external incident response support, we currently attribute this activity to the threat actor known as APT29 / Midnight Blizzard. Based on current findings of the investigation, the attack was contained within the Corporate IT environment and there is no evidence that the threat actor gained access to our product environment or customer data.”

An update published by TeamViewer states that findings confirmed that the attack on its infrastructure was limited to its internal corporate IT environment and did not affect the product environment, connectivity platform, or any customer data.

Ars Technica reporter Dan Goodin noted that an alert issued by security firm NCC Group describes a “significant compromise of the TeamViewer remote access and support platform by an APT group.”

In May 2019, the German newspaper Der Spiegel revealed that the German software company behind TeamViewer was compromised in 2016 by Chinese hackers.

According to the media outlet, Chinese state-sponsored hackers used the Winnti trojan malware to infect the company’s systems.

The Winnti group was first spotted by Kaspersky in 2013, according to the researchers, the nation-state actor has been active since at least 2007.

The gang is financially motivated and was mostly involved in cyber espionage campaigns. The hackers were known for targeting companies in the online gaming industry; the majority of the victims are located in Southeast Asia.

The Winnti cyberespionage group is known for its ability to target the supply chains of legitimate software to spread malware.

According to the company, it was targeted by the hackers in autumn 2016, when its experts detected suspicious activities and quickly blocked them to prevent major damage.

A TeamViewer spokesperson revealed that the company investigated the intrusion attempts but did not find any evidence of exposure of customer or other sensitive data.

Der Spiegel pointed out that TeamViewer did not disclose the security breach to the public.

“In autumn 2016, TeamViewer was target of a cyber-attack. Our systems detected the suspicious activities in time to prevent any major damage. An expert team of internal and external cyber security researchers, working together closely with the responsible authorities, successfully fended off the attack and with all available means of IT forensics found no evidence that customer data or other sensitive information had been stolen, that customer computer systems had been infected or that the TeamViewer source code had been manipulated, stolen or misused in any other way.” said company spokesman.

“Out of an abundance of caution, TeamViewer conducted a comprehensive audit of its security architecture and IT infrastructure subsequently and further strengthened it with appropriate measures.”

At the time the company published a statement to exclude it was breached by hackers:

“Göppingen/Germany, May 23, 2016. A recent article warns, “TeamViewer users have had their bank accounts emptied by hackers gaining full-system access”. TeamViewer is appalled by any criminal activity; however, the source of the problem, according to our research, is careless use, not a potential security breach on TeamViewer’s side.” wrote the company.

Only in 2019 did the company admit it had been breached in 2016.


China-linked APT exploited Cisco NX-OS zero-day to deploy custom malware

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

Cisco fixed an actively exploited NX-OS zero-day, the flaw was exploited to install previously unknown malware as root on vulnerable switches.

Cisco addressed an NX-OS zero-day, tracked as CVE-2024-20399 (CVSS score of 6.0), that the China-linked group Velvet Ant exploited to deploy previously unknown malware as root on vulnerable switches.

The flaw resides in the CLI of Cisco NX-OS Software; an authenticated, local attacker can exploit it to execute arbitrary commands as root on the underlying operating system of an affected device.

“This vulnerability is due to insufficient validation of arguments that are passed to specific configuration CLI commands. An attacker could exploit this vulnerability by including crafted input as the argument of an affected configuration CLI command.” reads the advisory published by Cisco. “A successful exploit could allow the attacker to execute arbitrary commands on the underlying operating system with the privileges of root.”

The IT giant pointed out that only attackers with Administrator credentials can successfully exploit this vulnerability on a Cisco NX-OS device.
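Cisco has not published the vulnerable code, so as a generic illustration of the root cause the advisory describes (insufficient validation of arguments passed to CLI commands), a minimal allow-list validator might look like the following. This is hypothetical, not Cisco's implementation:

```python
import re

# Allow only plain token characters in a hypothetical CLI argument:
# no spaces and no shell metacharacters such as ';', '|', '&' or backticks.
SAFE_ARG = re.compile(r"[A-Za-z0-9._/:-]+")

def validate_cli_arg(arg):
    """Return the argument unchanged if it is a plain token; raise otherwise.
    Passing an unvalidated argument into a root shell is the injection
    pattern the advisory attributes to CVE-2024-20399."""
    if not SAFE_ARG.fullmatch(arg):
        raise ValueError(f"rejected argument: {arg!r}")
    return arg
```

Allow-listing the characters an argument may contain, rather than trying to deny-list dangerous ones, is the usual way to close this class of command-injection bug.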

In April 2024, researchers reported to the Cisco Product Security Incident Response Team (PSIRT) that the issue was actively exploited in the wild.

Cybersecurity firm Sygnia observed the attacks in April 2024 and reported them to Cisco.

“Sygnia identified that CVE-2024-20399 was exploited in the wild by a China-nexus threat group as a ‘zero-day’ and shared the details of the vulnerability with Cisco. By exploiting this vulnerability, a threat group – dubbed ‘Velvet Ant’ – successfully executed commands on the underlying operating system of Cisco Nexus devices.” reads the report published by Sygnia. “This exploitation led to the execution of a previously unknown custom malware that allowed the threat group to remotely connect to compromised Cisco Nexus devices, upload additional files, and execute code on the devices.“

The vulnerability impacts the following devices:

  • MDS 9000 Series Multilayer Switches (CSCwj97007)
  • Nexus 3000 Series Switches (CSCwj97009)
  • Nexus 5500 Platform Switches (CSCwj97011)
  • Nexus 5600 Platform Switches (CSCwj97011)
  • Nexus 6000 Series Switches (CSCwj97011)
  • Nexus 7000 Series Switches (CSCwj94682)
  • Nexus 9000 Series Switches in standalone NX-OS mode (CSCwj97009)

Cisco recommends customers monitor the use of credentials for the administrative users network-admin and vdc-admin.
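A minimal sketch of that monitoring recommendation follows; the syslog message format varies by platform and is assumed here for illustration:

```python
# Accounts Cisco's advisory recommends watching on NX-OS devices.
WATCHED_ACCOUNTS = ("network-admin", "vdc-admin")

def flag_admin_logins(syslog_lines):
    """Return syslog lines that record a login involving one of the
    watched administrative accounts (message format is assumed)."""
    hits = []
    for line in syslog_lines:
        lower = line.lower()
        if "login" in lower and any(acct in lower for acct in WATCHED_ACCOUNTS):
            hits.append(line)
    return hits
```

In production this would feed a SIEM rule that alerts on logins from unexpected source addresses or outside change windows.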

Cisco provides the Cisco Software Checker to help customers determine if their devices are vulnerable to this flaw.

In late 2023, Sygnia researchers responded to an incident suffered by a large organization that they attributed to the same China-linked threat actor ‘Velvet Ant.’

The cyberspies deployed custom malware on F5 BIG-IP appliances to gain persistent access to the internal network of the target organization and steal sensitive data.


Multiple cybersecurity agencies warn of China-linked APT40’s capabilities

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

Multiple cybersecurity agencies released a joint advisory warning about China-linked group APT40’s capability to rapidly exploit disclosed security flaws.

Cybersecurity agencies from Australia, Canada, Germany, Japan, New Zealand, South Korea, the U.K., and the U.S. released a joint advisory warning about the China-linked group APT40 (aka TEMP.Periscope, TEMP.Jumper, Bronze Mohawk, Gingham Typhoon, ISLANDDREAMS, Kryptonite Panda, Red Ladon, TA423, and Leviathan) and its capability to rapidly exploit disclosed flaws.

The China-linked group was able to exploit vulnerabilities within hours or days of the public disclosure.

APT40 has previously targeted organizations in countries like Australia and the United States. The group is able to rapidly adapt vulnerability proofs of concept (POCs) for their operations. They identify new exploits in widely used public software, such as Log4J, Atlassian Confluence, and Microsoft Exchange, to target the associated infrastructure.

“APT 40 has previously targeted organizations in various countries, including Australia and the United States. Notably, APT 40 possesses the ability to quickly transform and adapt vulnerability proofs of concept (POCs) for targeting, reconnaissance, and exploitation operations.” reads the advisory. “APT 40 identifies new exploits within widely used public software such as Log4J, Atlassian Confluence and Microsoft Exchange to target the infrastructure of the associated vulnerability.“

In July 2021, the U.S. Justice Department (DoJ) indicted four members of the cyber espionage group APT40 (aka TEMP.Periscope, TEMP.Jumper, and Leviathan) for hacking tens of government organizations, private businesses and universities around the world between 2011 and 2018.

The APT40 group has been active since at least 2013 and focuses on targeting countries important to China’s Belt and Road Initiative (i.e., Cambodia, Belgium, Germany, Hong Kong, the Philippines, Malaysia, Norway, Saudi Arabia, Switzerland, the United States, and the United Kingdom).

The group appears to be focused on supporting the naval modernization efforts of the government in Beijing. Threat actors target the engineering, transportation, and defense sectors; experts observed a specific interest in maritime technologies.

The cyberspies also targeted research centres and universities involved in naval research with the intent to access advanced technology to push the growth of the Chinese naval industry. The list of victims of the APT40 group also includes organizations with operations in Southeast Asia or involved in South China Sea disputes.

Three of the defendants are said to be officers in a provincial arm of the Ministry of State Security (MSS), and one was an employee of a front company that was used to obfuscate the government’s role in the hacking campaigns.

“APT40 regularly conducts reconnaissance against networks of interest, including networks in the authoring agencies’ countries, looking for opportunities to compromise its targets. This regular reconnaissance postures the group to identify vulnerable, end-of-life or no longer maintained devices on networks of interest, and to rapidly deploy exploits. APT40 continues to find success exploiting vulnerabilities from as early as 2017.” continues the joint advisory. “APT40 rapidly exploits newly public vulnerabilities in widely used software such as Log4J (CVE-2021-44228), Atlassian Confluence (CVE-2021-31207, CVE-2021-26084) and Microsoft Exchange (CVE-2021-31207, CVE-2021-34523, CVE-2021-34473). ASD’s ACSC and the authoring agencies expect the group to continue using POCs for new high-profile vulnerabilities within hours or days of public release.”
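The CVEs named in the advisory can be turned into a quick exposure check against a software inventory. A minimal sketch, where the substring matching and product keys are assumptions for illustration:

```python
# CVEs the joint advisory lists as rapidly weaponized by APT40.
KNOWN_EXPLOITED = {
    "log4j": ("CVE-2021-44228",),
    "confluence": ("CVE-2021-31207", "CVE-2021-26084"),
    "exchange": ("CVE-2021-31207", "CVE-2021-34523", "CVE-2021-34473"),
}

def exposure_report(inventory):
    """Map internet-facing software names to the advisory-listed CVEs
    they may be exposed to, using simple substring matching."""
    report = {}
    for item in inventory:
        for product, cves in KNOWN_EXPLOITED.items():
            if product in item.lower():
                report[item] = cves
    return report
```

Given the advisory's warning that POCs are weaponized within hours or days of release, this kind of cross-check is most useful as a recurring job rather than a one-off audit.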

APT40 China

APT40 prefers exploiting vulnerable public-facing infrastructure over other hacking techniques such as phishing, and prioritizes obtaining valid credentials for subsequent activities. The group often relies on web shells to maintain persistence. Persistence is established early in an intrusion, making it likely to be observed in all cases, regardless of the level of compromise or further actions taken.

In the past, APT40 was observed using compromised Australian websites as C2 servers; however, it has recently evolved this technique.

“APT40 has embraced the global trend of using compromised devices, including small-office/home-office (SOHO) devices, as operational infrastructure and last-hop redirectors [T1584.008] for its operations in Australia. This has enabled the authoring agencies to better characterize and track this group’s movements.” continues the report.

Many of the compromised SOHO devices are end-of-life or unpatched and can be easily hacked using N-day exploits. Compromised SOHO (small office/home office) devices provide attackers with a platform from which to launch attacks while mimicking legitimate traffic.

The report provides details about the tactics, techniques, and procedures (TTPs) associated with the group, along with detection and mitigation recommendations.


Void Banshee exploits CVE-2024-38112 zero-day to spread malware

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

Void Banshee APT group exploited the Windows zero-day CVE-2024-38112 to execute code via the disabled Internet Explorer.

An APT group tracked as Void Banshee was spotted exploiting the Windows zero-day CVE-2024-38112 (CVSS score of 7.5) to execute code through the disabled Internet Explorer.

The vulnerability is a Windows MSHTML Platform Spoofing Vulnerability. Successful exploitation of this vulnerability requires an attacker to take additional actions before exploitation to prepare the target environment. An attacker can trigger the issue by sending the victim a malicious file that the victim would have to execute.

Trend Micro researchers discovered that the flaw was actively exploited in the wild in May and reported it to Microsoft which addressed the zero-day with the July 2024 Patch Tuesday security updates.

Void Banshee was observed exploiting the CVE-2024-38112 flaw to drop the Atlantida info-stealer on the victims’ machines. The malware allows operators to gather system information and steal sensitive data, such as passwords and cookies, from multiple applications.

In the group’s attack chain, Void Banshee attempts to trick victims into opening zip archives containing malicious files disguised as book PDFs. The archives are disseminated via cloud-sharing websites, Discord servers, online libraries, and other means. The APT group focuses on North America, Europe, and Southeast Asia.

“This zero-day attack is a prime example of how unsupported Windows relics are an overlooked attack surface that can still be exploited by threat actors to infect unsuspecting users with ransomware, backdoors, or as a conduit for other kinds of malware.” states Trend Micro.

Void Banshee exploited the disabled Internet Explorer process to run HTML Application (HTA) files using specially crafted .URL files with the MHTML protocol handler and the x-usc! directive. This technique resembles the exploitation of CVE-2021-40444, another MSHTML flaw that was exploited in zero-day attacks. The experts warn that this attack method is very concerning because Internet Explorer no longer receives updates or security fixes.

“In this attack, CVE-2024-38112 was used as a zero-day to redirect a victim by opening and using the system-disabled IE to a compromised website which hosted a malicious HTML Application (HTA)” states the report. “In the URL parameter of the internet shortcut file, we can see that Void Banshee specifically crafted this URL string using the MHTML protocol handler along with the x-usc! directive. This logic string opens the URL target in the native Internet Explorer through the iexplore.exe process.”
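Based on the pattern Trend Micro describes, a defender could scan .url attachments for the MHTML protocol handler chained with the x-usc! directive. The exact string format below follows the publicly documented CVE-2021-40444-style syntax and is an assumption:

```python
def is_suspicious_internet_shortcut(url_file_text):
    """Flag internet shortcut (.url) files whose URL= line chains the
    MHTML handler with the x-usc! directive, the pattern described
    for the CVE-2024-38112 attack chain."""
    for line in url_file_text.splitlines():
        if line.lower().startswith("url="):
            target = line[4:].strip().lower()
            # 'mhtml:...!x-usc:...' forces the target to open in the
            # system-disabled Internet Explorer.
            if target.startswith("mhtml:") and "!x-usc:" in target:
                return True
    return False
```

Mail gateways rarely have a reason to pass .url attachments at all, so blocking the file type outright is a simpler mitigation where feasible.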

Void Banshee APT CVE-2024-38112

Attackers used the internet shortcut file to direct victims to an attacker-controlled domain where an HTML file downloads the HTA stage of the infection chain. The researchers noticed that Void Banshee uses this HTML file to control the window view size of Internet Explorer, hiding browser information and concealing the download of the next infection stage from the victim.

By default, IE prompts users to open or save the HTML application, but the APT group disguised the HTA file as a PDF by adding spaces to the file extension. Upon running the HTA file, a series of scripts is executed, along with the LoadToBadXml .NET trojan loader, the Donut shellcode, and the Atlantida stealer.
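The space-padded double extension also lends itself to a simple filename heuristic. A hedged sketch, where the padding threshold is an arbitrary assumption:

```python
def is_disguised_hta(filename, min_padding=8):
    """Flag names like 'book.pdf        .hta' where whitespace padding
    pushes the real .hta extension out of view in file dialogs."""
    if not filename.lower().endswith(".hta"):
        return False
    stem = filename[:-4]              # drop the '.hta' suffix
    stripped = stem.rstrip(" ")
    padding = len(stem) - len(stripped)
    # Suspicious only when heavy padding hides a document-style extension.
    return padding >= min_padding and stripped.lower().endswith(".pdf")
```

A production rule would generalize the hidden extension list beyond .pdf and also cover other padding characters, but the padded-gap signal is the core of the detection.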

“In this campaign, we have observed that even though users may no longer be able to access IE, threat actors can still exploit lingering Windows relics like IE on their machine to infect users and organizations with ransomware, backdoors, or as a proxy to execute other strains of malware.” Trend Micro concludes. “The ability of APT groups like Void Banshee to exploit disabled services such as IE poses a significant threat to organizations worldwide. Since services such as IE have a large attack surface and no longer receive patches, it represents a serious security concern to Windows users.”

Posted in Cyber Attacks. Tagged: Cyber Attacks, Data Security, malware, Ransomware, Spyware, vulnerability.

China-linked APT group uses new Macma macOS backdoor version

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

China-linked APT group Daggerfly (aka Evasive Panda, Bronze Highland) has been spotted using an updated version of the macOS backdoor Macma.

The China-linked APT group Daggerfly (aka Evasive Panda or Bronze Highland) has significantly updated its malware arsenal, adding a new malware family based on the MgBot framework and an updated Macma macOS backdoor.

“The Daggerfly (aka Evasive Panda, Bronze Highland) espionage group has extensively updated its toolset, introducing several new versions of its malware, most likely in response to exposure of older variants.” reads the report. “The new tooling was deployed in a number of recent attacks against organizations in Taiwan and a U.S. NGO based in China, which indicates the group also engages in internal espionage. In the attack on this organization, the attackers exploited a vulnerability in an Apache HTTP server to deliver their MgBot malware.“

The APT group was spotted using the malware families in attacks against Taiwanese organizations and a U.S. NGO in China. The attackers exploited an Apache HTTP server vulnerability to deliver their MgBot malware.

Daggerfly has been active for at least a decade, the group is known for the use of the custom MgBot malware framework. In 2023, Symantec identified a Daggerfly intrusion at an African telecom operator, using new MgBot plugins. This highlights the group’s ongoing evolution in cyber espionage tactics.

The Macma macOS backdoor was first detailed by Google in 2021 and has been used since at least 2019. At the time of discovery, threat actors employed the malware in watering hole attacks involving compromised websites in Hong Kong. The watering hole attacks used exploits for iOS and macOS devices. Attackers exploited the privilege escalation vulnerability CVE-2021-30869 to install Macma on macOS devices.

Macma is a modular backdoor that supports multiple functionalities, including device fingerprinting, executing commands, screen capture, keylogging, audio capture, uploading and downloading files.

Although Macma was widely used in cyber operations carried out by nation-state actors, it was not linked to a particular group. However, Symantec has found evidence to suggest that it is part of the Daggerfly toolkit: two variants of the Macma backdoor connected to a C2 server (103.243.212[.]98) that was also used by an MgBot dropper.

In addition to this shared infrastructure, Macma and other malware in Daggerfly’s arsenal, including MgBot, all contain code from a single, shared library or framework. Elements of this library have been used to build Windows, macOS, Linux, and Android threats. The functionality provided by this library includes:

  • Threading and synchronization primitives
  • Event notifications and timers
  • Data marshaling
  • Platform-independent abstractions (e.g. time)

The new variants used by Daggerfly implement the following additions/improvements:

  • New logic to collect a file system listing, with the new code based on Tree, a publicly available Linux/Unix utility
  • Modified code in the AudioRecorderHelper feature
  • Additional parametrisation
  • Additional debug logging
  • Addition of a new file (param2.ini) to set options to adjust screenshot size and aspect ratio

The experts also observed another malware, tracked as Suzafk (aka ‘NetMM’, Nightdoor), in the group toolkit that ESET researchers linked to Evasive Panda in March.

“Suzafk is a multi-staged backdoor capable of using TCP or OneDrive for C&C. The malware contained the following configuration, indicating the functionality to connect to OneDrive is in development or present in other variants of the malware.” continues the report.

The backdoor includes code from the al-khaser project, a public code repository developed to evade analysis by detecting virtual machines, sandboxes, and malware analysis environments.

The malware can also execute commands for network and system monitoring, such as ‘ipconfig,’ ‘systeminfo,’ ‘tasklist,’ and ‘netstat.’
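Defenders can hunt for this discovery behavior generically. A hedged sketch that flags a burst of distinct reconnaissance commands within a short window; the window and threshold are arbitrary choices, not values from the report:

```python
# Discovery commands named in the report on Suzafk
RECON = {"ipconfig", "systeminfo", "tasklist", "netstat"}

def recon_burst(events, window_s=300, min_distinct=3):
    """events: iterable of (timestamp_seconds, command_name) pairs."""
    hits = sorted((t, c) for t, c in events if c.lower() in RECON)
    for i, (t0, _) in enumerate(hits):
        # Distinct recon commands seen within window_s of this event
        seen = {c for t, c in hits[i:] if t - t0 <= window_s}
        if len(seen) >= min_distinct:
            return True
    return False

print(recon_burst([(0, "ipconfig"), (30, "tasklist"), (60, "netstat")]))  # True
```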

“The [Daggerfly] group can create versions of its tools targeting most major operating system platforms.” concludes the report. “In addition to the tools documented here, Symantec has seen evidence of the ability to Trojanize Android APKs, SMS interception tools, DNS request interception tools, and even malware families targeting Solaris OS. Daggerfly appears to be capable of responding to exposure by quickly updating its toolset to continue its espionage activities with minimal disruption.”


Belarus-linked APT Ghostwriter targeted Ukraine with PicassoLoader malware

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

Belarus-linked APT group GhostWriter targeted Ukrainian organizations with a malware family known as PicassoLoader, used to deliver various malicious payloads.

The Ukrainian Government’s Computer Emergency Response Team (CERT-UA) reported a surge in activity associated with the APT group UAC-0057 (aka GhostWriter) between July 12 and 18, 2024. Threat actors distributed documents containing macros designed to deploy the PICASSOLOADER malware on victim computers, which then delivered the post-exploitation tool Cobalt Strike Beacon.

The attackers used bait documents related to local government reform (USAID/DAI “HOVERLA” project), taxation, and financial-economic metrics (“oborona.rar,” “66_oborona_PURGED.xls,” “trix.xls,” “equipment_survey_regions_.xls,” “accounts.xls,” “spreadsheet.xls,” “attachment.xls,” “Податок_2024.xls”).

“Based on this, it can be inferred that UAC-0057 might have targeted both project office specialists and their counterparts among the employees of relevant local government bodies in Ukraine.” reads the report published by CERT-UA.

Ghostwriter

The campaign was likely part of a broader cyber espionage activity against the Ukrainian government.

In November 2021, Mandiant Threat Intelligence researchers linked the Ghostwriter disinformation campaign (aka UNC1151) to the government of Belarus.

In August 2020, security experts from FireEye uncovered a disinformation campaign aimed at discrediting NATO by spreading fake news content on compromised news websites.

According to FireEye, the campaign tracked as GhostWriter, has been ongoing since at least March 2017 and is aligned with Russian security interests.

Unlike other disinformation campaigns, GhostWriter doesn’t spread through social networks, instead, threat actors behind this campaign abused compromised content management systems (CMS) of news websites or spoofed email accounts to disseminate fake news.

The operators behind Ghostwriter targeted Belarusian entities before the 2020 elections, some of the individuals (representatives of the Belarusian opposition) targeted by the nation-state actor were later arrested by the Belarusian government.


China-linked APT41 breached Taiwanese research institute

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

China-linked group APT41 breached a Taiwanese government-affiliated research institute using ShadowPad and Cobalt Strike.

Cisco Talos researchers reported that the China-linked group compromised a Taiwanese government-affiliated research institute. The experts attributed the attack with medium confidence to the APT41 group.

The campaign started as early as July 2023 and threat actors delivered the ShadowPad malware, Cobalt Strike, and other post-exploitation tools.

The ShadowPad sample employed in this campaign used an outdated, vulnerable version of the Microsoft Office IME binary as a loader, which in turn loads a customized second-stage loader to launch the payload.

“Cisco Talos assesses with medium confidence that this campaign is carried out by APT41, alleged by the U.S. government to be comprised of Chinese nationals. This assessment is based primarily on overlaps in tactics, techniques and procedures (TTPs), infrastructure and malware families used exclusively by Chinese APT groups.” reads the report published by Cisco Talos. “Talos’ analyses of the malware loaders used in this attack reveal that these are ShadowPad loaders. However, Talos has been unable to retrieve the final ShadowPad payloads used by the attackers.”

ShadowPad is a modular remote access trojan (RAT) sold exclusively to Chinese hacking groups. It has been publicly linked to APT41, a group believed to operate from Chengdu, China, and has also been used by other Chinese groups such as Mustang Panda and the Tonto Team.

The researchers were not able to determine the initial attack vector. The attackers compromised three hosts in the targeted environment and exfiltrated some documents from the network. 

Attackers used a web shell to maintain persistence and drop additional payloads like ShadowPad and Cobalt Strike.

The attackers used a unique Cobalt Strike loader written in GoLang to bypass Windows Defender’s detection. This loader, derived from an anti-AV tool called CS-Avoid-Killing found on GitHub and written in Simplified Chinese, is promoted in various Chinese hacking forums and tutorials. The presence of Simplified Chinese file and directory paths suggests that the threat actors who created the loader are proficient in the language.

APT41

Attackers were also observed running PowerShell commands to execute scripts used to run the ShadowPad malware directly in memory and fetch Cobalt Strike malware from C2 server.

“During our investigation of this campaign, we encountered two distinct iterations of ShadowPad. While both iterations utilized the same sideloading technique, they each exploited different vulnerable legitimate binaries to initiate the ShadowPad loader.” continues the report. “The initial variant of the ShadowPad loader had been previously discussed in 2020, and some vendors had referred to it as ‘ScatterBee’. Its technical structure and the names of its multiple components have remained consistent with earlier reports. The more recent variant of the ShadowPad loader targeted an outdated and susceptible version of the Microsoft Office IME imecmnt.exe binary, which is over 13 years old.”

Talos also discovered that APT41 created a custom loader to inject a proof-of-concept for CVE-2018-0824 directly into memory. The threat actors used a remote code execution vulnerability to achieve local privilege escalation.

“During the compromise the threat actor attempts to exploit CVE-2018-0824, with a tool called UnmarshalPwn, which we will detail in the sections below.” continues the report. “The malicious actor is careful, in an attempt to avoid detection, during its activity executes “quser” which, when using RDP allows it to see who else is logged on the system. Hence the actor can stop its activity if any other user is on the system. Cisco Talos also noticed that once the backdoors are deployed the malicious actor will delete the webshell and guest account that allowed the initial access.”

By analyzing artifacts from this campaign, the researchers identified samples and infrastructure potentially used by the same threat actors in different campaigns. Sharing these findings could help the community to make connections and enhance further investigations.

Talos released Indicators of Compromise for this campaign on their GitHub repository.


Chinese StormBamboo APT compromised ISP to deliver malware

Posted on August 11, 2024 - August 11, 2024 by Maq Verma

A China-linked APT, tracked as StormBamboo, compromised an internet service provider (ISP) to poison software update mechanisms with malware.

Volexity researchers reported that a China-linked APT group, tracked as StormBamboo (aka Evasive Panda, Daggerfly, and StormCloud), successfully compromised an undisclosed internet service provider (ISP) in order to poison DNS responses for target organizations.

The threat actors targeted insecure software update mechanisms to install malware on macOS and Windows victim machines.

In mid-2023, Volexity discovered multiple malware infections affecting macOS and Windows systems within victim organizations. The company linked the attacks to StormBamboo APT group. Upon investigating the incidents, the researchers determined that a DNS poisoning attack at the ISP level caused the infection. The attackers altered DNS responses for domains related to software updates to deploy multiple malware families, including MACMA and POCOSTICK (MGBot). The attacker’s methods resemble those of DriftingBamboo, suggesting a possible connection between the two threat actors.

Daggerfly has been active for at least a decade, the group is known for the use of the custom MgBot malware framework. In 2023, Symantec identified a Daggerfly intrusion at an African telecom operator, using new MgBot plugins. This highlights the group’s ongoing evolution in cyber espionage tactics.

The Macma macOS backdoor was first detailed by Google in 2021 and has been used since at least 2019. At the time of discovery, threat actors employed the malware in watering hole attacks involving compromised websites in Hong Kong. The watering hole attacks used exploits for iOS and macOS devices. Attackers exploited the privilege escalation vulnerability CVE-2021-30869 to install Macma on macOS devices.

Macma is a modular backdoor that supports multiple functionalities, including device fingerprinting, executing commands, screen capture, keylogging, audio capture, uploading and downloading files.

Although Macma was widely used in cyber operations carried out by nation-state actors, it was not linked to a particular group.

“During one incident investigated by Volexity, it was discovered that StormBamboo poisoned DNS requests to deploy malware via an HTTP automatic update mechanism and poison responses for legitimate hostnames that were used as second-stage, command-and-control (C2) servers.” reads the report published by Volexity. “The DNS records were poisoned to resolve to an attacker-controlled server in Hong Kong at IP address 103.96.130[.]107. Initially, Volexity suspected the initial victim organization’s firewall may have been compromised. However, further investigation revealed the DNS poisoning was not performed within the target infrastructure, but further upstream at the ISP level.”

Volexity promptly alerted the ISP, which then investigated key traffic-routing devices on its network. After rebooting and taking parts of the network offline, the DNS poisoning stopped. The researchers were not able to identify a specific compromised device; however, updating or deactivating various infrastructure components effectively ended the malicious activity.

“The logic behind the abuse of automatic updates is the same for all the applications: the legitimate application performs an HTTP request to retrieve a text-based file (the format varies) containing the latest application version and a link to the installer.” continues the report. “Since the attacker has control of the DNS responses for any given DNS name, they abuse this design, redirecting the HTTP request to a C2 server they control hosting a forged text file and a malicious installer. The AiTM workflow is shown below.”
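The AiTM workflow quoted above works only because the update check rides plain HTTP. A sketch of the defensive counterpart, using a hypothetical manifest URL: with certificate and hostname verification, poisoned DNS alone is not enough, because the attacker's server cannot complete a TLS handshake for the expected hostname.

```python
import ssl
import urllib.request

# Hypothetical update-manifest URL; the point is the scheme and verification,
# not the specific host.
MANIFEST_URL = "https://updates.example.com/latest.txt"

def fetch_manifest() -> str:
    # create_default_context() enforces certificate-chain and hostname checks,
    # so a DNS-poisoned resolution to an attacker IP fails the handshake.
    ctx = ssl.create_default_context()
    with urllib.request.urlopen(MANIFEST_URL, context=ctx, timeout=10) as resp:
        return resp.read().decode("utf-8", errors="replace")
```

Pinning the expected certificate or signing the manifest itself would harden this further, but plain-HTTP update checks are defenseless against an upstream DNS attacker.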

StormBamboo

StormBamboo targeted various software vendors with insecure update mechanisms, using complex methods to deploy malware. For example, they abused 5KPlayer’s update process for its “youtube-dl” dependency to deliver a backdoored installer from their C2 servers. Once systems were compromised, the attackers installed a malicious Google Chrome extension called ReloadText to steal browser cookies and email data.

“The incident described in this blog post confirms the supposition made by ESET concerning the infection vector for the POCOSTICK malware. The attacker can intercept DNS requests and poison them with malicious IP addresses, and then use this technique to abuse automatic update mechanisms that use HTTP rather than HTTPS.” concludes the report. “This method is similar to the attack vector Volexity previously observed being used by DriftingBamboo following the 0-day exploitation of Sophos Firewalls.”


North Korea Kimsuky Launch Phishing Attacks on Universities

Posted on August 8, 2024 - August 8, 2024 by Maq Verma

Cybersecurity analysts have uncovered critical details about the North Korean advanced persistent threat (APT) group Kimsuky, which has been targeting universities as part of its global espionage operations. 

Kimsuky, active since at least 2012, primarily targets South Korean think tanks and government entities, though its reach extends to the US, the UK and other European nations. The group specializes in sophisticated phishing campaigns, often posing as academics or journalists to infiltrate networks and steal sensitive information.

Recent Findings and Tactics

According to a new advisory published by Resilience today, its analysts capitalized on Kimsuky’s operational security mistakes, which led to the collection of source code, login credentials and other crucial data. 

The data revealed that Kimsuky has been phishing university staff, researchers and professors, aiming to access and exfiltrate valuable research and intelligence. Once inside university networks, the group was observed stealing information critical for North Korea, particularly given the country’s limited scientific community.

The group’s actions align with the objectives of the Reconnaissance General Bureau (RGB), North Korea’s primary foreign intelligence agency. 

Historically, Kimsuky has been linked to attempts to steal sensitive data, including nuclear research, healthcare innovations and pharmaceutical secrets. There is also evidence suggesting that Kimsuky engages in financially motivated cybercrime, potentially as a means to fund its espionage activities.

Resilience’s new findings shed light on Kimsuky’s methods, particularly its use of phishing pages that mimic legitimate university login portals. By altering the code of these pages, Kimsuky can capture the credentials of unsuspecting victims. Notably, the group has targeted institutions such as Dongduk University, Korea University and Yonsei University.

The operation also highlighted Kimsuky’s use of a custom tool called “SendMail,” which was deployed to send phishing emails using compromised email accounts. These emails were carefully crafted to deceive recipients into providing their login information, furthering Kimsuky’s espionage efforts. 

According to Resilience, the breadth and depth of Kimsuky’s tactics underscore the persistent and evolving threat posed by state-backed cyber groups. 

Recommendations for Organizations

To tackle this threat, the security firm recommended leveraging phish-resistant multifactor authentication (MFA), such as FIDO-compliant hardware tokens or push-based mobile applications. 

Additionally, users should always double-check that the URL they are logging into matches the page they expect to be on, as some password managers can assist with this automatically. 
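That advice can also be automated on the client side. A minimal sketch with hypothetical allow-listed portal hostnames; a real deployment would maintain this list centrally:

```python
from urllib.parse import urlsplit

# Hypothetical allow-list of legitimate login portals
TRUSTED_PORTALS = {"login.example.edu", "sso.example.edu"}

def is_trusted_login_page(url: str) -> bool:
    # Exact hostname match only; subdomain lookalikes fail the check
    host = urlsplit(url).hostname or ""
    return host.lower() in TRUSTED_PORTALS

print(is_trusted_login_page("https://sso.example.edu/login"))               # True
print(is_trusted_login_page("https://sso.example-edu.attacker.net/login"))  # False
```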

Finally, organizations are encouraged to review and test Breach and Attack Simulation packages that simulate Kimsuky activity to better prepare for potential attacks.


New CMoon USB worm targets Russians in data theft attacks

Posted on August 8, 2024 - August 8, 2024 by Maq Verma

A new self-spreading worm named ‘CMoon,’ capable of stealing account credentials and other data, has been distributed in Russia since early July 2024 via a compromised gas supply company website.

According to Kaspersky researchers who discovered the campaign, CMoon can perform a broad range of functions, including loading additional payloads, snapping screenshots, and launching distributed denial of service (DDoS) attacks.

Judging from the distribution channel the threat actors used, their targeting scope is focused on high-value targets rather than random internet users, which indicates a sophisticated operation. 

Distribution mechanism

Kaspersky says the infection chain begins when users click on links to regulatory documents (.docx, .xlsx, .rtf, and .pdf) found on various pages of a company’s website that provides gasification and gas supply services to a Russian city.

The threat actors replaced the document links with links to malicious executables, which were also hosted on the site and delivered to the victims as self-extracting archives containing the original document and the CMoon payload, named after the original link.

“We have not seen other vectors of distribution of this malware, so we believe that the attack is aimed only at visitors to the particular site,” reports Kaspersky.

After the gas firm was notified of this compromise, the malicious files and links were removed from its website on July 25, 2024.

However, due to CMoon’s self-propagation mechanisms, its distribution may continue autonomously.

CMoon is a .NET worm that copies itself to a newly created folder named after the antivirus software it detects on the compromised device, or to a folder resembling a system directory if no AV is detected.

The worm creates a shortcut in the Windows Startup directory to ensure it runs at system startup, maintaining persistence across reboots.

To avoid raising suspicions during manual user checks, it alters its files’ creation and modification dates to May 22, 2013.

The worm monitors for newly connected USB drives; when one is connected to the infected machine, it replaces all files except .LNK and .EXE files with shortcuts to its executable.

CMoon also looks for interesting files stored on the USB drives and temporarily stores them in hidden directories (‘.intelligence’ and ‘.usb’) before these are exfiltrated to the attacker’s server.
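These on-disk artifacts are straightforward to sweep for. A hedged detection sketch: the staging directory names come from the report, while the shortcut-count threshold is an arbitrary assumption.

```python
from pathlib import Path

# Hidden staging directories named in Kaspersky's analysis
STAGING_DIRS = {".intelligence", ".usb"}

def scan_usb(root: str) -> list:
    """Return a list of findings for a mounted drive at `root`."""
    findings = []
    root_path = Path(root)
    for d in STAGING_DIRS:
        if (root_path / d).is_dir():
            findings.append(f"staging directory present: {d}")
    # The worm replaces most files with shortcuts, so a root full of .lnk
    # files is suspicious; 10 is an arbitrary threshold for illustration.
    lnk_count = sum(1 for _ in root_path.glob("*.lnk"))
    if lnk_count > 10:
        findings.append(f"unusually many shortcuts in root: {lnk_count}")
    return findings
```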

CMoon features standard info-stealer functionality, targeting cryptocurrency wallets, data stored in web browsers, messenger apps, FTP and SSH clients, and document files in the USB or user folders that contain the text strings ‘secret,’ ‘service,’ or ‘password.’

An interesting and somewhat unusual feature is the targeting of files that might contain account credentials such as .pfx, .p12, .kdb, .kdbx, .lastpass, .psafe3, .pem, .key, .private, .asc, .gpg, .ovpn, and .log files.

Targeted directories and data (Source: Kaspersky)

The malware can also download and execute additional payloads, capture screenshots of the breached device, and initiate DDoS attacks on specified targets.

Stolen files and system information are packaged and sent to an external server, where they are decrypted (RC4) and verified for their integrity using an MD5 hash.

Generating the data package for exfiltration (Source: Kaspersky)

Kaspersky leaves open the possibility of more sites outside its current visibility distributing CMoon, so vigilance is advised.

No matter how targeted this campaign may be, the fact that the worm spreads autonomously means it could reach unintended systems and create the conditions for opportunistic attacks.


FBI: BlackSuit ransomware made over $500 million in ransom demands

Posted on August 8, 2024 - August 8, 2024 by Maq Verma

CISA and the FBI confirmed today that the Royal ransomware rebranded to BlackSuit and has demanded over $500 million from victims since it emerged more than two years ago.

This new information was shared as an update to a joint advisory published in March 2023, which says the BlackSuit gang has been active since September 2022.

However, this private group is believed to be a direct successor of the notorious Conti cybercrime syndicate and started as Quantum ransomware in January 2022.

While they initially used other gangs’ encryptors (like ALPHV/BlackCat), likely to avoid drawing unwanted attention, they deployed their own Zeon encryptor soon after and rebranded to Royal in September 2022.

After attacking the City of Dallas, Texas, in June 2023, the Royal ransomware operation began testing a new encryptor called BlackSuit amid rebranding rumors. Since then, they have been operating under the BlackSuit name, and Royal Ransomware attacks have stopped altogether.

“BlackSuit ransomware is the evolution of the ransomware previously identified as Royal ransomware, which was used from approximately September 2022 through June 2023. BlackSuit shares numerous coding similarities with Royal ransomware and has exhibited improved capabilities,” the FBI and CISA confirmed in a Wednesday update to their original advisory.

“Ransom demands have typically ranged from approximately $1 million to $10 million USD, with payment demanded in Bitcoin. BlackSuit actors have demanded over $500 million USD in total and the largest individual ransom demand was $60 million.”

In March 2023 and a subsequent November 2023 advisory update, the two agencies shared indicators of compromise and a list of tactics, techniques, and procedures (TTPs) to help defenders block the gang’s attempts to deploy ransomware on their networks.

CISA and the FBI also linked the BlackSuit gang to attacks against over 350 organizations since September 2022 and at least $275 million in ransom demands.

The joint advisory was first issued after the Department of Health and Human Services (HHS) security team revealed in December 2022 that the ransomware operation was behind multiple attacks targeting healthcare organizations across the United States.

Most recently, multiple sources told BleepingComputer that the BlackSuit ransomware gang was behind a massive CDK Global IT outage that disrupted operations at over 15,000 car dealerships across North America.

This widespread outage after last month’s attack forced CDK to shut down its IT systems and data centers to contain the incident and car dealerships to switch to pen and paper, making it impossible for buyers to purchase cars or receive service for already-bought vehicles.


Critical Progress WhatsUp RCE flaw now under active exploitation

Posted on August 8, 2024 - August 8, 2024 by Maq Verma

Threat actors are actively attempting to exploit a recently fixed Progress WhatsUp Gold remote code execution vulnerability on exposed servers to gain initial access to corporate networks.

The vulnerability leveraged in these attacks is CVE-2024-4885, a critical-severity (CVSS v3 score: 9.8) unauthenticated remote code execution flaw impacting Progress WhatsUp Gold 23.1.2 and older.

Proof-of-concept (PoC) exploits for CVE-2024-4885 are publicly available that target exposed WhatsUp Gold ‘/NmAPI/RecurringReport’ endpoints.

Threat monitoring organization Shadowserver Foundation reports that the attempts started on August 1, 2024, coming from six distinct IP addresses.


The CVE-2024-4885 RCE

Progress WhatsUp Gold is a network monitoring application that tracks the uptime and availability of servers and the services running on them. As with any internal management software, it should only be accessible internally, through a VPN, or from trusted IP addresses.

On June 25, 2024, Progress released a security bulletin warning about fifteen high and critical-severity bugs, including CVE-2024-4885, a 9.8-rated critical RCE flaw. Progress urged users to upgrade to the latest version, 23.1.3, to resolve the vulnerabilities.

CVE-2024-4885 is a remote code execution flaw in the ‘WhatsUp.ExportUtilities.Export.GetFileWithoutZip’ function, allowing unauthenticated attackers to execute commands with the privileges of the ‘iisapppool\\nmconsole’ user.

This is not an admin user but still has elevated permissions within the context of WhatsUp Gold. It can execute code on the server and even access the underlying system.

The vendor’s recommendations for those unable to upgrade to 23.1.3 were to monitor exploitation attempts at the ‘/NmAPI/RecurringReport’ endpoint and to implement firewall rules restricting access to trusted IP addresses on ports 9642 and 9643.

The flaw was discovered by security researcher Sina Kheirkhah, who published a detailed technical write-up on his blog, including a proof-of-concept exploit.

The exploit sends a ‘TestRecurringReport’ request to an exposed WhatsUp Gold reporting endpoint that contains a specially crafted configuration. This configuration includes the URL to an attacker-controlled web server and the user ID the targeted server should respond with.

When the targeted server responds to the attacker’s server, it will include the user name and encrypted password associated with the user ID.

Kheirkhah’s exploit uses this information to exchange further requests and responses with the targeted server, ultimately causing a file to be written to the server, which is then launched remotely for code execution, as illustrated below.

Exploit

As the final payload in the exploit is delivered from attacker-controlled servers, it is unknown at this time what payloads are being created on targeted servers. However, similar activity in the past created webshells on the targeted devices for easier access and persistence.

Given the active exploitation status, WhatsUp Gold admins should apply the latest security updates or mitigations and continue monitoring for suspicious activity.

The WhatsUp Gold server should also be placed behind a firewall and accessible only internally or by trusted IP addresses.
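Administrators can verify that restriction from an external network position. A minimal exposure-check sketch (a reachability probe, not an exploit); host and port arguments are placeholders, and 9642/9643 are the ports Progress advised restricting:

```python
import http.client

def endpoint_reachable(host: str, port: int = 9643) -> bool:
    """True if the WhatsUp Gold reporting endpoint answers at all."""
    try:
        conn = http.client.HTTPSConnection(host, port, timeout=5)
        conn.request("GET", "/NmAPI/RecurringReport")
        conn.getresponse()
        return True
    except OSError:
        # Connection refused, timed out, or blocked by a firewall
        return False
```

If this returns True from outside the trusted network, the firewall rules the vendor recommended are not in effect.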


INTERPOL recovers over $40 million stolen in a BEC attack

Posted on August 8, 2024 - August 8, 2024 by Maq Verma

A global stop-payment mechanism created by INTERPOL successfully recovered over $40 million stolen in a BEC attack on a company in Singapore.

INTERPOL says this is the largest recovery of funds stolen through a business email compromise (BEC) scam.

BEC scams are a type of cyberattack in which cybercriminals attempt to redirect legitimate corporate payments to an attacker-controlled bank account.

These attacks are conducted by threat actors who compromise a vendor’s or company’s email account to trick billing departments into approving new banking details for payments. When the threat actors receive a payment, they quickly use money mules to drain the account or transfer the funds to numerous other accounts under their control.

The 2023 FBI IC3 Report says that they received 21,489 complaints with $2.9 billion in reported losses due to business email compromise in 2023.

I-GRIP recovers over $40 million

According to an announcement by INTERPOL, a commodity firm based in Singapore fell victim to a BEC attack after receiving what they thought was an email from their supplier.

“On 15 July, the firm had received an email from a supplier requesting that a pending payment be sent to a new bank account based in Timor Leste,” reads the announcement.

“The email, however, came from a fraudulent account spelled slightly different to the supplier’s official email address.”

Thinking this was a legitimate request, the firm wired $42.3 million to the attacker-controlled bank accounts, only to realize four days later that they had fallen for an attack.
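A lookalike-domain check of the kind that could have flagged this email can be sketched in a few lines; the domains and similarity threshold here are hypothetical:

```python
from difflib import SequenceMatcher

def looks_like(known: str, candidate: str, threshold: float = 0.85) -> bool:
    """Flag sender domains that nearly, but not exactly, match a known supplier."""
    if candidate == known:
        return False  # exact match is the legitimate domain
    return SequenceMatcher(None, known, candidate).ratio() >= threshold

print(looks_like("acme-supplies.com", "acme-supplies.com"))  # False (exact)
print(looks_like("acme-supplies.com", "acme-suppiies.com"))  # True (lookalike)
```

Production systems typically combine this with homoglyph normalization and checks on newly registered domains, but even a simple edit-distance screen catches one-character swaps like the one in this case.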

After reporting the attack to the Singapore authorities, law enforcement used INTERPOL’s Global Rapid Intervention of Payments (I-GRIP) to request assistance from authorities in Timor Leste and recover $39 million from the BEC attack.

Further investigations by the Timor Leste authorities led to the arrest of seven suspects and the recovery of an additional $2 million, bringing the total recovered amount to $41 million.

In June, a global police operation called “Operation First Light” arrested 3,950 people for their suspected involvement in phishing, pig butchering scams, fake online shopping sites, and romance and impersonation scams.

I-GRIP was also used during the operation to recover millions of dollars stolen by the threat actors. Since its launch in 2022, it has been used to recover over $500 million stolen through fraud and cybercrime.

Posted in Cyber Attacks | Tagged: Cyber Attacks, Data Security, vulnerability

Windows Update downgrade attack “unpatches” fully-updated systems

Posted on August 8, 2024 - August 8, 2024 by Maq Verma

SafeBreach security researcher Alon Leviev revealed at Black Hat 2024 that two zero-days could be exploited in downgrade attacks to “unpatch” fully updated Windows 10, Windows 11, and Windows Server systems and reintroduce old vulnerabilities.

Microsoft issued advisories on the two unpatched zero-days (tracked as CVE-2024-38202 and CVE-2024-21302) in coordination with the Black Hat talk, providing mitigation advice until a fix is released.

In downgrade attacks, threat actors force an up-to-date target device to roll back to older software versions, reintroducing vulnerabilities that can be exploited to compromise the systems.

Leviev discovered that the Windows update process could be compromised to downgrade critical OS components, including dynamic link libraries (DLLs) and the NT kernel. Even though these components were then out of date, Windows Update still reported the system as fully updated, and recovery and scanning tools were unable to detect any issues.

By exploiting the zero-day vulnerabilities, he could also downgrade Credential Guard’s Secure Kernel and Isolated User Mode Process and Hyper-V’s hypervisor to expose past privilege escalation vulnerabilities.

“I discovered multiple ways to disable Windows virtualization-based security (VBS), including its features such as Credential Guard and Hypervisor-Protected Code integrity (HVCI), even when enforced with UEFI locks. To my knowledge, this is the first time VBS’s UEFI locks have been bypassed without physical access,” Leviev revealed.

“As a result, I was able to make a fully patched Windows machine susceptible to thousands of past vulnerabilities, turning fixed vulnerabilities into zero-days and making the term “fully patched” meaningless on any Windows machine in the world.”

As Leviev said, this downgrade attack is undetectable because it cannot be blocked by endpoint detection and response (EDR) solutions, and it’s also invisible since Windows Update reports that a device is fully updated (despite being downgraded).

No patches after six months

Leviev unveiled his “Windows Downdate” downgrade attack six months after reporting the vulnerabilities to Microsoft in February as part of a coordinated responsible disclosure process.

Microsoft said today that it’s still working on a fix for the Windows Update Stack Elevation of Privilege (CVE-2024-38202) and Windows Secure Kernel Mode Elevation of Privilege (CVE-2024-21302) vulnerabilities used by Leviev to elevate privileges, create malicious updates, and reintroduce security flaws by replacing Windows system files with older versions.

As the company explains, the CVE-2024-38202 Windows Backup privilege escalation vulnerability enables attackers with basic user privileges to “unpatch” previously mitigated security bugs or bypass Virtualization Based Security (VBS) features. Attackers with admin privileges can exploit the CVE-2024-21302 privilege escalation flaw to replace Windows system files with outdated and vulnerable versions.

Microsoft said it’s not currently aware of any attempts to exploit this vulnerability in the wild and advised implementing recommendations shared in two security advisories published today to help reduce the risk of exploitation until a security update is released.

“I was able to show how it was possible to make a fully patched Windows machine susceptible to thousands of past vulnerabilities, turning fixed vulnerabilities into zero-days and making the term ‘fully patched’ meaningless on any Windows machine in the world,” Leviev said.

“We believe the implications are significant not only to Microsoft Windows, which is the world’s most widely used desktop OS, but also to other OS vendors that may potentially be susceptible to downgrade attacks.”

Update August 07, 17:27 EDT: A Microsoft spokesperson sent the following statement after the story was published.

We appreciate the work of SafeBreach in identifying and responsibly reporting this vulnerability through a coordinated vulnerability disclosure. We are actively developing mitigations to protect against these risks while following an extensive process involving a thorough investigation, update development across all affected versions, and compatibility testing, to ensure maximized customer protection with minimized operational disruption.

Microsoft also told BleepingComputer that they are working on an update that will revoke outdated, unpatched Virtualization Based Security (VBS) system files to mitigate the attack. However, it will take time to test this update due to the large number of files that will be impacted. 

Posted in Cyber Attacks | Tagged: Cyber Attacks, Data Security, vulnerability

The 5 Worst Anti-Patterns in API Management

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

Imagine this: you are working at a company named DonutGPT as Head of Platform Engineering, and you sell millions of donuts online every year with AI-generated recipes. You need to make your critical services available to hundreds of resellers through secured APIs. Since nobody on earth wants to see their donut order fail, your management is putting on the pressure to ensure a highly available service.

Your current production environment consists mostly of VMs, but you are in the process of progressively migrating to a cloud native platform. Most of the production services you handle expose APIs, but your team has very little control and visibility over them. Each service is owned by a different developer team, and there is no consistency in languages, deployed artifacts, monitoring, observability, access control, encryption, etc.

Some services are Java-based, “secured” with old TLS 1.1 certificates, and sit behind a JWT access control policy. Others are Python-based, use TLS 1.3 certificates, and sit behind a custom-made access control policy. This inconsistency extends to the rest of the services in your production environment.

You (reasonably) think that this situation is far from ideal, and you plan to rationalize APIs at DonutGPT with the help of an API Management solution. Your requirements include:

  • Your APIs should be strongly governed with centralized and consistent security policies
  • You need advanced traffic management like rate limiting or canary releases
  • You need real-time observability and usage metrics on all public endpoints

Simply put, you want predictable operations, peace of mind, and better sleep.

It looks like your plan is right, and you are on track for better days (or nights). However, an API journey is long, and the road ahead is full of obstacles. Here are the top five worst anti-patterns you should avoid when you start your API odyssey.

Anti-Pattern 1: Monolith-Microservices

You are about to invest time, money, and effort in setting up an API management solution. In this process, you will centralize many aspects of your exposed services like traffic management, connectivity security, and observability. It’s easy to think, “The more centralized everything is, the more control I have, and the better I will sleep.” Why not use this API management solution to intercept every API call and transform the HTTP body to sanitize it from sensitive data (like private information)?

This would ensure that every API call is clean across the whole system. That’s true, but only in the short term.

Let’s fast forward three years. Your API management platform is now mature and manages hundreds of APIs across dozens of different teams. The initial quick win to sanitize the HTTP body within the API management workflow gradually became a white elephant:

  • The first quick patch inevitably evolved to meet more complex requirements and had to be adapted to every API. Your ten stylish lines of code quickly grew into an unmaintainable 5,000-line script.
  • No one wants to take ownership of this custom script now that it operates on many teams' APIs.
  • Every new version of an API may require updating and testing this piece of code, which lives in the API platform, separated from the services' code repositories.
  • This custom script is hard to test. If anything breaks, you will first learn of it from live traffic, and you will have a hard time debugging it.
  • Your API solution is highly resource-intensive. Parsing and transforming every HTTP body in your reverse proxy consumes most of the CPU allocated to your platform, leaving very little headroom for security work and making the approach very expensive.

In short, it’s best to avoid short-term decision-making. What seems like a good idea at the time may not hold up several years down the road. API management is designed to discover, secure, organize, and monitor your APIs. It should not be used as a shortcut to execute application code.

→ Separation of concern is critical when designing your next production platform.

Anti-Pattern 2: Cart Before the Horse

Another interesting anti-pattern is a laser focus on the long-term, possibly idealized, outcome without recognizing or understanding the steps to get there. Your API transformation project is so expensive that you want to ensure everything runs smoothly. So, you choose the most feature-rich API management solution to cover all possible future needs, despite being unable to take advantage of most of its capabilities today.

Sure, it’s more expensive, but it’s a safe bet if it prevents you from a potential migration in three years. This may seem risk-free, but you only see the tip of the API project iceberg.

Fast forward three years with this top-notch & expensive solution:

  • The transition from the legacy platform took way longer than expected.
  • This new solution required paid training sessions from the vendor for your team and many developers throughout the company.
  • You STILL have yet to use many features of the solution.
  • Many developer teams avoided adopting the new platform due to its complexity.
  • Your initial goal of controlling all API calls within the company has yet to be reached.
  • You still sleep poorly.

At this point, you acknowledge that the most complete (and complex) solution might not be the best option, so you bite the bullet and decide to migrate to a simpler solution that fits your existing needs. In your attempt to avoid an API management migration three years after starting your project, you ended up causing it anyway, only sooner than initially anticipated.

The point here is that while you should aim for your long-term vision (and choose a solution that aligns with it), address your needs today and strategically build towards that vision. This includes planning for progressive training and adoption by the teams. If the product cannot provide you with a progressive learning curve and deployment journey, then you won’t be able to stick to your plan.

Here is an example of a progressive journey with the same product:

  1. Start small with basic ingress resources on Kubernetes.
  2. Then introduce an API gateway that brings API traffic management and security.
  3. Then, once you have a much better understanding of the features that matter for your business, transition to an API management platform.

In a nutshell, don’t pick a product because of all the bells and whistles. No amount of cool features will solve your challenges if they never get used. Evaluate them based on what it’s like to use to meet your needs today and whether or not they provide a progressive transition to more advanced use cases in the future.

→ Don’t get ahead when transitioning to your API management platform.

Anti-Pattern 3: Good Enough as Code

As a modern Head of Platform Engineering, you strongly believe in Infrastructure as Code (IaC). Managing and provisioning your resources in declarative configuration files is a modern and great design pattern for reducing costs and risks. Naturally, you will make this a strong foundation while designing your infrastructure.

During your API journey, you will be tempted to take shortcuts, because in the short term it can be quicker to configure a component directly in the API management UI than to set up a clean IaC process. Or it might seem easier, at first, to change the production runtime configuration manually instead of deploying an updated configuration through a Git commit workflow. Of course, you can always fix it later, but deep down you know those kludges stay there forever.

Or worse, your API management product fails to provide a consistent IaC user experience. Some components must be configured in the UI; some parts use YAML, others use XML, and some even use proprietary configuration formats. These divergent approaches make a consistent process impossible.

You say, “Infrastructure as Code is great, but exceptions are OK. Almost Infrastructure as Code is good enough.”

Fast forward three years:

  • 60% of the infrastructure is fully declared in configuration files that sit in a Git repository.
  • Those configuration files are written in five formats: YAML, INI, XML, JSON, and a custom format.
  • The remaining 40% requires manual operations in various dashboards or files.
  • There is such diversity in configuration formats and processes that your team cannot get the platform under control and constantly needs rescuing by other teams that know each format or process.
  • Human error is so frequent that your release process is slow and unreliable. Any infrastructure change takes several days to reach production, and that is the best-case scenario.
  • In the worst-case scenario, a change deployed to production creates a major outage. Because your team cannot troubleshoot the issue quickly, time to recovery is measured in hours. Your boss anxiously looks at the screen over your shoulder, waiting for the miraculous fix to be deployed. Thousands of donut orders are missed in the process.
  • You don't even try to sleep that night.

The conclusion is obvious — setting up API Management partially as code defeats the purpose of reducing costs and risks. It’s only when your API Management solution is 100% as code that you can benefit from a reliable platform, a blazing fast time to market, and fast recovery.
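For contrast, "100% as code" means even the API exposure itself is a reviewed, version-controlled file. A hypothetical example, sketched here as a standard Kubernetes Ingress stored in Git (hostnames, service names, and the cert-manager annotation are all assumptions for illustration):

```yaml
# Hypothetical: the donuts API exposed declaratively; every change is a
# Git commit that can be reviewed, audited, and rolled back.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: donuts-api
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt  # assumption: cert-manager issues TLS
spec:
  tls:
    - hosts: [api.donutgpt.com]
      secretName: donuts-api-tls
  rules:
    - host: api.donutgpt.com
      http:
        paths:
          - path: /v1/donuts
            pathType: Prefix
            backend:
              service:
                name: donuts
                port:
                  number: 8080
```

With no out-of-band UI tweaks, the Git history is the single source of truth for what is running in production.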

Exceptions to the process will always bring down your platform’s global efficiency and reliability.

→ Never settle for half-baked processes.

Anti-Pattern 4: Chaotic Versioning System

When you start your API journey, planning for and anticipating every use case is difficult. Change is inevitable, but how you manage it is not. As we’ll see in this section, the effects of poor change management can snowball over the years.

Let’s go back to the very beginning: You are launching your brand new API platform and have already migrated hundreds of APIs into production. You are pretty happy with the results; you feel under control and are getting better sleep.

After one year, your state-of-the-art monitoring alerts flood your notifications, pointing to a bunch of API calls from one of your biggest customers with 404 errors. 404 errors are widespread, so you pay little attention to them and quickly forward the issue to the developer team in charge of the API.

During the following months, you see the number of 404 errors and 500 errors rising significantly, affecting dozens of different APIs. You start to feel concerned about this issue and gather your team to troubleshoot and find the root cause.

Your analysis uncovers a more significant problem: your APIs lack a consistent versioning system. You designed your platform as if your API contracts would never change, as if your APIs would last forever.

As a result, each team handled change management and released new versions of its APIs following its own process:

  • Some teams did not bother dealing with compatibility checks and kept pushing breaking changes.
  • Some teams tried to keep their APIs backward compatible at all costs. Not only did this make the codebase a nightmare to maintain, but it slowly became obvious that it discouraged teams from innovating, as they wanted to avoid breaking changes and maintaining compatibility with all versions.
  • Some teams followed a more robust process with the use of URL versioning, like https://donutgpt.com/v1/donuts and https://donutgpt.com/v2/donuts. They were able to maintain multiple versions at the same time, with different codebases for each version. The problem was that other teams were using different strategies, like query parameter versioning (https://donutgpt.com/donuts?version=v1) or even header versioning.
  • Some teams consistently followed a specific versioning strategy like URL versioning but did not provide versioned documentation.

This study makes you realize how creative the human brain is — the developers chose so many different options!

The result is that customers were:

  • Using outdated documentation
  • Calling outdated or dead APIs
  • Calling different APIs with different versioning strategies
  • Calling unreliable APIs
  • Receiving donuts with your new “experimental recipe” when they ordered your classic “Legend GPT Donut”

The key takeaways are apparent: No code lasts forever, and change is a natural part of API development. Given this truth, you must have a strong, reliable, and repeatable foundation for your release process and API lifecycle management.

Your choice of API management solution can help, too. Choose a solution that provides a flexible versioning strategy that fits your needs and can be enforced on every API of DonutGPT.

Additionally, ensure teams maintain several versions of their APIs that can be easily accessible as part of a broader change management best practice. This is the only way to maintain a consistent and reliable user experience for your customers.
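Enforcing a single canonical scheme can be mechanical. A minimal sketch of a gateway-side check that accepts only URL path versioning (the route names are hypothetical):

```javascript
// Hypothetical sketch: every published route must carry its version in the
// URL path (e.g. /v1/donuts); anything else is rejected at the gateway.
const API_VERSION_PATTERN = /^\/v(\d+)\//;

function resolveVersion(path) {
  const match = API_VERSION_PATTERN.exec(path);
  if (!match) {
    // Query-parameter or header versioning never reaches the backends.
    throw new Error(`unversioned route rejected: ${path}`);
  }
  return Number(match[1]); // the major version the caller asked for
}

console.log(resolveVersion("/v2/donuts")); // 2
```

One shared rule like this, applied uniformly at the platform layer, is what keeps `https://donutgpt.com/v1/donuts` and `https://donutgpt.com/v2/donuts` predictable for every customer.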

→ Enforce a uniform versioning strategy for all your APIs.

Anti-Pattern 5: YOLO Dependencies Management

Now that you’ve learned why managing your API versioning strategy is critical, let’s discuss dependency management for APIs — a topic that is often highly underestimated for a good reason. It’s pretty advanced.

After the miserable no-versioning-strategy episode, you were reassured to see versioning policies enforced on every piece of code at DonutGPT. You were even starting to sleep better, but if you’ve read this far, you know this can’t last.

After two more months, your state-of-the-art monitoring again alerts you to several API calls from one of your biggest customers, resulting in 404 errors! You’ve got to be kidding me! You know the rest of the story: task force, troubleshooting, TooManyDonutsErrors, root cause analysis, and (drum roll) …

All your APIs indeed followed the enforced versioning strategy: https://donutgpt.com/v1/donuts. So, what happened?

The versioning strategy was only enforced on the routes published through the API management platform. The services behind those APIs followed different versioning strategies of their own, and there was no dependency management between your API routes and backend services.

In other words, https://donutgpt.com/v1/donuts and https://donutgpt.com/v2/donuts were able to call the same version of a service, which led to a situation similar to the no-versioning-strategy episode, with a terrible customer experience. It gets even more complex if some services call other services.

You start to see my point: you need dependency policies enforced on all your APIs and services. Every API needs to be versioned and call a specific service version (or range), and this same approach should be applied to every service. To achieve this, your API management solution must provide a flexible way to express dependencies in API definitions. Furthermore, it should check for dependencies at the deployment phase through intelligent linters to avoid publishing a broken API dependency chain.
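Such a deploy-time check could be sketched as follows; the registry contents and the API definition shape are assumptions for illustration, not any particular product's format:

```javascript
// Hypothetical deploy-time linter: every published API must pin the backend
// service version it calls, and that version must actually be deployed.
const deployedServices = {
  donuts: ["1.0.0", "2.1.0"], // assumed service registry
};

function lintApiDefinition(api) {
  const errors = [];
  for (const dep of api.dependencies) {
    const versions = deployedServices[dep.service] ?? [];
    if (!versions.includes(dep.version)) {
      errors.push(`${api.route}: ${dep.service}@${dep.version} is not deployed`);
    }
  }
  return errors; // empty array means the dependency chain is intact
}

const api = {
  route: "/v2/donuts",
  dependencies: [{ service: "donuts", version: "3.0.0" }],
};
console.log(lintApiDefinition(api));
// [ '/v2/donuts: donuts@3.0.0 is not deployed' ]
```

Running a check like this in the deployment pipeline blocks a broken API-to-service dependency chain before it ever reaches production.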

These capabilities are uncommon in API management products, so you must choose wisely.

→ Enforce dependency checks at deployment.

Wrap Up

You dedicated most of your career to building DonutGPT’s infrastructure, solving countless challenges during this adventure. The outcome has been quite rewarding: DonutGPT disrupted the donut market thanks to its state-of-the-art AI technology, producing breathtaking donut recipes.

You are proud to be part of this success story; however, while accelerating, the company now faces more complex problems. The biggest problem, by far, is the industrialization of DonutGPT’s APIs consumed by customers and resellers. During this journey, you tried multiple solutions, started over, and made some great decisions… and some debatable ones. DonutGPT messed up a few donut orders while exploring the API world.

Now that you can stand back and see the whole project, you realize that you have hit what you consider today to be anti-patterns. Of course, you learned a lot during this process, and you started thinking it would be a great idea to give that knowledge back to the community through a detailed blog post, for example.

Of course, this story, the character, and the company are fictitious, even though AI-generated donut recipes might be the next big thing. However, these anti-patterns are very real and have been observed repeatedly during our multiple conversations at Traefik Labs with our customers, prospects, and community members.

While planning your API journey, you should consider these five principles to maximize your return and minimize your effort:

  • Design your API platform with a strong separation of concerns. Avoid delegating business logic to the platform.
  • Do not set the bar too high or too fast. Proceed step by step. Start with more straightforward concepts like ingresses and progressively move to more advanced API use cases once you understand them better.
  • While industrializing your processes, tolerating exceptions will defeat the purpose, and you won’t gain all the expected benefits of a fully automated platform.
  • Versioning becomes a critical challenge in the long run. Starting your API journey with a strong and consistent versioning strategy across all your APIs will make your platform more scalable, reliable, and predictable.
  • Within complex infrastructures with many moving parts, controlling and certifying runtime dependencies for all components is crucial to achieving a high level of trust and stability for your platform.

Of course, this list is not exhaustive, but it covers the most common practices. All that said, these recommendations should not stop you from exploring and trying different approaches. Innovation is built on top of others' feedback, but it still requires creativity.

Posted in Vulnerability | Tagged: Cyber Attacks, Data Security, vulnerability

Internal Developer Platforms: The Heart of Platform Engineering

Posted on August 4, 2024 by Maq Verma

Platform engineering involves creating a supportive, scalable and efficient infrastructure that allows developers to focus on delivering high-quality software quickly. Many of us have been taking this approach for years, just without a proper name tied to it.

To get platform engineering right, many have turned to the concept of platforms as products (PaaP), where you emphasize the user experience of the developers themselves and treat your internal developers as customers. This implies platforms should have a clear value proposition, a roadmap and dedicated resources, so that internal developers get all the resources we would arm external customers with when onboarding them onto a new product.

However, we can’t discuss the popular trend of treating PaaP without discussing what lies at the heart of this conversation. The PaaP approach is particularly pivotal in the realm of internal developer platforms (IDPs), which are central to the platform engineering craze because you can’t get your external platform right if your internal one is a mess. Traditional approaches often overlook the necessity of aligning the platform’s capabilities with developers’ needs, leading to poor adoption and suboptimal outcomes.

This is where internal developer platforms come into play, serving as the backbone of this engineering paradigm. These platforms are not just about providing tools and services; they are about crafting an experience that empowers developers to perform their best work. When platforms are designed with a deep understanding of what developers truly need, they can significantly enhance productivity and satisfaction.

IDPs are usually referred to as developer-focused infrastructure platforms (not to be confused with a developer control plane) and were made popular by the well-known “Team Topologies” book (they’re something we’ve prioritized for a long time here at Ambassador). “Team Topologies” focuses on structuring business and technology teams for peak efficiency, and a big focus is highlighting the need for platform teams to offer platforms as an internal product to enable and accelerate other teams.

The benefit of internal platforms is that they enable teams to spend more time delivering business value, provide guardrails for security and compliance, standardize across teams and create ease of deployment. Here's why IDPs are critical to building that solid foundation and perfecting your platform strategy as a whole:


Why Internal Developer Platforms Are Critical

Enhanced Developer Experience (DX)

Internal developer platforms focus on improving the overall developer experience, making it easier for developers to access the tools and resources they need. Your developers should not be dreading their experience; instead, they should be able to focus on the things that matter most: good development.

The more friction-free you make your internal platform, the more efficiency and creativity you unlock, as developers are able to focus on solving business problems rather than grappling with infrastructural complexities. With easier access to tools and fewer operational hurdles, developers can experiment and innovate more freely. This environment encourages the exploration of new technologies and approaches, which can lead to breakthroughs in product development.

Friction-free IDPs include well-documented processes, standardized tools and the removal of manual work wherever possible (automation is your friend). If you've built your IDP to meet these requirements, your devs will be happier and more productive.

Streamlined Operations and Resource Management

Speaking of standardization — by standardizing development environments, internal platforms reduce variability and streamline operations across the development life cycle. This not only speeds up the development process but also reduces the likelihood of errors, leading to more stable releases.

Having components and tools centralized in an internal developer platform streamlines the foundation for developer self-service, success and responsibility. A developer platform empowers both developers and platform engineers to focus on and excel in their core business areas, enabling faster development cycles and the ability to ship software with speed and safety.

A strong IDP allows organizations to optimize resource usage, ensuring that developers have access to necessary resources without overprovisioning. This can lead to cost savings and more efficient use of infrastructure.

And as a bonus, a comprehensive IDP helps you not just attract new talent but retain it as well. In a competitive tech landscape, devs are looking for environments where they can increase their skills and work on exciting projects that don’t compromise their intelligence or threaten their ability to innovate freely. Well-designed internal developer platforms can be a key differentiator in whether future devs will want to work on your team.

Avoiding Common Anti-Patterns That Undermine Your API Strategy

A recent example made this recommendation very clear. Anti-patterns that can undermine API management are largely the result of a lack of a cohesive strategy and plan, and biting off more than a team can chew. This is where we see the opportunity for a platform approach to API development.

IDPs help you craft your API platform with a clear division of responsibilities, ensuring that business logic remains separate from the platform itself.

How Do I Implement a Successful IDP?

Note that if you choose not to take a proactive approach in your team’s development of their IDP, your developers will find a way to do it anyway. 

“I’d argue that anyone who develops software these days has a platform. Accidental platforms, in many cases, are more difficult to manage. Try to take an active role in managing your platform to avoid issues later on,” said Erik Wilde, an industry influencer and recent guest on the “Living on the Edge” podcast.

Therefore, to truly make internal developer platforms a centerpiece of platform engineering and get it right the first time, organizations need to adopt a few strategic practices:

  • Understand and anticipate developer needs: Implement continuous feedback mechanisms so the platform evolves in line with developer requirements. Select every tool with your developers' needs in mind, applying a holistic lens across every piece of your platform. Recognize that in complex environments with numerous components, managing and certifying runtime dependencies is essential to maintaining your developers' trust and your platform's stability.
  • Be aware that versioning is a significant challenge over time. Implementing a consistent versioning strategy from the start for all your APIs will enhance your platform’s scalability, reliability and predictability.
  • Invest in scalability: As the organization grows, the platform should be able to scale seamlessly to accommodate more developers and increase workload without performance dips. Ensure the tools you’re building your platform on come with the proper flexibility, room for integrations and composability to expand with your future anticipated growth.
  • Ensure robust security and compliance: The platform should incorporate strong security measures and compliance controls to protect sensitive data and meet regulatory requirements. Standardization and proper governance can help promote the security of your IDP, but ensure proper code reviews, protections and safeguards against security risks are all in place before you socialize your IDP.
  • Promote internal adoption: Through internal promotion and demonstrating clear value propositions, you can encourage widespread adoption of the platform. Involve your own devs early and often in the process, and consider involving other relevant stakeholders as well (think product managers, business leadership, someone from your operations team, etc.). And remember: While it might be obvious to developers and their managers that an IDP could increase developer productivity and focus, it’s not always obvious to other stakeholders. Chances are you’re going to have to prove that your IDP can unlock developer effectiveness and efficiency by building a business case.

There Is No Platform Engineering Without IDPs

In the end, internal developer platforms (IDPs) are not merely a component of platform engineering; they are its core. As platform engineering evolves, placing IDPs at the heart of this transformation is essential for organizations aspiring to lead in the digital age. With the ongoing migration to the cloud and the customization of platforms atop this infrastructure, a deep understanding of IDPs and their pivotal role in platform engineering is becoming increasingly crucial.

Posted in Vulnerability · Tagged: Cyber Attacks, Data Security, vulnerability

What’s New for JavaScript Developers in ECMAScript 2024

Posted on August 4, 2024 by Maq Verma

The ECMAScript standard for JavaScript continues to add new language features in a deliberate way. This year there’s a mix of APIs that standardize common patterns that developers have been writing by hand or importing from third-party libraries — including some aimed specifically at library authors — plus improvements in string handling, regular expressions, multithreading and WebAssembly interoperability.

Meanwhile, the TC39 committee that assesses proposals is also making progress on some of the much larger proposals, like the long-awaited Temporal and Decorators that may be ready for ECMAScript 2025, Ecma vice president Daniel Ehrenberg told The New Stack.

“Looking at what we’ve done over the past year, ECMAScript 2024 is a little similar to ECMAScript 2023 in that it sees smaller features; but meanwhile, we’re building very strongly towards these big features.” Many of those only need “the last finishing touches.”

“You need to access the WebAssembly heap reasonably efficiently and frequently from the JavaScript side, because in real applications you will not have communication between the two.”
– Daniel Ehrenberg, Ecma vice president

In fact, since the completed feature proposals for ECMAScript 2024 were signed off in March of this year, ready for approval of the standard in July, at least one important proposal — Set Methods — has already reached stage four, ready for 2025.

Making Developers Happier With Promises

Although promises are a powerful JavaScript feature introduced in ECMAScript 2015, the pattern the promise constructor uses isn’t common elsewhere in JavaScript, and turned out not to be the way developers want to write code, Ehrenberg explained. “It takes some mental bandwidth to use these weird idioms.”

The hope was that over time on the web platform, enough APIs would natively return promises instead of callbacks that developers wouldn’t often need to use the promise constructor. However, existing APIs haven’t changed to be more ergonomic.

“It comes up at least once in every project. Almost every project was writing this same little helper so it being in the language is one of those really nice developer happiness APIs.”
– Ashley Claymore, Bloomberg software engineer

Instead, developers are left with a cumbersome workaround that many libraries, frameworks and other tools — from React to TypeScript — have implemented different versions of: it’s in jQuery as the deferred function. “People have this boilerplate pattern that they have to write over and over again, where they call the promise constructor, they get the resolve and reject callbacks, they write those to a variable, and then they inevitably do something else [with them]. It’s just an annoying piece of code to write,” said Ehrenberg.

Libraries that implemented promises before ECMAScript 2015 typically covered this, but the feature didn’t make it into the language; Chrome briefly supported and then removed a similar option. But developers still need this often enough that the Promise.withResolvers proposal to add a static method made it through the entire TC39 process in the twelve months between ECMAScript 2023 being finalized and the cutoff date for this year’s update to the language — an achievement so unusual that TC-39 co-chair Rob Palmer referred to it as a speedrun.

“Previously, when you created a promise, the ways that you resolve it and you give it its final state were APIs only accessible inside the function that you built the promise with,” Ehrenberg continued. “Promise.withResolvers gives you a way to create a promise and it gives you direct access to those resolution functions.”

Other functions in your code might depend on whether a promise is resolved or rejected, or you might want to pass the function to something else that can resolve the promise for you, reflecting the complex ways promises are used for orchestration in modern JavaScript, Ashley Claymore (a Bloomberg software engineer who has worked on multiple TC39 proposals) suggested.

“The classical way of creating a promise works well when it’s a small task that’s asynchronous; taking something that was purely callback based or something that was promise-like, and then wrapping it up so it was actually a promise,” Claymore said. “In any code where I start doing lots of requests and need to line them up with IDs from elsewhere, so I’m putting promises or resolve functions inside a map because I’m orchestrating lots of async things that aren’t promise based, you’re always having to do this. I need to pull these things out because I’m sending them to different places.”

“It comes up at least once in every project. Almost every project was writing this same little helper so it being in the language is one of those really nice developer happiness APIs.”

Other improvements to promises are much further down the line; Claymore is involved in a proposal to simplify combining multiple promises without using an array — which involves keeping track of which order all the promises are in. “That works fine for like one, two or three things: after that, it can start getting harder to follow the code,” he said. “What was the fifth thing? You’re counting lines of code to make sure you’ve got the right thing.”

Having an Await dictionary of Promises would let developers name promises: particularly helpful when they cover different areas — like gathering user information, database settings and network details that likely return at very different times. This wouldn’t be a difficult feature to develop: the delay is deciding whether it’s useful enough to be in the language because the TC39 committee wants to avoid adding too many niche features that could confuse new developers.
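A userland sketch of that idea (the `allNamed` helper name is ours for illustration; the proposal’s final API may look different):

```javascript
// Await a dictionary of promises, preserving names instead of positions.
async function allNamed(promiseDict) {
  const entries = await Promise.all(
    Object.entries(promiseDict).map(async ([key, p]) => [key, await p])
  );
  return Object.fromEntries(entries);
}

// Named results: no counting lines to find "the fifth thing".
allNamed({
  user: Promise.resolve({ name: "alice" }),
  settings: Promise.resolve({ theme: "dark" }),
}).then(({ user, settings }) => {
  console.log(user.name, settings.theme); // "alice dark"
});
```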

Keeping Compatibility and Categorizing Data

That’s not a concern for the second major feature in ECMAScript 2024, a new way to group objects into categories using array grouping: something common in other languages (including SQL) that developers frequently import the userland Lodash library for.

You can pass in different items and classify them by some property, like color. “The result is a key value dictionary that is indexed by ‘here’s all your green things, here are your orange things’ and that dictionary can be expressed either as an object or a map”, Palmer explained. Use a map if you want to group keys that aren’t only strings and symbols; to extract multiple data values at the same time (known as destructuring), you need an object.

“As a standards committee we shouldn’t be asking them to incur the cost of risking outages when we already know that something is highly risky.”
– Ehrenberg

That’s useful for everything from bucketing performance data about your web site to grouping a list of settled promises by status, a common use with Promise.allSettled, Claymore said. “You give it an array of promises, it will wait for all of them to settle, then you get back an array of objects that says, ‘did this reject or did it resolve?’ They’re in the same order as you started, but it’s quite common to have one bit of code I want to give all the promises that were successful and resolved, and another bit of code I want to give rejected [promises].” For that you can pass the result of Promise.allSettled to groupBy to group by promise status, which groups all the resolved promises and all the rejected promises separately.

Building the new grouping functionality also delivered a useful lesson about web compatibility.

The utilities in Lodash are functionality that developers could write in 5-10 lines of code, Palmer noted. “But when you look at the frequency at which they’re used, they’re so widely used by so many programs that at some point it’s worth taking the top usage ones and then putting them in the platform, so people don’t have to write their own.” A number of them have now ended up as native constructs.

“This functionality being in the language is a really nice convenience for projects that are trying not to have a large set of dependencies while still having access to these really common things,” Claymore agreed. “They’re not the hardest things to write by hand, but it’s no fun rewriting them by hand and they can subtly get them wrong.”

Unusually, the new Map.groupBy and Object.groupBy methods are static methods rather than array methods, the way Lodash functionality has previously been added to JavaScript. That’s because two previous attempts to add this functionality as array methods both clashed (in different ways) with existing code on websites already using the same two names the proposal came up with, including the popular Sugar library.

This problem could recur any time TC39 proposals try to add new prototype methods to arrays or instance methods, Palmer warned. “Whenever you try and think of any reasonable English language verb you might want to add, it seems it triggers web compatibility problems somewhere on the internet.”

Ideally, good coding standards would avoid that, but part of the reason it takes time to add new features to JavaScript is the need to test for exactly these kinds of issues and work around them when they crop up.

“We can say that the best practice for the web is that users should not be polluting the global prototypes: people should not be adding properties to array or prototype [in their code], because it can lead to web compatibility issues. But it doesn’t matter how much we say that; these sites are already up there, and we have a responsibility to not break them.”

Shipping then withdrawing implementations in browsers makes it more expensive to add new features, Ehrenberg added. “As a standards committee, we shouldn’t be asking them to incur the cost of risking outages when we already know that something is highly risky.” That means similar proposals might use static methods more in the future to avoid the issue.

Updating JavaScript for Better Unicode Handling

JavaScript already has a /u flag for regexp that needs to handle Unicode (introduced in ECMAScript 2015), but that turned out to have some oddities and missing features. The new /v flag fixes some of those (like getting different results if you use an upper or lowercase character when matching, even if you specify that you don’t care about the case) and forces developers to escape special characters. It also allows you to do more complex pattern matching and string manipulation using a new unicodeSets mode, which lets you name Unicode sets so that you can refer to the ASCII character set or the emoji character set.

The new options will simplify internationalization and make it easier to support features for diversity. The /u flag already lets you refer to emoji, but only if they are a single character — excluding emoji that combine multiple characters to get a new emoji or to specify the gender or skin tone of an emoji representing a person, and even some country flags.

It also simplifies operations like sanitizing or verifying inputs, by adding more set operations including intersections and nesting, making complex regular expressions more readable. “It adds subtraction so you could say, for example, ‘all the ASCII characters’, but then subtract the digits zero to nine, and that would match a narrower range than all of the ASCII characters,” Palmer explained. You could remove invisible characters or convert numbers expressed as words into digits.

“It’s easy to make assumptions about Unicode, and it’s such a big topic; the number of people in the world [who] understand these things well enough to not make mistakes is very small.”
– Claymore

You can also match against various properties of strings, such as what script they’re written in, so that you can find characters like π and treat them differently from p in another language.

You can’t use the /u flag and the /v flag together, and you will probably always want to use /v. Palmer described the choice as “the /v flag enables all the good parts of the /u flag with new features and improvements, but some of them are backwards incompatible with the /u flag.”

ECMAScript 2025 will add another useful improvement for regexp: being able to use the same names in different branches of a regexp. Currently, if you’re writing a regular expression to match something that can be expressed in multiple ways, like the year in a date that might be 2024 or just 24, you can’t use ‘year’ in both branches of the regular expression, even though only one branch can ever match, so you have to say ‘year1’ and ‘year2’ or ‘shortyear’ and ‘longyear’.

“Now we say that’s no longer an error, and you can have multiple parts of the regular expression given the same name, as long as they are on different branches and as long as only one of them can ever be matched,” Claymore explained.

Another new feature in ECMAScript 2024 improves Unicode handling by ensuring that code is using well-formed Unicode strings.

Strings in JavaScript are technically UTF-16 encoded: in practice, JavaScript (like a number of other languages) doesn’t enforce that those encodings are valid UTF-16 even though that’s important for the URI API and the WebAssembly Component Model, for example. “There are various APIs in the web platform that need well-formed strings and they might throw an error or silently replace the string if they get bad data,” Palmer explained.

Because it’s possible for valid JavaScript code to use strings that are invalid UTF sequences, developers need ways to check for that. The new isWellFormed method checks that a JavaScript string is correctly encoded; if not, the new .toWellFormed method fixes the string by replacing anything that isn’t correctly encoded with the 0xFFFD replacement character �.

While experienced Unicode developers could already write checks for this, “It’s very easy to get wrong,” Claymore noted. “It’s easy to make assumptions about Unicode, and it’s such a big topic; the number of people in the world that actually properly understand these things well enough to not make mistakes is very small. This encourages people to fall into the pit of success rather than try and do these things by hand and make mistakes because of not knowing all the edge cases.”

Having it in the language itself might even prove more efficient, Palmer suggested. “Potentially, one of the advantages of delegating this to the JavaScript engine is that it might be able to find faster ways to do this check. You could imagine, for example, it might just cache a single bit of information with every string to say ’I’ve already checked this, this string is not good’ so that every time you pass it somewhere that needs a good string, it doesn’t need to walk every character to check it again, but just look at that one bit.”

Adding Locks With Async Code

“On the main thread, where you’re providing interactivity to the user, it’s one of the rules of the web: thou shalt not halt the main thread!”
– Rob Palmer, TC-39 co-chair

JavaScript is technically a single-threaded language that supports multithreading and asynchronous code. That’s because as well as having web workers and service workers that are isolated from the main thread that provides the user interface, it’s the nature of the web that quite often you’re waiting for something from the network or the operating system, so the main thread can run other code.

The tools in JavaScript for managing this continue to get more powerful with a new option in ECMAScript 2024, Atomics.waitAsync.

“If you want to do multithreading in JavaScript, we have web workers, you can spin up another thread, and the model that was originally based on is message passing, which is nice and safe,” Palmer explained. “But for people [who] want to go faster, where that becomes a bottleneck, shared memory is the more raw, lower-level way of having these threads cooperate on shared data, and SharedArrayBuffer was introduced long ago to permit this memory sharing. And when you’ve got a multithreaded system with shared memory, you need locks to make sure that you got orderly operations.”

“When you wait, you lock up the thread, it can’t make any progress. And that’s fine on a worker thread. On the main thread, where you’re providing interactivity to the user, it’s one of the rules of the web: thou shalt not halt the main thread!”

Async avoids blocking the main thread because it can move on to any other tasks that are ready to go, like loading data that has come in from the network, and the Atomics.wait API offers event-based waiting when you’re not on the main thread. But sometimes you do want the main thread to wait for something.

“Even if you’re not on the main thread, you shouldn’t be blocking most of the time,” Ehrenberg warned, while noting that “it was important for game engines to be allowed to block when they’re not on the main thread, to be able to recreate what they could do in C++ code bases.”

Developers who need this have created workarounds, again using message passing, but these had overhead and slowed things down. Atomics.waitAsync can be used on the main thread and provides a first-class way of waiting on a lock. “The key thing is that it doesn’t stall the main thread,” Palmer said.

“If you call it and the lock is not ready, it will instead give you a backup promise so [that] you can use regular async/await and treat it just like any other promise. This solves how to have high-performance access operations on locks.”
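A minimal sketch of the non-blocking wait (kept in a single agent for brevity; in real code the store and notify would come from a worker thread):

```javascript
// Shared memory visible to this agent (and, in practice, to workers too).
const sab = new SharedArrayBuffer(4);
const shared = new Int32Array(sab);

// Wait while shared[0] is still 0 -- returns immediately with a promise
// instead of blocking the thread the way Atomics.wait would.
const waiter = Atomics.waitAsync(shared, 0, 0);
console.log(waiter.async); // true: waiter.value is a promise

waiter.value.then((outcome) => console.log(outcome)); // "ok" once notified

// Elsewhere (normally another thread): update the value and wake waiters.
Atomics.store(shared, 0, 1);
Atomics.notify(shared, 0);
```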

Another proposal still in development takes a slightly different approach to waiting on the main thread and would be useful for making multithreaded applications written in Emscripten more efficient. Atomics.pause promises ‘microwaits’ that can be called from the main thread or worker threads. “It does block, but it’s limited in how long it can block for,” Claymore told us.

Most JavaScript developers won’t use either of these options directly, Palmer pointed out: “It’s very hard to write threaded code.” But mutex libraries would likely rely on it, as might tools for compiling to WebAssembly.

“We can all benefit from this, even if we’re not using it directly.”

Easier WebAssembly Integration

Another proposal for adding a JavaScript API for features previously handled in DOM APIs or available to WebAssembly bytecode, but not JavaScript, is resizable array buffers.

WebAssembly and JavaScript programs need to share memory. “You need to access the WebAssembly heap reasonably efficiently and frequently from the JavaScript side, because in real applications you will not have communication between the two sides; they’re largely sharing their information via the heap,” Ehrenberg explained.

“If you’ve got a WebAssembly toolchain like Emscripten, it means it can do this without creating wrapper objects.”
– Palmer

WebAssembly memory can grow if required: but if it does and you want to access it from JavaScript, that means detaching the ArrayBuffer you’re using for handling binary data in memory and building a new TypedArray over the underlying ArrayBuffer next time you need to access the heap. That’s extra work that fragments the memory space on 32-bit systems.

Now you can create a new type of array buffer that’s resizable, so it can grow or shrink without needing to be detached.

“If you’ve got a WebAssembly toolchain like Emscripten, it means it can do this without creating wrapper objects, which just add inefficiencies,” Palmer added. Again, while few JavaScript developers will use this directly, libraries will likely use resizable arrays for efficiency and no longer need to work around a missing part of the language to do it — making them smaller and faster to load by reducing the amount of code that needs to be downloaded.

Developers have to explicitly choose this, because array buffers and typed arrays are the most common way hackers try to attack browsers, and making the existing ArrayBuffer resizable would mean changing a lot of code that’s been extensively tested and hardened, possibly creating bugs and vulnerabilities.

“That detach operation was initially created to enable transfer to and from workers,” Ehrenberg explained. “When you send an ArrayBuffer via a POST message to transfer it to another worker, the ownership transfers and all the typed arrays that are closing over that ArrayBuffer become detached.” A companion proposal lets developers transfer the ownership of array buffers so they can’t be tampered with.

“If you pass it into a function, the receiver can use this transfer function to acquire ownership, so anyone who used to have a handle to it no longer has it,” Palmer explained. “That’s good for integrity.”

As well as being part of ECMAScript 2024, resizable array buffers have been integrated into the WebAssembly JS API; Ehrenberg called this out as an example of collaboration between different communities in the web ecosystem that’s working well.

“These are efforts that span multiple different standards committees and require explicit cooperation among them. That can be complicated because you have to bring a lot of people in the loop, but ultimately, I think it leads to [a] good design process. You get a lot of vetting.”

Posted in Vulnerability · Tagged: Cyber Attacks, Data Security, vulnerability

Jumpstart AI Workflows With Kubernetes AI Toolchain Operator

Posted on August 4, 2024 by Maq Verma

Generative AI is booming as industries and innovators look for ways to transform digital experiences. From AI chatbots to language translation tools, people are interacting with a common set of AI/ML models, known as language models, in everyday scenarios. As a result, new language models have developed rapidly in size, performance and use cases in recent years.

As an application developer, you may want to integrate a high-performance model by simply making a REST call and pointing your app to an inferencing endpoint for options like Falcon, Mistral, Phi and other models. Just like that, you’ve unlocked the doors to the AI kingdom and all kinds of intelligent applications.

Open source language models are a cost-effective way to experiment with AI, and Kubernetes has emerged as the open source platform best suited for scaling and automating AI/ML applications without compromising the security of user and company data.

“Let’s make Kubernetes the engine of AI transformation,” said Jorge Palma, Microsoft principal product manager lead. He gave the keynote at KubeCon Europe 2024, where AI use cases were discussed everywhere. Palma talked about the number of developers he’s met who are deploying models locally in their own infrastructure, putting them in containers and using Kubernetes clusters to host them.

“Container images are a great format for models. They’re easy to distribute,” Palma told KubeCon. “Then you can deploy them to Kubernetes and leverage all the nice primitives and abstractions that it gives you — for example, managing that heterogeneous infrastructure, and at scale.”

Containers also help you avoid the annoying “but it runs fine on my machine” issue. They’re portable, so your models run consistently across environments. They simplify version control to better maintain iterations of your model as you fine-tune for performance improvements. Containers provide resource isolation, so you can run different AI projects without mixing up components. And, of course, running containers in Kubernetes clusters makes it easy to scale out — a crucial factor when working with large models.

“If you aren’t going to use Kubernetes, what are you going to do?” asked Ishaan Sehgal, a Microsoft software engineer. He is a contributor to the Kubernetes AI Toolchain Operator (KAITO) project and has helped develop its major components to simplify AI deployment on a given cluster. KAITO is a Kubernetes operator and open source project developed at Microsoft that runs in your cluster and automates the deployment of large AI models.

As Sehgal pointed out, Kubernetes gives you the scale and resiliency you need when running AI workloads. Otherwise, if a virtual machine (VM) fails or your inferencing endpoint goes down, you must attach another node and set everything up again. “The resiliency aspect, the data management — Kubernetes is great for running AI workloads, for those reasons,” he said.

Kubernetes Makes It Easier, KAITO Takes It Further

Kubernetes makes it easier to scale out AI models, but it’s not exactly easy. In my article “Bring your AI/ML workloads to Kubernetes and leverage KAITO,” I highlight some of the hurdles that developers face with this process. For example, just getting started is complicated. Without prior experience, you might need several weeks to correctly set up your environment. Downloading and storing the large model weights, upwards of 200 GB in size, is just the beginning. There are storage and loading time requirements for model files. Then you need to efficiently containerize your models and host them — choosing the right GPU size for your model while keeping costs in mind. And then there’s troubleshooting pesky quota limits on compute hardware.

Using KAITO, a workflow that previously could span weeks now takes only minutes. This tool streamlines the tedious details of deploying, scaling, and managing AI workloads on Kubernetes, so you can focus on other aspects of the ML life cycle. You can choose from a range of popular open source models or onboard your custom option, and KAITO tunes the deployment parameters and automatically provisions GPU nodes for you. Today, KAITO supports five model families and over 10 containerized models, ranging from small to large language models.

For an ML engineer like Sehgal, KAITO overcomes the hassle of managing different tools, add-ons and versions. You get a simple, declarative interface “that encapsulates all the requirements you need for running your inferencing model. Everything gets set up,” he explained.

How KAITO Works

Using KAITO is a two-step process. First, install KAITO on your cluster, and then select a preset that encapsulates all the requirements needed for inference with your model. Within the associated workspace custom resource definition (CRD), a minimum GPU size is recommended so you don’t have to search for the ideal hardware. You can always customize the CRD to your needs. After deploying the workspace, KAITO uses the node provisioner controller to automate the rest.
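A workspace for one of the preset models looks roughly like this (sketched from the project’s published examples; the API version, instance type and field names may differ between KAITO releases):

```yaml
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  instanceType: "Standard_NC12s_v3"   # recommended minimum GPU size for this preset
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: "falcon-7b"
```

Applying a workspace like this is what triggers KAITO’s node provisioner controller to do the rest.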

“KAITO is basically going to provision GPU nodes and add them to your cluster on your behalf,” explained Microsoft senior cloud evangelist Paul Yu. “As a user, I just have to deploy my workspace into the AKS cluster, and that creates the additional CR.”

As shown in the following KAITO architecture, the workspace invokes a provisioner to create and configure the right-sized infrastructure for you, and it even distributes your workload across smaller GPU nodes to reduce costs. The project uses open source Karpenter APIs for the VM configuration based on your requested size, installing the right drivers and device plug-ins for Kubernetes.

Graphic of Kubernetes AI Toolchain Operator


For applications with compliance requirements, KAITO provides granular control over data security and privacy. You can ensure that models are ring-fenced within your organization’s network and that your data never leaves the Kubernetes cluster.

Check out this tutorial on how to bring your own AI models to intelligent apps on Azure Kubernetes Service, where Sehgal and Yu integrate KAITO in a common e-commerce application in a matter of minutes.

Working With Managed Kubernetes

Currently, you can use KAITO to provision GPUs on Azure, but the project is evolving quickly. The roadmap includes support for other managed Kubernetes providers. When you use a managed Kubernetes service, you can interact with other services from that cloud platform more easily to add capabilities to your workflow or applications.

Earlier this year, at Microsoft Build 2024, BMW talked about its use of generative AI and Azure OpenAI Service in the company’s connected car app, My BMW, which runs on AKS.

Brendan Burns, co-founder of the Kubernetes open source project, introduced the demo. “What we’re seeing is, as people are using Azure OpenAI Service, they’re building the rest of the application on top of AKS,” he told the audience. “But, of course, just using OpenAI Service isn’t the only thing you might want to use. There are a lot of reasons why you might want to use open source large language models, including situations like data and security compliance, and fine-tuning what you want to do. Maybe there’s just a model out there that’s better suited to your task. But doing this inside of AKS can be tricky so we integrated the Kubernetes AI Toolchain Operator as an open source project.”

Make Your Language Models Smarter with KAITO

Open-source language models are trained on extensive amounts of text from a variety of sources, so the output for domain-specific prompts may not always meet your needs. Take the pet store application, for example. If a customer asks the integrated pre-trained model for dog food recommendations, it might give different pricing options across a few popular dog breeds. This is informative but not necessarily useful as the customer shops. After fine-tuning the model on historical data from the pet store, the recommendations can instead be tailored to well-reviewed, affordable options available in the store.

As a step in your ML lifecycle, fine-tuning helps customize open source models to your own data and use cases. The latest release of KAITO, v0.3.0, supports fine-tuning, inference with adapters and a broader range of models. You can simply define your tuning method and data source in a KAITO workspace CR and see your intelligent app become more context-aware while maintaining data security and compliance requirements in your cluster.

To stay up to date on the project roadmap and test these new features, check out KAITO on GitHub.

Posted in Vulnerability · Tagged: Cyber Attacks, Data Security, vulnerability

Open Source or Closed? The AI Dilemma

Posted on August 4, 2024 by Maq Verma

Artificial intelligence is in the middle of a perfect storm in the software industry, and now Mark Zuckerberg is calling for open sourced AI.

Three powerful perspectives are colliding on how to control AI:

  1. All AI should be open source for sharing and transparency.
  2. Keep AI closed-source and allow big tech companies to control it.
  3. Establish regulations for the use of AI.

There are a few facts that make this debate tricky. First, if you have a model’s source code, you know absolutely nothing about how the model will behave; openness in AI requires far more than providing source code. Second, AI comes in many different flavors and can be used to solve a broad range of problems, from traditional AI for fraud detection and targeted advertising to generative AI chatbots that, on the surface, produce human-like results, pushing us closer and closer to the ultimate (and scary) goal of Artificial General Intelligence (AGI). Finally, the ideas listed above for controlling AI all have a proven track record for improving software in general.

Understanding the Different Perspectives

Let’s discuss the different perspectives listed above in more detail.

Perspective #1 — All AI should be open source for sharing and transparency: This comes from a push for transparency with AI. Open source is a proven way to share and improve software. It provides complete transparency when used for conventional software. Open source software has propelled the software industry forward by leaps and bounds.

Perspective #2 — Keep AI closed-source and allow big tech companies to control it: Closed-source, or proprietary software, is the idea that an invention can be kept a secret, away from the competition, to maximize financial gain. To open source idealists, this sounds downright evil; however, it is more of a philosophical choice than one that exists on the spectrum of good and evil. Most software is proprietary, and that is not inherently bad — it is the foundation of a competitive and healthy ecosystem. It is a fundamental right of any innovator who creates something new to choose the closed-source path. The question becomes, if you operate without transparency, what guarantees can there be around responsible AI?

Perspective #3 — Establish regulations for using AI: This comes from lawmakers and elected officials pushing for regulation. The basic idea is that if a public function or technology is so powerful that bad actors or irresponsible management could hurt the general public, a government agency should be appointed to develop and enforce controls. A school of thought suggests that the current leaders in AI also want regulation, but for less pure reasons: they want to freeze the playing field with themselves in the lead. Here, we will focus primarily on the public-good rationale.

The True Nature of Open Source

Before generative AI burst onto the scene, most software running in data centers was conventional. If you have the source code for conventional software, you can determine precisely what it does: an engineer fluent in the appropriate programming language can review the code, work out its logic, and even modify it to alter its behavior. In short, the true nature of open source software is to provide everything you need to understand the software’s behavior and to change it.

For a model to be fully open, you need the training data, the source code of the model, the hyperparameters used during training, and, of course, the trained model itself, which is composed of the billions (and soon trillions) of parameters that store the model’s knowledge, also known as parametric memory. Some organizations provide only the model, keep everything else to themselves, and claim it is “open source.” This practice is known as “open-washing” and is frowned upon by both the open and closed-source communities as disingenuous. I would like to see a new term used for AI models that are only partially shared. Maybe “partially open model” or “model from an open-washing company.”

There is one final rub when it comes to fully shared models. Suppose an organization wants to do the right thing and shares everything about a model: the training data, the source code, the hyperparameters, and the trained model. You still cannot determine precisely how it will behave unless you test it extensively, because the parametric memory that determines behavior is not human-readable. Again, the industry needs a term for fully open models that is different from “open source,” which should be reserved for non-AI software, because the source code of a model does not help determine the model’s behavior. Perhaps “open model.”

Common Arguments

Let’s look at some common arguments that endorse using only one of the previously described perspectives. They come from passionate defenders of each perspective, but that passion can cloud judgment.

Argument: Closed AI supporters claim that big tech companies have the means to guard against potential dangers and abuse. Therefore, AI should be kept private and out of the open source community.

Rebuttal: Big tech companies have the means to guard against potential abuse, but that does not mean they will do it judiciously or at all. Furthermore, there are other objectives besides this. Their primary purpose is making money for their shareholders, which will always take precedence.

Argument: Those who think that AI could threaten humanity like to ask, “Would you open source the Manhattan Project?”

Rebuttal: This is an argument for governance. However, it is an unfair and incorrect analogy. The purpose of the Manhattan Project was to build a bomb during wartime by using radioactive materials to produce nuclear fission. Nuclear fission is not a general-purpose technology that can be applied to different tasks: you can make a bomb and generate power, and that’s it. The ingredients and the results are dangerous to the general public, so all aspects should be regulated. AI is much different. As described above, it comes in varying flavors with varying risks.

Argument: Proponents of open sourcing AI say that open source facilitates the sharing of science, provides transparency, and is a means to prevent a few from monopolizing a powerful technology.

Rebuttal: This is primarily, but not entirely, true. Open source does provide sharing. For an AI model, however, it provides only partial transparency. Finally, whether “open models” will prevent a few from monopolizing this power is debatable: running a model like ChatGPT at scale requires compute that only a few companies can acquire.

Needs of the Many Outweigh the Needs of the Few

In “Star Trek II: The Wrath of Khan,” Spock dies from radiation poisoning. Spock realizes that the ship’s main engines must be repaired to facilitate an escape, but the engine room is flooded with lethal radiation. Despite the danger, he enters the radiation-filled chamber to make the necessary repairs. He successfully restores the warp drive, allowing the Enterprise to reach a safe distance. Unfortunately, Vulcans are not immune to radiation. His dying words to Captain Kirk explain the logic behind his actions: “The needs of the many outweigh the needs of the few or the one.”

This is perfectly sound logic and will have to be used to control AI. Specific models pose a risk to the general public. For these models, the general public’s needs outweigh innovators’ rights.

Should All AI Be Open Source?

Let’s review the axioms established thus far:

  • Open Source should remain a choice.
  • Open models are not as transparent as non-AI software that is open sourced.
  • Closed source is a right of the innovator.
  • There is no guarantee that big tech will correctly control their AI.
  • The needs of the general public must take precedence over all others.

The five bullets above represent everything I tried to make clear about open source, closed source, and regulations. If you believe them to be accurate, the answer to the question “Should all AI be open source?” is no, because open source alone will not control AI, and neither will closed source. Furthermore, in a fair world, open source and open models should remain a choice, and closed source should remain a right.

We can go one step further and talk about the actions the industry can take as a whole to move toward effective control of AI:

  • Determine the types of models that pose a risk to the general public. Models that control information (chatbots) or dangerous resources (automated cars) are high risk and should be regulated.
  • Organizations should be encouraged to share their models as fully open models. The open source community will need to step up and either prevent or label models that are only partially shared. The open source community should also put together tests that can be used to rate models.
  • Closed models should still be allowed if they do not pose a risk to the general public. Big Tech should develop its controls and tests that it funds and shares. This may be a chance for Big Tech to work closely with the open source community to solve a common problem.

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, vulnerability | Leave a comment

SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

Cybersecurity researchers have uncovered security shortcomings in the SAP AI Core cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows that could be exploited to get hold of access tokens and customer data.

The five vulnerabilities have been collectively dubbed SAPwned by cloud security firm Wiz.

“The vulnerabilities we found could have allowed attackers to access customers’ data and contaminate internal artifacts – spreading to related services and other customers’ environments,” security researcher Hillai Ben-Sasson said in a report shared with The Hacker News.

Following responsible disclosure on January 25, 2024, the weaknesses were addressed by SAP as of May 15, 2024.

In a nutshell, the flaws make it possible to obtain unauthorized access to customers’ private artifacts and credentials to cloud environments like Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud.

They could also be used to modify Docker images on SAP’s internal container registry, SAP’s Docker images on the Google Container Registry, and artifacts hosted on SAP’s internal Artifactory server, resulting in a supply chain attack on SAP AI Core services.

Furthermore, the access could be weaponized to gain cluster administrator privileges on SAP AI Core’s Kubernetes cluster by taking advantage of the fact that the Helm package manager server was exposed to both read and write operations.

“Using this access level, an attacker could directly access other customer’s Pods and steal sensitive data, such as models, datasets, and code,” Ben-Sasson explained. “This access also allows attackers to interfere with customer’s Pods, taint AI data and manipulate models’ inference.”

Wiz said the issues arise due to the platform making it feasible to run malicious AI models and training procedures without adequate isolation and sandboxing mechanisms.

“The recent security flaws in AI service providers like Hugging Face, Replicate, and SAP AI Core highlight significant vulnerabilities in their tenant isolation and segmentation implementations,” Ben-Sasson told The Hacker News. “These platforms allow users to run untrusted AI models and training procedures in shared environments, increasing the risk of malicious users being able to access other users’ data.”

“Unlike veteran cloud providers who have vast experience with tenant-isolation practices and use robust isolation techniques like virtual machines, these newer services often lack this knowledge and rely on containerization, which offers weaker security. This underscores the need to raise awareness of the importance of tenant isolation and to push the AI service industry to harden their environments.”

As a result, a threat actor could create a regular AI application on SAP AI Core, bypass network restrictions, and probe the Kubernetes Pod’s internal network to obtain AWS tokens and access customer code and training datasets by exploiting misconfigurations in AWS Elastic File System (EFS) shares.

“People should be aware that AI models are essentially code. When running AI models on your own infrastructure, you could be exposed to potential supply chain attacks,” Ben-Sasson said.

“Only run trusted models from trusted sources, and properly separate between external models and sensitive infrastructure. When using AI services providers, it’s important to verify their tenant-isolation architecture and ensure they apply best practices.”

The findings come as Netskope revealed that the growing enterprise use of generative AI has prompted organizations to use blocking controls, data loss prevention (DLP) tools, real-time coaching, and other mechanisms to mitigate risk.

“Regulated data (data that organizations have a legal duty to protect) makes up more than a third of the sensitive data being shared with generative AI (genAI) applications — presenting a potential risk to businesses of costly data breaches,” the company said.

They also follow the emergence of a new cybercriminal threat group called NullBulge that has trained its sights on AI- and gaming-focused entities since April 2024 with an aim to steal sensitive data and sell compromised OpenAI API keys in underground forums while claiming to be a hacktivist crew “protecting artists around the world” against AI.

“NullBulge targets the software supply chain by weaponizing code in publicly available repositories on GitHub and Hugging Face, leading victims to import malicious libraries, or through mod packs used by gaming and modeling software,” SentinelOne security researcher Jim Walter said.

“The group uses tools like AsyncRAT and XWorm before delivering LockBit payloads built using the leaked LockBit Black builder. Groups like NullBulge represent the ongoing threat of low-barrier-of-entry ransomware, combined with the evergreen effect of info-stealer infections.”

Posted in Cyber Attacks, Vulnerability | Tagged Cyber Attacks, Data Security, vulnerability | Leave a comment

SolarWinds Patches 8 Critical Flaws in Access Rights Manager Software

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

SolarWinds has addressed a set of critical security flaws impacting its Access Rights Manager (ARM) software that could be exploited to access sensitive information or execute arbitrary code.

Of the 13 vulnerabilities, eight are rated Critical in severity and carry a CVSS score of 9.6 out of 10.0. The remaining five weaknesses have been rated High in severity, with four of them having a CVSS score of 7.6 and one scoring 8.3.

The most severe of the flaws are listed below –

  • CVE-2024-23472 – SolarWinds ARM Directory Traversal Arbitrary File Deletion and Information Disclosure Vulnerability
  • CVE-2024-28074 – SolarWinds ARM Internal Deserialization Remote Code Execution Vulnerability
  • CVE-2024-23469 – Solarwinds ARM Exposed Dangerous Method Remote Code Execution Vulnerability
  • CVE-2024-23475 – Solarwinds ARM Traversal and Information Disclosure Vulnerability
  • CVE-2024-23467 – Solarwinds ARM Traversal Remote Code Execution Vulnerability
  • CVE-2024-23466 – Solarwinds ARM Directory Traversal Remote Code Execution Vulnerability
  • CVE-2024-23470 – Solarwinds ARM UserScriptHumster Exposed Dangerous Method Remote Command Execution Vulnerability
  • CVE-2024-23471 – Solarwinds ARM CreateFile Directory Traversal Remote Code Execution Vulnerability

Successful exploitation of the aforementioned vulnerabilities could allow an attacker to read and delete files and execute code with elevated privileges.
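Most of the flaws above are directory traversal bugs, a class that arises when attacker-supplied path segments are joined onto a base directory without validation. The following is a minimal, generic Python sketch of the bug class and a common mitigation; it is illustrative only, not SolarWinds’ actual code, and `BASE_DIR` and the function names are invented for the example.

```python
import posixpath  # used instead of os.path so the example behaves the same on any OS

BASE_DIR = "/var/app/uploads"

def resolve_unsafe(user_path: str) -> str:
    # Vulnerable pattern: "../" segments survive the join and escape BASE_DIR.
    return posixpath.normpath(posixpath.join(BASE_DIR, user_path))

def resolve_safe(user_path: str) -> str:
    # Mitigation: normalize first, then verify the result still lies inside BASE_DIR.
    candidate = posixpath.normpath(posixpath.join(BASE_DIR, user_path))
    if posixpath.commonpath([BASE_DIR, candidate]) != BASE_DIR:
        raise ValueError(f"path traversal attempt: {user_path!r}")
    return candidate

print(resolve_unsafe("../../etc/passwd"))  # /var/etc/passwd -- escaped the base dir
print(resolve_safe("reports/q3.txt"))      # /var/app/uploads/reports/q3.txt
```

Whether the traversal ends in file disclosure, deletion, or code execution (as in the CVE titles above) depends on what the application then does with the resolved path; the root cause is the same unchecked join.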

The shortcomings have been addressed in version 2024.3 released on July 17, 2024, following responsible disclosure as part of the Trend Micro Zero Day Initiative (ZDI).

The development comes after the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added a high-severity path traversal flaw in SolarWinds Serv-U (CVE-2024-28995, CVSS score: 8.6) to its Known Exploited Vulnerabilities (KEV) catalog following reports of active exploitation in the wild.

The network security company was the victim of a major supply chain attack in 2020 after the update mechanism associated with its Orion network management platform was compromised by Russian APT29 hackers to distribute malicious code to downstream customers as part of a high-profile cyber espionage campaign.

The breach prompted the U.S. Securities and Exchange Commission (SEC) to file a lawsuit against SolarWinds and its chief information security officer (CISO) last October alleging the company failed to disclose adequate material information to investors regarding cybersecurity risks.

However, much of the claims pertaining to the lawsuit were thrown out by the U.S. District Court for the Southern District of New York (SDNY) on July 18, stating “these do not plausibly plead actionable deficiencies in the company’s reporting of the cybersecurity hack” and that they “impermissibly rely on hindsight and speculation.”

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, malware, vulnerability | Leave a comment

SocGholish Malware Exploits BOINC Project for Covert Cyberattacks

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

The JavaScript downloader malware known as SocGholish (aka FakeUpdates) is being used to deliver a remote access trojan called AsyncRAT as well as a legitimate open-source project called BOINC.

BOINC, short for Berkeley Open Infrastructure for Network Computing, is an open-source “volunteer computing” platform maintained by the University of California with an aim to carry out “large-scale distributed high-throughput computing” using participating home computers on which the client app is installed.

“It’s similar to a cryptocurrency miner in that way (using computer resources to do work), and it’s actually designed to reward users with a specific type of cryptocurrency called Gridcoin, designed for this purpose,” Huntress researchers Matt Anderson, Alden Schmidt, and Greg Linares said in a report published last week.

These malicious installations are designed to connect to an actor-controlled domain (“rosettahome[.]cn” or “rosettahome[.]top”), essentially acting as a command-and-control (C2) server to collect host data, transmit payloads, and push further commands. As of July 15, 10,032 clients are connected to the two domains.

The cybersecurity firm said while it hasn’t observed any follow-on activity or tasks being executed by the infected hosts, it hypothesized that the “host connections could be sold off as initial access vectors to be used by other actors and potentially used to execute ransomware.”

SocGholish attack sequences typically begin when users land on compromised websites, where they are prompted to download a fake browser update that, upon execution, triggers the retrieval of additional payloads to the infiltrated machines.

The JavaScript downloader, in this case, activates two disjointed chains, one that leads to the deployment of a fileless variant of AsyncRAT and the other resulting in the BOINC installation.

SocGholish Malware

The BOINC app, which is renamed as “SecurityHealthService.exe” or “trustedinstaller.exe” to evade detection, sets up persistence using a scheduled task by means of a PowerShell script.

The misuse of BOINC for malicious purposes hasn’t gone unnoticed by the project maintainers, who are currently investigating the problem and finding a way to “defeat this malware.” Evidence of the abuse dates back to at least June 26, 2024.

“The motivation and intent of the threat actor by loading this software onto infected hosts isn’t clear at this point,” the researchers said.

“Infected clients actively connecting to malicious BOINC servers present a fairly high risk, as there’s potential for a motivated threat actor to misuse this connection and execute any number of malicious commands or software on the host to further escalate privileges or move laterally through a network and compromise an entire domain.”

The development comes as Check Point said it’s been tracking the use of compiled V8 JavaScript by malware authors to sidestep static detections and conceal remote access trojans, stealers, loaders, cryptocurrency miners, wipers, and ransomware.

“In the ongoing battle between security experts and threat actors, malware developers keep coming up with new tricks to hide their attacks,” security researcher Moshe Marelus said. “It’s not surprising that they’ve started using V8, as this technology is commonly used to create software as it is very widespread and extremely hard to analyze.”

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, malware, vulnerability | Leave a comment

Chinese Hackers Target Taiwan and U.S. NGO with MgBot and MACMA Malware

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

Organizations in Taiwan and a U.S. non-governmental organization (NGO) based in China have been targeted by a Beijing-affiliated state-sponsored hacking group called Daggerfly using an upgraded set of malware tools.

The campaign is a sign that the group “also engages in internal espionage,” Symantec’s Threat Hunter Team, part of Broadcom, said in a new report published today. “In the attack on this organization, the attackers exploited a vulnerability in an Apache HTTP server to deliver their MgBot malware.”

Daggerfly, also known by the names Bronze Highland and Evasive Panda, was previously observed using the MgBot modular malware framework in connection with an intelligence-gathering mission aimed at telecom service providers in Africa. It's known to have been operational since 2012.

“Daggerfly appears to be capable of responding to exposure by quickly updating its toolset to continue its espionage activities with minimal disruption,” the company noted.

The latest set of attacks are characterized by the use of a new malware family based on MgBot as well as an improved version of a known Apple macOS malware called MACMA, which was first exposed by Google’s Threat Analysis Group (TAG) in November 2021 as distributed via watering hole attacks targeting internet users in Hong Kong by abusing security flaws in the Safari browser.

The development marks the first time the malware strain, which is capable of harvesting sensitive information and executing arbitrary commands, has been explicitly linked to a particular hacking group.

“The actors behind macOS.MACMA at least were reusing code from ELF/Android developers and possibly could have also been targeting Android phones with malware as well,” SentinelOne noted in a subsequent analysis at the time.

MACMA’s connections to Daggerfly also stem from source code overlaps between the malware and MgBot, and the fact that it connects to a command-and-control (C2) server (103.243.212[.]98) that has also been used by a MgBot dropper.

Another new malware in its arsenal is Nightdoor (aka NetMM and Suzafk), an implant that uses Google Drive API for C2 and has been utilized in watering hole attacks aimed at Tibetan users since at least September 2023. Details of the activity were first documented by ESET earlier this March.

“The group can create versions of its tools targeting most major operating system platforms,” Symantec said, adding it has “seen evidence of the ability to trojanize Android APKs, SMS interception tools, DNS request interception tools, and even malware families targeting Solaris OS.”

The development comes as China’s National Computer Virus Emergency Response Center (CVERC) claimed Volt Typhoon – which has been attributed by the Five Eyes nations as a China-nexus espionage group – to be an invention of the U.S. intelligence agencies, describing it as a misinformation campaign.

“Although its main targets are U.S. congress and American people, it also attempt[s] to defame China, sow discords [sic] between China and other countries, contain China’s development, and rob Chinese companies,” the CVERC asserted in a recent report.

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, malware, vulnerability | Leave a comment

CISA Adds Twilio Authy and IE Flaws to Exploited Vulnerabilities List

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added two security flaws to its Known Exploited Vulnerabilities (KEV) catalog, based on evidence of active exploitation.

The vulnerabilities are listed below –

  • CVE-2012-4792 (CVSS score: 9.3) – Microsoft Internet Explorer Use-After-Free Vulnerability
  • CVE-2024-39891 (CVSS score: 5.3) – Twilio Authy Information Disclosure Vulnerability

CVE-2012-4792 is a decade-old use-after-free vulnerability in Internet Explorer that could allow a remote attacker to execute arbitrary code via a specially crafted site.

It’s currently not clear if the flaw has been subjected to renewed exploitation attempts, although it was abused as part of watering hole attacks targeting the Council on Foreign Relations (CFR) and Capstone Turbine Corporation websites back in December 2012.

On the other hand, CVE-2024-39891 refers to an information disclosure bug in an unauthenticated endpoint that could be exploited to “accept a request containing a phone number and respond with information about whether the phone number was registered with Authy.”
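Enumeration bugs of this kind come from endpoints whose responses differ depending on whether an identifier exists. Below is a generic sketch of the anti-pattern and its usual fix, with invented function names and data; it is not Twilio’s actual API or code.

```python
REGISTERED = {"+15551234567"}  # stand-in for the service's account database

def check_phone_leaky(number: str) -> dict:
    # Anti-pattern: the response reveals whether the number has an account,
    # letting an unauthenticated caller enumerate registered users.
    if number in REGISTERED:
        return {"status": 200, "registered": True}
    return {"status": 404, "error": "not found"}

def check_phone_uniform(number: str) -> dict:
    # Mitigation: respond identically either way (and rate-limit the endpoint);
    # any account-specific action happens out of band.
    return {"status": 200,
            "message": "If this number has an account, a verification code was sent."}

print(check_phone_leaky("+15550000000"))   # {'status': 404, 'error': 'not found'}
print(check_phone_uniform("+15550000000")) # same body for registered and unknown numbers
```

The uniform response denies an attacker the yes/no oracle; combined with authentication and rate limiting, it makes bulk enumeration of the kind reported against Authy impractical.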

Earlier this month, Twilio said it resolved the issue in versions 25.1.0 (Android) and 26.1.0 (iOS) after unidentified threat actors took advantage of the shortcoming to identify data associated with Authy accounts.

“These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise,” CISA said in an advisory.

Federal Civilian Executive Branch (FCEB) agencies are required to remediate the identified vulnerabilities by August 13, 2024, to protect their networks against active threats.

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, vulnerability | Leave a comment

Microsoft Defender Flaw Exploited to Deliver ACR, Lumma, and Meduza Stealers

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

A now-patched security flaw in the Microsoft Defender SmartScreen has been exploited as part of a new campaign designed to deliver information stealers such as ACR Stealer, Lumma, and Meduza.

Fortinet FortiGuard Labs said it detected the stealer campaign targeting Spain, Thailand, and the U.S. using booby-trapped files that exploit CVE-2024-21412 (CVSS score: 8.1).

The high-severity vulnerability allows an attacker to sidestep SmartScreen protection and drop malicious payloads. Microsoft addressed this issue as part of its monthly security updates released in February 2024.

“Initially, attackers lure victims into clicking a crafted link to a URL file designed to download an LNK file,” security researcher Cara Lin said. “The LNK file then downloads an executable file containing an [HTML Application] script.”

The HTA file serves as a conduit to decode and decrypt PowerShell code responsible for fetching a decoy PDF file and a shellcode injector that, in turn, either leads to the deployment of Meduza Stealer or Hijack Loader, which subsequently launches ACR Stealer or Lumma.

ACR Stealer, assessed to be an evolved version of the GrMsk Stealer, was advertised in late March 2024 by a threat actor named SheldIO on the Russian-language underground forum RAMP.

“This ACR stealer hides its [command-and-control] with a dead drop resolver (DDR) technique on the Steam community website,” Lin said, calling out its ability to siphon information from web browsers, crypto wallets, messaging apps, FTP clients, email clients, VPN services, and password managers.
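A dead drop resolver works by hiding the real C2 address inside an innocuous public page, so the malware’s first network contact looks like traffic to a legitimate site. The following is a simplified, self-contained Python sketch of the idea; the marker format, the base64 encoding, and the embedded page are invented for illustration and are not ACR Stealer’s actual scheme.

```python
import base64
import re

# Stand-in for a fetched public profile page; a real dead drop resolver would
# HTTP-GET a legitimate community site and receive something like this.
PROFILE_HTML = """
<html><body>
  <p>Just here for the games.</p>
  <span class="nickname">@@aHR0cHM6Ly9jMi5leGFtcGxlLmNvbQ==@@</span>
</body></html>
"""

def resolve_c2(page: str) -> str:
    # Pull out the marker-delimited blob and decode it into the C2 URL.
    match = re.search(r"@@([A-Za-z0-9+/=]+)@@", page)
    if match is None:
        raise ValueError("no dead-drop marker found")
    return base64.b64decode(match.group(1)).decode()

print(resolve_c2(PROFILE_HTML))  # https://c2.example.com
```

Because the page itself is benign, defenders see only a request to a popular website, and the operator can rotate C2 infrastructure simply by editing the hosted profile, which is what makes the technique resilient.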

ACR, Lumma, and Meduza Stealers

It’s worth noting that recent Lumma Stealer attacks have also been observed utilizing the same technique, making it easier for the adversaries to change the C2 domains at any time and render the infrastructure more resilient, according to the AhnLab Security Intelligence Center (ASEC).

The disclosure comes as CrowdStrike has revealed that threat actors are leveraging last week’s outage to distribute a previously undocumented information stealer called Daolpu, making it the latest example of the ongoing fallout stemming from the faulty update that has crippled millions of Windows devices.

The attack involves the use of a macro-laced Microsoft Word document that masquerades as a Microsoft recovery manual listing legitimate instructions issued by the Windows maker to resolve the issue, leveraging it as a decoy to activate the infection process.

The DOCM file, when opened, runs the macro to retrieve a second-stage DLL file from a remote server, which is decoded to launch Daolpu, a stealer malware equipped to harvest credentials and cookies from Google Chrome, Microsoft Edge, Mozilla Firefox, and other Chromium-based browsers.

It also follows the emergence of new stealer malware families such as Braodo and DeerStealer, even as cyber criminals are exploiting malvertising techniques promoting legitimate software such as Microsoft Teams to deploy Atomic Stealer.

“As cyber criminals ramp up their distribution campaigns, it becomes more dangerous to download applications via search engines,” Malwarebytes researcher Jérôme Segura said. “Users have to navigate between malvertising (sponsored results) and SEO poisoning (compromised websites).”

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, vulnerability | 2 Comments

CISA Warns of Exploitable Vulnerabilities in Popular BIND 9 DNS Software

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

The Internet Systems Consortium (ISC) has released patches to address multiple security vulnerabilities in the Berkeley Internet Name Domain (BIND) 9 Domain Name System (DNS) software suite that could be exploited to trigger a denial-of-service (DoS) condition.

“A cyber threat actor could exploit one of these vulnerabilities to cause a denial-of-service condition,” the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said in an advisory.

The four vulnerabilities are listed below –

  • CVE-2024-4076 (CVSS score: 7.5) – Due to a logic error, lookups that triggered serving stale data and required lookups in local authoritative zone data could have resulted in an assertion failure.
  • CVE-2024-1975 (CVSS score: 7.5) – Validating DNS messages signed using the SIG(0) protocol could cause excessive CPU load, leading to a denial-of-service condition.
  • CVE-2024-1737 (CVSS score: 7.5) – It is possible to craft excessively large numbers of resource record types for a given owner name, which has the effect of slowing down database processing.
  • CVE-2024-0760 (CVSS score: 7.5) – A malicious DNS client that sent many queries over TCP but never read the responses could cause a server to respond slowly or not at all for other clients.

Successful exploitation of the aforementioned bugs could cause a named instance to terminate unexpectedly, deplete available CPU resources, slow down query processing by a factor of 100, and render the server unresponsive.

The flaws have been addressed in BIND 9 versions 9.18.28, 9.20.0, and 9.18.28-S1 released earlier this month. There is no evidence that any of the shortcomings have been exploited in the wild.

The disclosure comes months after the ISC addressed another flaw in BIND 9 called KeyTrap (CVE-2023-50387, CVSS score: 7.5) that could be abused to exhaust CPU resources and stall DNS resolvers, resulting in a denial-of-service (DoS).

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, vulnerability | Leave a comment

Posts navigation

Older posts
Newer posts


Proudly powered by Admiration Tech News | Copyright ©2023 Admiration Tech News | All Rights Reserved