DigiCert has started revoking thousands of certificates impacted by a recently discovered verification issue, but some customers in critical infrastructure and other sectors are asking for more time.
The certificate authority (CA) informed customers on July 29 of an incident related to domain validation, saying that it needs to revoke some certificates within 24 hours due to strict CA/Browser Forum (CABF) rules.
The company initially said roughly 0.4% of applicable domain validations were impacted. A DigiCert representative clarified in discussions with stakeholders that 83,267 certificates and 6,807 subscribers are affected.
DigiCert said some of the impacted customers were able to quickly reissue their certificates, but others would not be able to do so within the 24-hour time frame.
“Unfortunately, many other customers operating critical infrastructure, vital telecommunications networks, cloud services, and healthcare industries are not in a position to be revoked without critical service interruptions. While we have deployed automation with several willing customers, the reality is that many large organizations cannot reissue and deploy new certificates everywhere in time,” said Jeremy Rowley, CISO at DigiCert.
DigiCert said in an updated notification that it has been working with browser representatives and customers in an effort to delay revocations under exceptional circumstances in order to avoid disruption to critical services.
However, the company highlighted that “all certificates impacted by this incident, regardless of circumstances, will be revoked no later than Saturday, August 3rd 2024, 19:30 UTC.”
Rowley noted that some customers have initiated legal action against DigiCert in an attempt to block the revocation of certificates.
The certificates are being revoked due to an issue related to the process used by DigiCert to validate that a customer requesting a TLS certificate for a domain is actually the owner or administrator of that domain.
One option is for customers to add a DNS CNAME record with a random value provided by DigiCert to their domain. The random value provided by DigiCert is prefixed by an underscore character to prevent collisions between the value and the domain name. However, the underscore prefix was not added in some cases since 2019.
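As a simplified illustration of the check that was skipped, the Python sketch below builds a CNAME challenge label with the required underscore prefix and validates it. The function names and label format are hypothetical; this is not DigiCert's actual validation code:

```python
import re
import secrets

def make_dcv_label(random_value: str) -> str:
    # CABF rules require the random value to sit under an underscore-prefixed
    # label, so it can never collide with a real hostname in the zone.
    return f"_{random_value}"

def is_valid_dcv_label(label: str) -> bool:
    # A compliant challenge label must start with an underscore; a bare
    # random value (the bug described above) fails this check.
    return bool(re.fullmatch(r"_[A-Za-z0-9]+", label))

random_value = secrets.token_hex(16)
label = make_dcv_label(random_value)
print(is_valid_dcv_label(label))         # True: underscore prefix present
print(is_valid_dcv_label(random_value))  # False: prefix missing, non-compliant
```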
In order to comply with CABF rules, DigiCert has to revoke certificates with an issue in their domain validation within 24 hours, without exception.
Andrew Ayer, founder of SSLMate and an expert in digital certificates, believes that DigiCert’s public notification about this incident “gets the security impact of the noncompliance completely wrong”.
“[…] this is truly a security-critical incident, as there is a real risk […] that this flaw could have been exploited to get unauthorized certificates. Revocation of the improperly validated certificates is security-critical,” Ayer said.
The European Union’s world-first artificial intelligence law formally took effect on Thursday, marking the latest milestone in the bloc’s efforts to regulate the technology.
Officials say the Artificial Intelligence Act will protect the “fundamental rights” of citizens in the 27-nation bloc while also encouraging investment and innovation in the booming AI industry.
Years in the making, the AI Act is a comprehensive rulebook for governing AI in Europe, but it could also act as a guidepost for other governments still scrambling to draw up guardrails for the rapidly advancing technology.
The AI Act covers any product or service offered in the EU that uses artificial intelligence, whether it’s a platform from a Silicon Valley tech giant or a local startup. The restrictions are based on four levels of risk, and the vast majority of AI systems are expected to fall under the low-risk category, such as content recommendation systems or spam filters.
“The European approach to technology puts people first and ensures that everyone’s rights are preserved,” European Commission Executive Vice President Margrethe Vestager said. “With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe.”
The provisions will come into force in stages, and Thursday’s implementation date starts the countdown for when they’ll kick in over the next few years.
AI systems that pose “unacceptable risk,” such as social scoring systems that influence how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces, will face a blanket ban by February.
Rules covering so-called general-purpose AI models like OpenAI’s GPT-4 system will take force by August 2025.
Brussels is setting up a new AI Office that will act as the bloc’s enforcer for the general purpose AI rules.
OpenAI said in a blog post that it’s “committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented.”
By mid-2026, the complete set of regulations, including restrictions on high-risk AI such as systems that decide who gets a loan or that operate autonomous robots, will be in force.
There’s also a fourth category for AI systems that pose a limited risk and face transparency obligations. Users of chatbots must be informed that they’re interacting with a machine, and AI-generated content like deepfakes will need to be labelled.
Companies that don’t comply with the rules face fines worth as much as 7% of their annual global revenue.
Threat actors have hijacked more than 35,000 registered domains in so-called Sitting Ducks attacks that allow claiming a domain without having access to the owner’s account at the DNS provider or registrar.
In a Sitting Ducks attack, cybercriminals exploit configuration shortcomings at the registrar level and insufficient ownership verification at DNS providers.
Researchers at DNS-focused security vendor Infoblox and at firmware and hardware protection company Eclypsium discovered that more than a million domains can be hijacked every day via Sitting Ducks attacks.
Multiple Russian cybercriminal groups have been using this attack vector for years and leveraged the hijacked domains in spam campaigns, scams, malware delivery, phishing, and data exfiltration.
Sitting Ducks details
Although the issues that make Sitting Ducks possible were first documented in 2016 [1, 2] by Matthew Bryant, a security engineer at Snap, the attack vector continues to be an easier way to hijack domains than other better-known methods.
For the attack to be possible, the following conditions are required:
– registered domain either uses or delegates authoritative DNS services to a provider other than the registrar
– the authoritative name server of record cannot resolve queries because it lacks information about the domain (lame delegation)
– the DNS provider needs to allow claiming a domain without properly verifying ownership or requiring access to the owner’s account
Variations of the attack include partially lame delegation (not all name servers are configured incorrectly) and redelegation to another DNS provider. However, if lame delegation and exploitable provider conditions are met, the domain can be hijacked.
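As a simplified model (not a live DNS probe), the sketch below classifies a domain's delegation state from the name servers listed at the registrar and the subset that actually answer authoritatively for the zone. The server names are hypothetical:

```python
def classify_delegation(delegated_ns, authoritative_for_domain):
    """Classify a delegation given the name servers delegated at the
    registrar/parent zone and the subset that actually hold the zone."""
    lame = [ns for ns in delegated_ns if ns not in authoritative_for_domain]
    if not lame:
        return "healthy"
    if len(lame) < len(delegated_ns):
        return "partially lame"  # some servers still answer correctly
    return "lame"                # no server answers: claimable if the DNS
                                 # provider skips ownership verification

# All delegated servers lack the zone: the fully lame, hijackable case
print(classify_delegation(["ns1.example-dns.net", "ns2.example-dns.net"], set()))
# prints "lame"
```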
Prerequisites diagram Source: Infoblox
Infoblox explains that attackers can use the Sitting Ducks method on domains that use authoritative DNS services from a provider that is different from the registrar, such as a web hosting service.
If the authoritative DNS or web hosting service for the target domain expires, an attacker can simply claim it after creating an account with the DNS service provider.
The threat actor can then set up a malicious website under the domain and configure DNS settings to resolve queries to attacker-controlled IP addresses, while the legitimate owner is unable to modify the DNS records.
“Sitting Ducks” overview Source: Infoblox
Attacks in the wild
Infoblox and Eclypsium report that they have observed multiple threat actors exploiting the Sitting Ducks (or Ducks Now Sitting – DNS) attack vector, with activity dating back to 2018 and 2019.
Since then, there have been at least 35,000 domain hijacking cases using this method. Typically, the cybercriminals held the domains for a short period but there were some instances where they kept them up to a year.
There have also been occurrences where the same domain was hijacked by multiple threat actors successively, who used it in their operations for one to two months and then passed it on.
GoDaddy is confirmed as a victim of Sitting Ducks attacks, but the researchers say there are six DNS providers that are currently vulnerable.
The observed clusters of activity leveraging Sitting Ducks are summarized as follows:
“Spammy Bear” – Hijacked GoDaddy domains in late 2018 for use in spam campaigns.
“Vacant Viper” – Started using Sitting Ducks in December 2019 and has hijacked roughly 2,500 domains yearly since then, using them in the 404TDS system that distributes IcedID and to set up command and control (C2) domains for malware.
“VexTrio Viper” – Started using Sitting Ducks in early 2020 to utilize the domains in a massive traffic distribution system (TDS) that facilitates the SocGholish and ClearFake operations.
Unnamed actors – Several smaller and unknown threat actors creating TDS, spam distribution, and phishing networks.
Defense tips
Domain owners should regularly review their DNS configurations for lame delegations, especially on older domains, and update the delegation records at the registrar or authoritative name server with proper, active DNS services.
Registrars are advised to perform proactive checks for lame delegations and alert owners. They should also ensure that a DNS service is established before propagating name server delegations.
Ultimately, regulators and standards bodies must develop long-term strategies to address DNS vulnerabilities and press DNS providers under their jurisdictions to take more action to mitigate Sitting Ducks attacks.
Privacy-focused search engine DuckDuckGo has been blocked in Indonesia by its government after citizens reportedly complained about pornographic and online gambling content in its search results.
The government’s choice to block DuckDuckGo isn’t surprising considering the cultural and religious context, with Indonesia being a Muslim country where gambling is prohibited and porn is viewed as morally unacceptable.
The government has previously blocked numerous pornography sites, Reddit, and Vimeo, and imposed temporary or partial restrictions on Tumblr, Telegram, TikTok, Netflix, and Badoo.
DuckDuckGo has now confirmed to BleepingComputer that Indonesia blocked its search engine in the country and that it sees no path to getting unblocked.
“We can confirm that DuckDuckGo has been blocked in Indonesia due to their censorship policies. Unfortunately, there is no current path to being unblocked, similar to how we’ve been blocked in China for about a decade now,” DuckDuckGo told BleepingComputer.
At the same time, Google Search remains accessible in Indonesia, which suggests that either the tech giant has implemented effective self-censorship mechanisms for its local search engine or its size makes blocking too disruptive for internet usage in the country.
Indonesians have resorted to using VPN software to bypass the government’s restrictions. However, the Indonesian government plans to block free VPNs, making gaining access to blocked sites costly.
Free VPNs next
Virtual Private Network (VPN) tools are commonly used to bypass censorship imposed by governments and internet service providers.
When using VPNs, users can make connections from other countries to once again access DuckDuckGo, but free offerings may soon be removed.
Minister of Communication and Information Budi Arie Setiadi stated that the government intends to restrict access to free VPN tools, as they know these are used to access blocked online gambling portals.
“Yesterday, Mr. Hokky (Ministry’s Director General of Informatics Applications) had a meeting with Mr. Wayan (Ministry’s Director General of Postal and Information Technology Operations), and we will shut down free VPNs to reduce access to networks for the general public to curb the spread of online gambling,” stated Setiadi on July 31, 2024.
“I specifically have to include the issue of online gambling to make it clear that this is the darkest side of digitalization.”
The same ministry announcement highlighted the risks of free VPN services, underlining personal data theft, malware infections, and making internet connectivity slow or unreliable.
Cybersecurity company CrowdStrike has been sued by investors who say it made false claims about its Falcon platform, after a bad security update led to a massive global IT outage that caused the stock price to tumble almost 38%.
The plaintiffs claim that the massive IT outage that occurred on July 19, 2024, proves CrowdStrike’s claims that their cybersecurity platform is thoroughly tested and validated are false.
As a result of this incident and its aftermath, CrowdStrike’s stock price has tumbled almost 38% from $343 on July 18 to $214, causing significant financial losses to investors.
The class action lawsuit submitted by the Plymouth County Retirement Association in the U.S. District Court of Austin, Texas, seeks compensatory damages for these losses.
A bad update causes a global IT outage
On July 19, CrowdStrike pushed out a faulty Falcon sensor update to Windows devices running the security software. The update slipped past CrowdStrike’s internal tests due to a bug in its content validator and inadequate testing procedures.
The update was received by at least 8.5 million Windows devices, causing an out-of-bounds memory read when processed by Falcon and leading to the operating system crashing with a Blue Screen of Death (BSOD).
CrowdStrike is widely used in enterprises, including airports, hospitals, government organizations, the media, and financial firms, making the outages catastrophic, costly, and even dangerous.
As restoring systems required staff to remove the faulty update manually, it took days for some companies to resume normal operations, leading to extended outages and delays.
While most have returned to normal operations, the fallout from the incident continues to unfold on multiple levels, including elevated cybercrime activity, loss of trust, and litigation threats.
According to the plaintiffs, the faulty Falcon update proved that contrary to CrowdStrike’s assurances around the diligence in its procedures and the efficacy and reliability of the Falcon platform, updates were inadequately tested and controlled, and the risk of outages is high.
The class action alleges that stockholders were defrauded by CrowdStrike’s knowingly false statements about the quality of its products and procedures.
“Because of their positions and access to material, nonpublic information, the Individual Defendants knew or recklessly disregarded that the adverse facts specified herein had not been disclosed to and were being concealed from the investing public and that the positive representations that were being made were false and misleading.” – Class action document.
To reflect the extent of the losses, the lawsuit notes that the CrowdStrike stock price fell by 11% on the day of the incident, another 13.5% on July 22, when Congress called CEO George Kurtz to testify, and another 10% on July 29, following news that Delta Air Lines, one of the impacted entities, had hired an attorney to seek damages.
The plaintiff alleges violations of Sections 10(b) and 20(a) of the Exchange Act and seeks compensation.
Financial impact
The IT outage caused by the CrowdStrike Falcon update has caused massive financial losses to impacted organizations, with many of them exploring litigation pathways to get some of it back.
Delta Air Lines CEO Ed Bastian previously stated that the outage forced the company to cancel 2,200 flights, resulting in losses estimated at $500,000,000.
The airline has already hired a law firm to seek compensation from CrowdStrike and Microsoft, the latter now in the crosshairs despite not being responsible for the incident.
A Facebook malvertising campaign targets users searching for AI image editing tools and steals their credentials by tricking them into installing fake apps that mimic legitimate software.
The attackers exploit the popularity of AI-driven image-generation tools by creating malicious websites that closely resemble legitimate services, tricking potential victims into infecting themselves with information-stealing malware, according to Trend Micro researchers who analyzed the campaign.
The attacks start with phishing messages sent to Facebook page owners or administrators, directing them to fake account protection pages designed to trick them into providing their login information.
After stealing their credentials, the threat actors hijack their accounts, take control of their pages, publish malicious social media posts, and promote them via paid advertising.
“We discovered a malvertising campaign involving a threat actor that steals social media pages (typically related to photography), changing their names to make them seem connected to popular AI photo editors,” said Trend Micro threat researcher Jaromir Horejsi.
“The threat actor then creates malicious posts with links to fake websites made to resemble the actual website of the legitimate photo editor. To increase traffic, the perpetrator then boosts the malicious posts via paid ads.”
Fake AI photo editor website (Trend Micro)
Facebook users who click the URL promoted in the malicious ad are sent to a fake web page impersonating legitimate AI photo editing and generating software, where they are prompted to download and install a software package.
However, instead of AI image editing software, the victims install the legitimate ITarian remote desktop tool configured to launch a downloader that automatically deploys the Lumma Stealer malware.
The malware then quietly infiltrates their system, allowing the attackers to collect and exfiltrate sensitive information like credentials, cryptocurrency wallet files, browser data, and password manager databases.
This data is later sold to other cybercriminals or used by the attackers to compromise the victims’ online accounts, steal their money, and promote further scams.
Attack flow (Trend Micro)
“Users should enable multi-factor authentication (MFA) on all social media accounts to add an extra layer of protection against unauthorized access,” Horejsi advised.
“Organizations should educate their employees on the dangers of phishing attacks and how to recognize suspicious messages and links. Users should always verify the legitimacy of links, especially those asking for personal information or login credentials.”
In April, a similar Facebook malvertising campaign promoted a malicious page impersonating Midjourney to target almost 1.2 million users with the Rilide Stealer Chrome browser extension.
A recent Google Chrome update has broken the drag-and-drop feature in the Downloads bubble that previously allowed you to drag and drop downloaded files onto any website or tab in the browser.
Google Chrome’s downloads menu originally appeared in a bar at the bottom of the browser, but this changed last year when it was replaced with a downloads bubble, which appears within the toolbar.
Google Chrome download bubble Source: BleepingComputer
As first reported by BleepingComputer, hundreds of users are now saying that the drag-and-drop feature no longer works after recent Google Chrome updates.
“Specifically files can’t be dragged into another chrome window, or be used to open a new chrome window (opening the file with chrome),” reads a post in the Google support forums.
“Dragging onto local desktop or into other programs seems to work fine. The downloads page, also works fine. Only the downloads bubble. I was able to recreate the issue by updating chrome, my current version is: 127.0.6533.73 (Official Build) (64-bit).”
In a Chromium issue found by BleepingComputer, Google Chrome engineers say reports about this bug began on July 29th and affect versions 127.0.6533.73 and 126.0.6478.185.
Google determined it was caused by a new performance-enhancing feature, ‘UIPumpImprovementsWin,’ which was rolled out to 50% of the Chrome Stable users on Sunday night.
A fix has been created by rolling back the UIPumpImprovementsWin change, which should be rolled out to users soon.
A Chinese hacking group tracked as StormBamboo has compromised an undisclosed internet service provider (ISP) to poison automatic software updates with malware.
Also tracked as Evasive Panda, Daggerfly, and StormCloud, this cyber-espionage group has been active since at least 2012, targeting organizations across mainland China, Hong Kong, Macao, Nigeria, and various Southeast and East Asian countries.
On Friday, Volexity threat researchers revealed that the Chinese cyber-espionage gang had exploited insecure HTTP software update mechanisms that didn’t validate digital signatures to deploy malware payloads on victims’ Windows and macOS devices.
“When these applications went to retrieve their updates, instead of installing the intended update, they would install malware, including but not limited to MACMA and POCOSTICK (aka MGBot),” cybersecurity company Volexity explained in a report published on Friday.
To do that, the attackers intercepted and modified victims’ DNS requests and poisoned them with malicious IP addresses. This delivered the malware to the targets’ systems from StormBamboo’s command-and-control servers without requiring user interaction.
For instance, they took advantage of 5KPlayer requests to update the youtube-dl dependency to push a backdoored installer hosted on their C2 servers.
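As an illustration of the safeguard these update mechanisms lacked, the sketch below shows a minimal integrity check in Python: the client pins an expected SHA-256 digest obtained over an authenticated channel and rejects any payload that doesn't match. The payloads and digest here are invented for the example, and real updaters should verify a vendor signature rather than just a hash:

```python
import hashlib
import hmac

def verify_update(payload: bytes, expected_digest: str) -> bool:
    # Reject any update whose digest differs from the pinned value;
    # compare_digest avoids timing side channels on the comparison itself.
    digest = hashlib.sha256(payload).hexdigest()
    return hmac.compare_digest(digest, expected_digest)

good = b"legitimate update payload"
tampered = b"backdoored installer from a poisoned DNS answer"
expected = hashlib.sha256(good).hexdigest()  # published over TLS / baked into the app

print(verify_update(good, expected))      # True: digest matches
print(verify_update(tampered, expected))  # False: swapped-in payload rejected
```

Because the hijacked updaters fetched payloads over plain HTTP with no such check, a poisoned DNS answer was all StormBamboo needed to substitute its installer.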
After compromising the target’s systems, the threat actors installed a malicious Google Chrome extension (ReloadText), which allowed them to harvest and steal browser cookies and mail data.
StormBamboo attack flow (Volexity)
“Volexity observed StormBamboo targeting multiple software vendors, who use insecure update workflows, using varying levels of complexity in their steps for pushing malware,” the researchers added.
“Volexity notified and worked with the ISP, who investigated various key devices providing traffic-routing services on their network. As the ISP rebooted and took various components of the network offline, the DNS poisoning immediately stopped.”
In April 2023, ESET threat researchers also observed the hacking group deploying the Pocostick (MGBot) Windows backdoor by abusing the automatic update mechanism for the Tencent QQ messaging application in attacks targeting international NGOs (non-governmental organizations).
Almost a year later, in July 2024, Symantec’s threat hunting team spotted the Chinese hackers targeting an American NGO in China and multiple organizations in Taiwan with new versions of the Macma macOS backdoor and the Nightdoor Windows malware.
In both cases, although the attackers’ skill was evident, the researchers believed it was either a supply chain attack or an adversary-in-the-middle (AITM) attack but weren’t able to pin down the exact attack method.
A novel Linux kernel cross-cache attack named SLUBStick has a 99% success rate in converting a limited heap vulnerability into an arbitrary memory read-and-write capability, letting researchers elevate privileges or escape containers.
The discovery comes from a team of researchers from the Graz University of Technology who demonstrated the attack on Linux kernel versions 5.9 and 6.2 (latest) using nine existing CVEs in both 32-bit and 64-bit systems, indicating high versatility.
Additionally, the attack worked with all modern kernel defenses like Supervisor Mode Execution Prevention (SMEP), Supervisor Mode Access Prevention (SMAP), and Kernel Address Space Layout Randomization (KASLR) active.
SLUBStick will be presented in detail at the upcoming USENIX Security Symposium later this month, where the researchers will showcase privilege escalation and container escape on the latest Linux kernel with state-of-the-art defenses enabled.
In the meantime, the published technical paper contains all the details about the attack and the potential exploitation scenarios.
SLUBStick details
One way the Linux kernel manages memory efficiently and securely is by allocating and de-allocating memory chunks, called “slabs,” for different types of data structures.
Flaws in this memory management process may allow attackers to corrupt or manipulate data structures, in what are known as cross-cache attacks. However, these attacks succeed only roughly 40% of the time and typically lead to system crashes sooner or later.
SLUBStick exploits a heap vulnerability, such as a double-free, use-after-free, or out-of-bounds write, to manipulate the memory allocation process.
CVEs successfully used in the researchers’ experiments Source: stefangast.eu
Next, it uses a timing side channel to determine the exact moment of memory chunk allocation/deallocation, allowing the attacker to predict and control memory reuse.
Using this timing information raises the success of the cross-cache exploitation to 99%, making SLUBStick very practical.
Measured success rates Source: stefangast.eu
The conversion of the heap flaw into arbitrary memory read-and-write primitive is done in three steps:
1. Free specific memory chunks and wait for the kernel to reuse them.
2. Reallocate these chunks in a controlled manner, ensuring they are repurposed for critical data structures such as page tables.
3. Once the chunks are reclaimed, overwrite the page table entries, gaining the ability to read and write any memory location.
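The steps above can be illustrated with a deliberately simplified Python model of LIFO free-list reuse. This is a toy allocator, not the kernel's SLUB implementation, and all names are invented for the example:

```python
class SlabCache:
    """Toy slab cache with LIFO free-list reuse, enough to show why a
    freed chunk can come back holding a different, critical structure."""

    def __init__(self):
        self.next_addr = 0x1000
        self.free_list = []   # most recently freed chunk is handed out first
        self.contents = {}

    def alloc(self, what):
        addr = self.free_list.pop() if self.free_list else self._fresh()
        self.contents[addr] = what
        return addr

    def free(self, addr):
        del self.contents[addr]
        self.free_list.append(addr)

    def _fresh(self):
        self.next_addr += 0x40
        return self.next_addr

cache = SlabCache()
victim = cache.alloc("vulnerable object")  # step 1: chunk the heap bug can still touch
cache.free(victim)                         # free it, keeping the dangling reference
pt = cache.alloc("page-table entry")       # step 2: allocator reuses the same chunk
print(pt == victim)                        # True: the dangling reference now
                                           # aliases the page-table entry (step 3)
```

In the real attack, the timing side channel is what lets the attacker know the reuse happened at this exact moment rather than guessing, which is where the 99% success rate comes from.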
SLUBStick overview Source: stefangast.eu
Real-world impact
As with most attacks involving a side channel, SLUBStick requires local access on the target machine with code execution capabilities. Furthermore, the attack requires the presence of a heap vulnerability in the Linux kernel, which will then be used to gain read and write access to the memory.
While this may make the attack appear impractical, it does introduce some benefits to attackers.
Even for attackers with code execution capabilities, SLUBStick provides the ability to achieve privilege escalation, bypass kernel defenses, perform container escapes, or use it as part of a complex attack chain.
Privilege escalation can be used to elevate privileges to root, allowing unlimited operations, while container escape can be used to break from sandboxed environments and access the host system.
Additionally, in the post-exploitation phase, SLUBStick could modify kernel structures or hooks to maintain persistence, making malware harder for defenders to detect.
Tampering with ‘/etc/passwd’ data Source: stefangast.eu
Those who want to dive deeper into SLUBStick and experiment with the exploits used by the Graz University researchers can find them in the researchers’ GitHub repository.
As platform engineering continues to grow in popularity inside enterprises, an all-new training curriculum is being launched that will provide one of the first platform engineering certifications in the industry.
The training and certification, offered by the global platform engineering developer’s community PlatformEngineering.org, will begin on Aug. 6 and include two courses and a certification exam that will help IT administrators gain new skills in this burgeoning field.
The courses include an eight-week Platform Engineering Fundamentals class and an MVP (Minimum Viable Platform) to IDP (Internal Developer Platform) course, followed by a certification exam combining platform engineering theory with hands-on tasks.

Each of the courses will be held for eight to 10 weeks with live, interactive, online meetup sessions taught by platform engineering experts and some of the top platform architects in the industry, according to the group. The live nature of the courses will provide instant feedback, back-and-forth Q&As and an up-to-the-minute feel for the instruction. The live sessions are expected to run for up to 2.5 hours per class module.
“The number-one question from the community is ‘How do I become a platform engineer?’” Luca Galante, the lead instructor and vice president of product and growth for platform engineering vendor, Humanitec, told The New Stack. “So, we wanted to create a resource for IT admins [based on] the three or four years of really good content we have under our belt.”
The certification course will be a combination of theory and hands-on internal developer platform building. The first course, the fundamentals instruction, is ready to go for the students and includes eight weekly modules on topics from an introduction to platform engineering to how to build an internal developer platform. Other modules include getting started with reference architectures, designing golden paths for developers, finding the right abstractions, infrastructure platform engineering, how to build minimum viable platforms and selling your minimum viable platforms to key stakeholders inside enterprises.
The second course is still being developed, but it will go beyond the basics to teach full rollout strategies and adoption paths across multiple workflows and teams, according to the group.
The fundamentals course is priced at $2,149, but registrants can use the code EARLYBIRD20 when registering to drop the price by 20% to $1,720. Once the development of the second course is completed, Galante said he expects that a special bundle price will be available for the combined courses and the exam for the students in the program.
The curriculum is being built to ensure that students who successfully complete the courses and exam will be able to demonstrate an elevated level of understanding and knowledge of platform engineering, from its theory to firsthand skills and expertise.
By using one of these custom, standardized internal developer platforms (IDPs) and tools inside a company, developers are freed to do their core work activities, rather than spending valuable time searching for applications they need to create and test their code. These IDPs provide developers with the best code-building tools in an easy-to-use environment built to provide automation, standardization and flexibility for busy application developers.
Who Is Interested in Platform Engineering?
Many of the first students coming into the new certification program are individuals who want to learn the skills and fundamentals for their own edification, said Galante. “They are DevOps engineers or some junior engineers that want to have these skills. But we are already seeing teams purchasing the course. And we are getting applicants from large companies where they have systems administrators or IT admins who they want to upskill to become platform engineers or upskill their existing platform team to educate them around best practices.”
The guest lecturers who will participate in the courses include experts like Manuel Pais, co-author of the book, Team Topologies, and Ajay Chankramath, former head of platform engineering at the technology consultancy Thoughtworks.
An Enrollee Shares Why He Is Taking the Certification Course
Giovanny Velez, a platform engineering technical lead for a cloud-based applications vendor in Colombia, told The New Stack that he is taking the certification coursework to gain more experience and understanding of this still-emerging technology from a reliable and knowledgeable source.
“This platform engineering course and certification will significantly boost my professional growth and career prospects and will validate my expertise, increasing my credibility and making me a stronger candidate for advanced roles,” Velez wrote in an email. “It will open new career opportunities and potentially accelerate my path to senior positions.”
Velez said the courses will also teach him industry best practices and innovative tools that will improve his problem-solving abilities and help him stay current with the latest trends in platform engineering. “This knowledge is essential in our team as we are starting to implement the practice, and it helps us to streamline the process,” he said.
Velez, who is paying for the course himself, said he learned of the program after seeing announcements on the PlatformEngineering.org webpage and in its Slack channel.
“The program will also provide valuable networking opportunities, connecting me with other professionals in the field,” he said. “These connections can lead to knowledge sharing and future collaborations. Overall, I think that I will be more prepared to tackle the challenges of modern, cloud native environments and deliver increased value in my role. I see this field growing constantly and I see more job offers in the market.”
As lead for his company’s recently formed platform engineering team, Velez said the company is working to establish communication channels and that he will share his experiences with the course at his company “so the rest of the teams know what we will be doing and how.”
Analyst Reactions to Platform Engineering Training
Two analysts contacted by The New Stack said they believe that platform engineering training and certifications are a smart move for enterprises.
“Training is always smart, particularly if it is done by a company that offers related products and services because they will be up to date on the kind of unique, new and changing problems that IT is dealing with,” said Rob Enderle, principal analyst with the Enderle Group.
Enderle said he often hears from IT managers that employees are learning on the job rather than getting advanced training, which is creating huge deployment and operational problems.
“They need training like this to reduce those problems,” he added. “Learning on the job or from some obsolete lesson plan continues to be problematic. I recommend training from current practitioners so that those learning can hit the ground running and become immediately able to do the job right. It makes for better implementations, and it makes employees more valuable and less likely to get laid off.”
Another analyst, Dan Olds, CEO of Olds Research, agrees.
“The concept of platform engineering is what the hyperscale organizations — think Amazon, Google, and Meta — pioneered to allow them to efficiently operate at vast scale,” said Olds. “The idea behind it is to have a specific set of development tools — and only that set — which pays dividends when it comes to scalability, reliability, support and troubleshooting, to name a few. This is something that is catching on with other large organizations that have lots of applications in flight all the time as it simplifies the development, deployment and support functions.”
Such an approach simplifies the overall application development/deployment and life-cycle management processes for companies, said Olds. “In many organizations today, developers are able to pick and choose from a huge slate of development tools, which can lead to problems down the road when it comes to deployment, troubleshooting and the inevitable changes — unless those functions are performed by the engineer or engineers who actually developed the app. Standardizing development tools ensures that everyone is using the same size building blocks and understands how they fit together.”
Olds said he recommends that enterprise IT shops investigate platform engineering, along with their counterparts in a company’s business units.
“I can see where this approach would increase the speed of application development and result in smoother deployments,” he said. At the same time, while this holistic platform engineering approach can be beneficial for organizations, getting all developers on board with it could prove difficult.
“I think that it might be a hard sell to some developers,” said Olds. “They are experts at the set of tools they use and have reasons why they do not use alternatives. It might take some work to get them out of their comfort zones and get them on board with this new approach.”
React and its virtual DOM paradigm have been at the forefront of frontend development for a decade now, but more recent frameworks have swung toward simplicity and web-native features. Astro is a great example, and it now has the support of Netlify, a leading player in the current web landscape.
Earlier this month Netlify announced Astro as its “Official Deployment Partner,” which in practice means it will contribute $12,500 each month “towards the ongoing open source maintenance and development of Astro.”
As Netlify CEO Matt Biilmann noted, Astro was “the first framework to popularize the idea of island architecture, where islands are interactive widgets floating in a sea of otherwise static, lightweight, server-rendered HTML.” As part of the new official partnership, Netlify will help Astro roll out a related new feature: Server Islands. Astro defines this as a “solution to integrate high-performance static HTML and dynamic server-generated components together.”
Astro concept drawing of server islands.
How Is Astro Different From React Frameworks?
We’ll get back to server islands shortly. But first, let’s look at why Astro has become a trending framework. Conceptually, the main difference between Astro and the React-based frameworks that came before it is this: most of the work is done server-side, instead of client-side. Here’s how Astro’s founders explained it in their introductory post, back in June 2021:
“Astro renders your entire site to static HTML during the build. The result is a fully static website with all JavaScript removed from the final page. No monolithic JavaScript application required, just static HTML that loads as fast as possible in the browser regardless of how many UI components you used to generate it.”
Astro is perhaps closest to a static site generator, like Eleventy or Hugo, but it also cunningly incorporates other approaches too, including React itself. “In Astro, you compose your website using UI components from your favorite JavaScript web framework (React, Svelte, Vue, etc),” wrote the founders in 2021. The trick was that all of that complexity was rendered into HTML in the build phase, meaning it was never foisted onto the user.
Astro creator Fred K. Schott demoed the product in April 2021.
But undoubtedly the key to Astro’s subsequent success is the “islands architecture.” That’s because, for a web application of any scale, chances are JavaScript will be needed at some point. But whereas previous frameworks, like Angular and Next.js, focused on client-side rendering — the so-called single-page application (SPA) approach — Astro wanted to limit client-side rendering to certain parts of an app (the “islands”). While both Angular and Next.js can implement partial hydration and server-side rendering, Astro’s architecture inherently avoids client-side JavaScript by default, unless explicitly required.
“When a component needs some JavaScript, Astro only loads that one component (and any dependencies),” explained the Astro founders. “The rest of your site continues to exist as static, lightweight HTML.”
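That opt-in happens per component, via a client directive in the page template. A minimal sketch of what this looks like (the component names are hypothetical; “client:visible” is one of Astro’s standard hydration directives):

```astro
---
// Static by default: this page ships no JavaScript of its own.
import Hero from '../components/Hero.astro';                 // rendered to plain HTML
import ImageCarousel from '../components/ImageCarousel.jsx'; // a React island
---
<Hero />
<!-- Only this component's JavaScript (plus dependencies) is sent to the
     browser, and only once the element scrolls into view. -->
<ImageCarousel client:visible />
```

Everything without a directive stays as static HTML; the carousel alone becomes an interactive island.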
Astro credited Jason Miller with coining “islands architecture,” pointing to an August 2020 article on Miller’s blog. He in turn credited Etsy frontend architect Katie Sylor-Miller for coining the “Component Islands” pattern in 2019.
There was some skepticism about “web islands” when they were first discussed in 2021.
In April 2021, Astro creator Fred K. Schott demonstrated Astro for the first time. He acknowledged that “partial hydration” (a key part of the islands architecture) had been difficult to achieve in practice to this point. He said that frameworks like Next.js and Gatsby had to try and “take this application and pull it apart” if they wanted to implement partial hydration. That could cause significant problems, so Astro took a completely different approach.
“What we try and do is, by default, it’s all server-rendered,” said Schott in the April 2021 podcast. “So by default, you’re speaking this language of a server-rendered document, and then individual components, instead of being pulled out of an application, are actually being injected into a document. So it’s a totally different kind of static-first approach, where the result is that you actually have to opt into all of your payloads, and everything becomes much lighter as a result.”
Server Islands
Fast forward to 2024 and Astro has become a rising web framework. As well as the technical advantages outlined above, Astro offers a kind of ‘back to basics’ approach to web development that harkens back to early Web 2.0 frameworks, like Ruby on Rails and Django, which were also server-rendered. Now that Astro has become a viable alternative to the likes of Next.js, it is trying to expand its capabilities. Enter “Server Islands.”
In its announcement post, Astro referenced the original islands architecture as “interactive client-side components.” With Server Islands, the idea is to add “dynamic server-generated components” to the mix as well. In practice, you use “server:defer” to defer rendering a particular component until after the page loads. So it’s a kind of caching mechanism, similar perhaps to Next.js “partial prerendering”, or at least “solving the same problem,” as Schott said on X.
Server islands comparison to Next.js tech.
In a recent four-and-a-half-hour podcast with Ryan Carniato, the creator of Solid.js, Astro developer Matthew Phillips discussed Server Islands. “What it really is, is a way to differentiate types of content to run at different times,” he explained, adding later that Server Islands “essentially cache different parts of your page differently.”
Carniato noted that when Astro first came out, they positioned the client-side islands as being “interactive,” whereas the rest of the HTML was “non-interactive.” He pointed out that Astro is now using the word “static” to indicate the non-changing content, and “dynamic” to mean a server-side island — for example, an “x rooms left” button on an Airbnb-type website, which requires checking with a database.
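A dynamic component of that sort can be sketched roughly as follows (the RoomsLeft component is hypothetical; “server:defer” and the “fallback” slot are the Astro features described above):

```astro
---
// Hypothetical component that checks availability against a database.
import RoomsLeft from '../components/RoomsLeft.astro';
---
<!-- The static listing content renders (and caches) as usual -->
<h1>Seaside Apartment</h1>
<!-- Deferred: rendered on the server after the initial page load,
     with the fallback slot shown in the meantime. -->
<RoomsLeft server:defer>
  <span slot="fallback">Checking availability...</span>
</RoomsLeft>
```

The static shell arrives immediately, while the server fills in the dynamic island afterward.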
Ryan Carniato and Matthew Phillips discuss server islands.
Note that although Server Islands are different to traditional Astro islands, a component can be both at the same time. Netlify explains this well in a recent blog post:
“It’s worth clarifying that Astro Server Islands are different to standard Astro Islands, which improve performance by allowing you to selectively ‘hydrate’ components, i.e. load JavaScript only for components that need it, instead of making the entire page interactive. In fact, a component can be both an Island and a Server Island!”
Back to the Future
One can’t help but wonder whether Astro is slowly falling into the same trap as Next.js, in that it will get progressively more complex over time. But the concept of Server Islands isn’t that difficult to grok, so currently it feels like a sensible extension of the core “islands architecture” concept of Astro.
What I love about Astro is that it gets us back to that server-based paradigm that we grew up with on the web in the 1990s (at least, those of us of a certain vintage), and which early Web 2.0 frameworks like Ruby on Rails and Django extended. But Astro adds component-based development to the mix, including options to use React and Vue, which makes it an entirely modern approach.
Ultimately, anything that takes the bulk of the JavaScript load away from the client (read: end users) is a good thing, and Astro is certainly helping in that regard.
“After giving it a lot of thought, we made the decision to discontinue new access to a small number of services, including AWS CodeCommit,” AWS Chief Evangelist Jeff Barr wrote, sharing a prepared message on the X social media service Tuesday.
Although existing customers can continue to use CodeCommit for the time being, AWS has stopped accepting new customers. And it has not given a date for when the service will be shuttered.
“While we are no longer onboarding new customers to these services, there are no plans to change the features or experience you get today, including keeping them secure and reliable,” Barr wrote in an extended Tweet. “We also support migrations to other AWS or third-party solutions better aligned with your evolving needs.”
How to delete a CodeCommit repository — after migrating to another service (AWS)
“After migration, you have the option to continue to use your current AWS CodeCommit repository, but doing so will likely require a regular sync operation between AWS CodeCommit and the new repository provider,” the cloud giant provider advised.
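A common way to perform that one-time migration, and any subsequent syncs, is with a mirror clone. A sketch of the approach (the repository URLs below are placeholders, not real endpoints):

```shell
# Mirror-clone the existing CodeCommit repository (placeholder URL)
git clone --mirror https://git-codecommit.us-east-1.amazonaws.com/v1/repos/my-repo
cd my-repo.git

# Push all branches and tags to the new provider (placeholder URL)
git push --mirror https://example.com/my-org/my-repo.git

# For an ongoing sync, periodically re-fetch and re-push
git fetch --all --prune
git push --mirror https://example.com/my-org/my-repo.git
```

The --mirror flags copy every ref, so the new repository ends up as an exact replica of the old one.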
Nonetheless, the service had trouble gaining a foothold in the competitive code repository market, despite the natural appeal for AWS shops to stick with AWS for additional services.
TNS Analyst Lawrence Hecht noted that, in last year’s JetBrains survey, 3.2% of the developers surveyed used CodeCommit. Even among developers whose company primarily uses AWS for the cloud, only 9% used AWS CodeCommit.
Those same AWS-centric accounts were much more likely to say their company was using BitBucket (39%), GitLab (45%) and GitHub (63%).
“That is not a large user base, but it will be interesting to see where those people will migrate to,” noted Hecht in a Slack message.
https://datawrapper.dwcdn.net/p0BfG/1
One place where AWS CodeCommit had a strong user base was Japan. In the JetBrains survey, 11% of developers there said their company uses it, a larger user base than BitBucket in that market.
Despite the marginal use of CodeCommit, many observers still had feelings about the matter.
“Given AWS is a competitor to Azure, it’s so odd to see AWS making a business case for their customers to move” to Microsoft, wrote Gergely Orosz, author of the Pragmatic Engineer newsletter, in an X thread.
“To me, this is a clear sign to not adopt any dev-related tooling from AWS. It’s not the business they want to be in,” replied Acorn Labs chief architect and co-founder Darren Shepherd.
For GitLab, AWS shuttering CodeCommit is a sign that the market for code hosting has matured.
“The market has moved from point solutions to platforms that address the entire software development lifecycle,” wrote Emilio Salvador, GitLab vice president for strategy and developer relations, in a statement. “Buyers are now looking to platforms that provide one workflow that unifies developer, security, and operations teams with integrated native security.”
GitLab has set up two options for migrations from CodeCommit, using either self-managed GitLab or through the GitLab.com hosted service.
“Self-managed customers can install, administer, and maintain their GitLab instance on bare metal, VMs, or containers. GitLab.com requires no installation,” Salvador explained.
Other companies in the space are focusing on how their products can help in the transition.
“Migration to a new source code management tool is always challenging. The right considerations and migration strategies can significantly help with the process,” wrote Patrick Wolf, principal product manager at Harness, a DevOps software provider, in an e-mail to TNS. “Some important considerations for selecting a new Source Code Manager are integration with a DevOps platform, security and governance features, and developer productivity features.”
Video and gaming streaming service Netflix has released as open source the workflow orchestrator that its army of data scientists and analysts use every day to understand user behaviors and other large-scale data-driven trends.
The Maestro workflow orchestrator, released under an Apache 2.0 license, was designed to support hundreds of thousands of workflows and has completed up to 2 million jobs in a single day for the media company.
How Maestro Works
According to company engineers, it is highly scalable, extensible and able to meet strict service level objectives (SLO) even during spikes of traffic.
It is built on top of a range of open source technologies, namely Git, Java (21), Gradle and Docker.
Maestro can be invoked from the cURL command line, which provides the ability to create, run, and delete a workflow and an associated batch of data. The workflow is defined in JSON, and the user’s business logic can be packaged into Docker images, Jupyter notebooks, bash scripts, SQL, Python, and other formats.
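As a rough illustration only (the field names here are hypothetical, not Maestro’s actual schema), a JSON workflow definition of this kind might look something like:

```json
{
  "workflow": {
    "id": "daily-report",
    "steps": [
      { "id": "extract", "type": "docker",   "image": "etl-job:latest" },
      { "id": "report",  "type": "notebook", "path": "report.ipynb", "after": ["extract"] }
    ]
  }
}
```

Each step packages a unit of business logic (a container, a notebook, a script) and declares its dependencies, which the orchestrator uses to schedule execution.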
Behind the scenes, Maestro manages the entire lifecycle of a workflow, handling retries, queuing, and task distribution to compute engines. Not only does it support Directed Acyclic Graphs (DAGs) — table stakes in the AI-driven world of 2024 — but also cyclic workflows and multiple reusable patterns, including foreach loops, subworkflows, and conditional branching.
“It supports a wide range of workflow use cases, including ETL pipelines, ML workflows, AB test pipelines, pipelines to move data between different storages,” a group of Netflix engineers collectively wrote in a recent blog post announcing the release. “Maestro’s horizontal scalability ensures it can manage both a large number of workflows and a large number of jobs within a single workflow.”
Birth of Maestro
Netflix is no stranger to open source software, having released many tools it developed internally as open source. System stress-testing tool Chaos Monkey was released in 2011, and inspired a whole generation of chaos testing tools. Other open source projects that Netflix has spun off include the routing gateway Zuul and the microservices routing engine Conductor, since deprecated.
Netflix first let the world know about Maestro in 2022 in a blog post that explained its origins. The orchestrator then being used, called Meson, was straining under the workloads of thousands of daily jobs, particularly around peak usage time.
“Meson was based on a single leader architecture with high availability. As the usage increased, we had to vertically scale the system to keep up and were approaching AWS instance type limits,” the engineers wrote in the 2022 post.
Worse, the workloads were expected to increase by at least 100% per year, and the sizes of the workflows were expected to grow as well.
From the start, Maestro was designed to be highly scalable and extensible. It was built on a DAG architecture, where each workflow is composed of a series of steps, and each step can have dependencies, triggers and other conditionals. The business logic of each workflow is run in isolation, guaranteeing SLOs are met. All the services are designed to be stateless so they can be scaled out as needed.
The Registry Editor is a powerful application that allows you to access and edit the configuration settings of the Windows operating system.
The Windows Registry is a database containing various settings used by the operating system and installed software applications.
However, it is essential to be careful when using the Registry Editor, as making incorrect changes to the Registry can cause serious problems with your system, including preventing it from booting correctly.
The Windows Registry database stores the configuration options and settings for the Windows operating system and software installed on your computer.
The Registry is organized in a hierarchical structure containing keys and values, with five different ‘root’ keys at the top, as shown below, that serve a particular purpose.
The Windows Registry Editor showing the five root keys Source: BleepingComputer
The five different root keys in the Windows registry and their purposes are:
HKEY_CLASSES_ROOT (HKCR): This key stores information about file associations and OLE object classes.
HKEY_CURRENT_USER (HKCU): This key stores information about the current user’s settings and a specific user’s preferences for various applications.
HKEY_LOCAL_MACHINE (HKLM): This key stores information that affects the entire computer, regardless of the logged-in user. These settings are for the computer’s hardware, operating system configuration, and software settings that affect all users.
HKEY_USERS (HKU): This key stores information about all users who have logged on to the computer.
HKEY_CURRENT_CONFIG (HKCC): This key stores information about the current hardware configuration of the computer.
For the most part, you will be modifying keys and values under the HKCU and HKLM root keys.
Registry keys are like folders containing other keys and values used to organize and group related settings.
For example, the HKEY_CURRENT_USER key contains settings for the current user account, such as a user’s desktop wallpaper, installed application preferences, and personal settings.
While keys are like folders, Registry values are the files stored within them, containing the specific settings for a particular key. They can have different data types, including text, numbers, and binary data.
There are several different registry value data types:
REG_SZ – A string value that contains text data, such as a username or folder path.
REG_DWORD – A numeric value that contains a 32-bit integer.
REG_QWORD – A numeric value that contains a 64-bit integer.
REG_BINARY – A value that contains binary data, such as an image or sound file.
REG_MULTI_SZ – A string value that contains multiple strings separated by null characters. This is often used for lists or arrays of values.
Each registry value type is used for a specific purpose, and understanding them can help you better manage and customize your Windows operating system.
Viewing Registry keys and values in the Windows Registry Editor Source: BleepingComputer
However, for the most part, when editing the Registry, you will be modifying REG_SZ values for text data and REG_DWORD values for numeric data, as they are the most common data types used to store user-editable data.
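For reference, here is what such values look like in an exported .reg file (the key and value names below are made up purely for illustration):

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ExampleApp]
; A REG_SZ (string) value
"InstallPath"="C:\\Program Files\\ExampleApp"
; A REG_DWORD (32-bit number) value, written in hexadecimal
"MaxRetries"=dword:00000005
```

Double-clicking a file like this would merge the key and both values into the Registry under HKEY_CURRENT_USER.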
Using the Windows Registry Editor
The Windows Registry Editor is a software application created by Microsoft and built into all versions of Windows that allows you to edit the data in the Registry.
The Registry Editor application is located at C:\Windows\regedit.exe and, for the most part, requires administrative privileges to use properly.
To open the Registry Editor, press the Windows key + R to open the Run dialog box. Next, type “regedit” in the Open field and press Enter. If you are shown a prompt asking if you would like to allow the program to make changes, select ‘Yes‘ to continue.
When the Registry Editor is opened, you will see that the window is divided into two panes. The left pane displays a hierarchical tree structure of the Registry’s various keys (folders) and subkeys (subfolders).
The right pane displays the values and data associated with the selected key in the left pane.​
The Windows Registry Editor Source: BleepingComputer
To open a specific Registry key, use the left pane to navigate to the key you want to edit. To expand a key, click on the plus sign (+) next to it.
To collapse a key, click the minus sign (-) next to it.
When you click on a key in the left pane, the values stored within it will be shown in the right pane.
Now that we know how to navigate the Windows Registry, let’s learn how to modify data stored within it.
CREATE A NEW REGISTRY KEY
When configuring new settings in the Windows Registry, you may need to create a key at some point.
Use these steps to create a new key in the Registry:
In the left pane, right-click on the key under which you want to create a new subkey.
Select New -> Key.
Enter a name for the new key and press Enter.
CREATING A REGISTRY VALUE
As Registry values contain the data that configure how an application or Windows works, it is common to need to create Registry values.
Use these steps to create a Registry value:
Navigate to the key where you want to create the value.
Right-click on the key and select “New” and then select the type of value you want to create. The kinds of values you can make are explained in the previous section.
Give the new value a name by typing it in the box that appears.
Double-click on the new value to edit its data.
Enter the desired value data and click OK.
RENAMING REGISTRY KEY
Sometimes you may need to rename a Registry key, such as when fixing a typo.
To rename a Registry key, please follow these steps:
Navigate to the key you want to rename.
Right-click the key and select Rename.
Enter the new name for the key in the editable field.
Click anywhere to save the changes.
RENAMING A REGISTRY VALUE
Sometimes you may need to rename a Registry value, such as when fixing a typo.
To rename a Registry value, please follow these steps:
Right-click the value you want to rename in the right pane and select Rename.
Enter the new name for the value in the editable field.
Click anywhere to save the changes.
EDITING REGISTRY VALUE DATA
To make changes to a Registry value, follow these steps:
Double-click on the value you want to edit in the right pane.
The Edit dialog box will appear.
Enter the new data in the ‘Value data’ field.
Click OK to save your changes.
EXPORT A REGISTRY KEY
It is possible to export Registry keys and all their subkeys and values to a registry file.
A registry file ends with .reg, and when you double-click on one in Windows, it will import the data back into the Registry. Exporting a registry key helps create a backup before you make changes to the Registry. To export a key, follow these steps:
Right-click on the key you want to export in the left pane.
Select Export.
Choose a location to save the exported key and give it a name.
Click Save.
IMPORT A REGISTRY KEY
If you had previously exported a key, you can import its data back into the Registry using these steps:
Click on File -> Import.
Navigate to the location where you saved the exported key.
Select the key file and click Open.
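The same export and import operations can also be scripted with the built-in reg.exe tool from an administrative Command Prompt (the key path and file name below are just examples):

```batch
:: Back up a key, including all its subkeys and values, to a .reg file
reg export "HKCU\Software\ExampleApp" C:\Backups\ExampleApp.reg /y

:: Restore the backup later
reg import C:\Backups\ExampleApp.reg
```

This is handy for taking a quick backup before a scripted change, without opening the Registry Editor at all.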
DELETE A REGISTRY KEY
If you need to delete a Registry key, use the following steps.
Right-click on the key you want to delete in the left pane.
Select ‘Delete‘ from the context menu.
Confirm that you want to delete the key by clicking Yes.
Note: If you delete a key, all subkeys and values underneath it will be deleted too!
Note 2: You should only delete a registry key if you know it will not cause issues with your computer!
DELETE A REGISTRY VALUE
If you need to delete a Registry value, use the following steps.
Right-click on the value you want to delete in the right pane.
Select ‘Delete‘ from the context menu.
Confirm that you want to delete the value by clicking Yes.
Note: You should only delete a registry value if you know it will not cause issues with your computer!
When done using the Registry Editor, you can close it by clicking on the X in the Window or clicking on File > Exit.
With these steps, you should now be able to use the Windows Registry Editor to navigate and edit the Registry.
However, it is vital to be careful when making changes to the Registry, as incorrect changes can cause severe problems with your system.
Researchers are warning of threat actors increasingly abusing the Cloudflare Tunnel service in malware campaigns that usually deliver remote access trojans (RATs).
This cybercriminal activity was first detected in February, and it leverages the TryCloudflare free service to distribute multiple RATs, including AsyncRAT, GuLoader, VenomRAT, Remcos RAT, and Xworm.
Campaigns attributed to the same activity cluster Source: Proofpoint
The Cloudflare Tunnel service allows proxying traffic through an encrypted tunnel to access local services and servers over the internet without exposing IP addresses. This should come with added security and convenience because there is no need to open any public inbound ports or to set up VPN connections.
With TryCloudflare, users can create temporary tunnels to local servers and test the service without needing a Cloudflare account.
Each tunnel generates a temporary random subdomain on the trycloudflare.com domain, which is used to route traffic through Cloudflare’s network to the local server.
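For context, spinning up such a quick tunnel is a one-liner with Cloudflare’s cloudflared client (the local port here is arbitrary):

```shell
# Expose a local web server via an ephemeral trycloudflare.com subdomain;
# cloudflared prints the randomly generated URL on startup.
cloudflared tunnel --url http://localhost:8000
```

That low barrier to entry is precisely what makes the service attractive to both legitimate testers and the threat actors described below.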
In a report today, cybersecurity company Proofpoint says that it observed malware activity targeting law, finance, manufacturing, and technology organizations with malicious .LNK files hosted on the legitimate TryCloudflare domain.
The threat actors are luring targets with tax-themed emails with URLs or attachments leading to the LNK payload. When launched, the payload runs BAT or CMD scripts that deploy PowerShell.
Two attack chains used in the campaign Source: Proofpoint
In the final stage of the attack, Python installers are downloaded to deliver the payload.
Proofpoint reports that the email distribution wave that started on July 11 has distributed over 1,500 malicious messages, while an earlier wave from May 28 contained fewer than 50 messages.
Malicious email sample Source: Proofpoint
Hosting LNK files on Cloudflare offers several benefits, including making the traffic appear legitimate due to the service’s reputation.
Moreover, the TryCloudflare Tunnel feature offers anonymity, and the LNK-serving subdomains are temporary, so blocking them does not help defenders much.
Ultimately, the service is free and reliable, so the cybercriminals do not need to cover the cost of setting up their own infrastructure. If automation is employed to evade blocks from Cloudflare, the cybercriminals can abuse those tunnels even for large-scale operations.
BleepingComputer has reached out to Cloudflare for comment on the activity reported by Proofpoint, and a company representative replied with the following statement:
Cloudflare immediately disables and takes down malicious tunnels as they are discovered by our team or reported on by third parties.
In the past few years, Cloudflare has introduced machine learning detections on our tunnel product in order to better contain malicious activity that may occur.
We encourage Proofpoint and other security vendors to submit any suspicious URLs and we will take action against any customers that use our services for malware.
Operating in ‘kernel-space’ – the most privileged layer of an operating system, with direct access to memory, hardware, resource management, and storage – is vitally important for security products. It enables them to monitor ‘user-space’ – the non-privileged environment where applications run – and protect against malware that executes in that environment, even when it tries to evade detection. But kernel access also allows security products to counter more insidious threats within the kernel itself. As we’ve reported previously, for example, some threat actors use BYOVD (Bring Your Own Vulnerable Driver) attacks, or attempt to get their own malicious drivers cryptographically signed, in order to access kernel-space and take advantage of that elevated level of access.
However, from a security standpoint, working in kernel-space comes with its own risks. A wrong step in this environment – such as a bad update to a kernel driver – can cause outages. If the driver in question starts at boot time, when the operating system first loads, that can lead to prolonged impacts, potentially requiring affected hosts to be started in a recovery mode to mitigate the problem and allow the machines to boot normally.
Sophos’ Intercept X Advanced product uses five kernel drivers as of release 2024.2. All drivers are extensively tested* with applicable flags enabled and disabled, and shipped with new flags disabled. (Sophos Intercept X and Sophos Central use feature flags to gradually enable new features. Feature flags are deployed through Sophos Central. New features are typically ‘guarded’ by feature flags – turned off unless the flag is enabled – so that the feature can be rolled out gradually and potentially revised before wider enablement.)
In this article, in the interests of transparency, we’ll explore what those drivers are, what they do, when they start, how they’re signed, and what their inputs are. We’ll also cover some of the safeguards we put in place around these drivers to minimize the risk of disruption (such as staged rollouts, as mentioned above; we provide an example of this later in the article), and the options available to customers when it comes to configuring them. It’s also worth noting that Intercept X Advanced and all its components, including the kernel drivers, have been part of an external bug bounty program since December 14, 2017; we welcome scrutiny via external bug bounty submissions, and we foster a culture of collaboration with the research community.
* ‘Testing’ refers to a range of internal testing, including Microsoft-provided tools and verifiers
Overview
The following table provides an at-a-glance overview of the five kernel drivers that are part of Intercept X Advanced release 2024.2.

| Driver | Version | Type | Start type | Signed | Signature | Purpose |
|---|---|---|---|---|---|---|
| SophosEL.sys | — | ELAM Driver | Boot Start | Yes | ELAM* | Sophos ELAM driver: can prevent execution of malicious boot start drivers |
| SophosED.sys | 3.3.0.1727 | File System Driver | Boot Start | Yes | WHCP+ | The main Sophos anti-malware driver |
| Sntp.sys | 1.15.1121 | Network Filter Driver | System Start | Yes | WHCP+ | Sophos Network Threat Protection driver |
| Hmpalert.sys | 3.9.4.990 | File System Driver | System Start | Yes | WHCP+ | Sophos HitmanPro.Alert driver |
| SophosZtnaTap.sys | 9.24.6.3 | Network Filter Driver | On Demand | Yes | WHCP+ | Sophos Zero Trust Network Access (ZTNA) Tap driver |

Table 1: An overview of the kernel drivers in Intercept X Advanced 2024.2
* Microsoft Windows Early Launch Anti-malware Publisher
+ Microsoft Windows Hardware Compatibility Publisher
Figure 1: A conceptual depiction of user-space/kernel boundaries and where Intercept X Advanced components operate
SophosEL.sys
What it does: SophosEL.sys is the Sophos Early Launch Anti-Malware (ELAM) driver.
Inputs: This driver has one input – a blocklist of known-bad drivers that must be prevented from executing as boot start drivers at machine startup. This blocklist, stored in the registry, is set by Sophos user-space threat detection logic when it detects a malicious driver. At the next boot cycle, SophosEL.sys ensures that the driver is not loaded.
Customer options: Customers can configure remediation and allowed items in the Threat Protection policy from Sophos Central.
Additional measures: Any Microsoft- or Sophos-signed driver is exempt from cleanup/blocking.
SophosED.sys
What it does: SophosED.sys (Endpoint Defense) is a boot start driver, started during ELAM processing – before most other kernel drivers are loaded, before Windows user-space is initialized, and before the system drive is mounted. It has three broad responsibilities:
Providing tamper protection for the Sophos installation and configuration
Exposing system activity events to Sophos user-space components for protection and detection
Recording low-level system activity events to the Sophos Event Journals for after-the-fact forensics and analysis
Inputs: Since SophosED.sys starts before the filesystem is available, its entire configuration is provided through its service key. Note that all the below inputs are under HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense.
* The configuration password is hashed with PBKDF2-SHA512 and a salt
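The password-hashing scheme mentioned in that footnote can be sketched with Python’s standard library. Note that the iteration count and salt length below are illustrative assumptions, not Sophos-documented parameters:

```python
import hashlib
import os

# Sketch of PBKDF2-SHA512 password hashing with a random salt.
# NOTE: the iteration count (210_000) and 16-byte salt are assumptions
# for illustration, not values documented by Sophos.
password = b"example-config-password"
salt = os.urandom(16)
digest = hashlib.pbkdf2_hmac("sha512", password, salt, 210_000)

# Verification recomputes the derivation with the stored salt and
# compares the results.
check = hashlib.pbkdf2_hmac("sha512", password, salt, 210_000)
print(digest == check)  # -> True
print(len(digest))      # -> 64 (SHA-512 digest size in bytes)
```

Because only the salt and the derived digest need to be stored, the original password never has to be kept anywhere.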
System Activity Events inputs
The Sophos Central Threat Protection policy supports multiple configuration options, which Sophos user-space processes write to the SophosED.sys registry key, so that they’re available when the driver is loaded.
If a subkey exists with a DWORD value Enabled = 1, event journals are enabled
DACLs; Sophos Tamper Protected
Customer options: Customers can configure disk limits and manage exclusions in Sophos Central Threat Protection policy.
Additional measures: If a driver facility is available (based on a combination of Customer Policy plus Sophos flag), then Sophos user-space processes can configure various parameters at runtime:
A bitmask of mitigations to apply per-process
What events to enable or disable for each process
The amount of time the driver should wait for a response from user-space (or whether it should be an asynchronous notification).
Sntp.sys
What it does: Sntp.sys (Sophos Network Threat Protection) is a kernel driver that registers for various Windows Filtering Platform events to intercept and potentially modify network flow data. Depending on features enabled by Sophos Central Threat Protection and Web Control policies, different filters and callouts are registered.
Inputs: Feature configuration is communicated to the driver from one or more of the following user mode processes:
SophosNtpService.exe
SophosNetFilter.exe
SophosIPS.exe
SSPService.exe
User-space processes communicate with the driver via the Windows Driver Framework, using IOCTLs, Read, and Write. Communications to and from the driver are protected, with the driver accepting connections only from authorized, authentic Sophos processes.
Customer options: The filter driver intercepts network traffic by browser and non-browser processes based on the policies defined in Sophos Central. Processing of the intercepted traffic is performed in user-space by SophosNetFilter.exe and SophosIPS.exe, which may send modified content back to the driver (for example, to display a block page for malicious content).
Additional measures: Customers can add individual sites to their allow or block list in Sophos Central.
Hmpalert.sys
What it does: Hmpalert.sys enforces Sophos CryptoGuard, which detects and prevents bulk encryption of files by ransomware. It also configures what exploit mitigations are enforced as processes are executed.
Inputs: Hmpalert.sys has a number of inputs, including several registry subkeys and IOCTLs.
Customer options: Customers can enable/disable exploit mitigations and manage exclusions in Sophos Central Threat Protection policy.
Additional measures: N/A
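The bulk-encryption detection CryptoGuard performs rests on a statistical observation: encrypted output is close to random, so its byte entropy approaches the 8-bits-per-byte maximum, while typical documents score far lower. The toy heuristic below illustrates the idea only; it is not Sophos’ actual algorithm:

```python
import math
from collections import Counter

# Toy heuristic (not Sophos' CryptoGuard algorithm): Shannon entropy of
# a byte buffer, in bits per byte.
def shannon_entropy(data: bytes) -> float:
    counts = Counter(data)
    n = len(data)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

plaintext = b"the quick brown fox jumps over the lazy dog " * 100
random_like = bytes(range(256)) * 16  # uniform bytes: maximum entropy

print(round(shannon_entropy(plaintext), 2))    # well below the maximum
print(round(shannon_entropy(random_like), 2))  # -> 8.0
```

A real product combines signals like this with file-access patterns and rollback, since high entropy alone also matches legitimate compressed or encrypted files.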
SophosZtnaTap.sys
What it does: SophosZtnaTap.sys is a Sophos-built OpenVPN TAP driver. If the customer deploys the Sophos Zero Trust Network Access (ZTNA) agent, the driver intercepts DNS lookups for managed applications, and redirects traffic for those applications to the applicable Sophos ZTNA gateways. ZTNA applications and gateways are configured through Sophos Central policies and stored in the registry.
Inputs: SophosZtnaTap.sys takes its inputs from a registry subkey:
Customer policy (ZTNA Applications, Gateways, and certificates)
DACLs; Sophos Tamper Protected
Customer options: Customers can manage their ZTNA-protected applications and gateways from Sophos Central.
Additional measures: N/A
Gradual deployment example: CryptoGuard ExFAT
Sophos CryptoGuard has protected against bulk encryption on permanent disks for over a decade. Intercept X version 2024.1.1 introduced a new feature, CryptoGuard ExFAT, which extends this protection to ExFAT partitions (typically found on removable USB drives).
CryptoGuard ExFAT development and testing took place from September 2023 through March 2024. The feature was guarded by the flag ‘hmpa.cryptoguard-exfat.available’.
Sophos Engineering ran the software internally with the flag enabled (our ‘Dogfood release’) starting March 22, 2024.
Intercept X version 2024.1.1 was released first within Sophos, then to customers using our gradual software deployment process, between May 21 and June 6, 2024. At this stage the feature was still dormant for everyone apart from Sophos engineers.
The ‘hmpa.cryptoguard-exfat.available’ flag was enabled using our gradual flag enablement process, between June 10 and June 26, 2024.
Customer control
Customers can select a fixed software version (Endpoint Software Management: Fixed-term support, Long-term support); this locks the software and flags until the customer selects a different software package. Customers who use the ‘Sophos recommended’ option receive new software periodically. Aside from software rollouts, they also receive gradual feature-flag enablements for new features in the software, as with a normal software release. Sophos has evolved this process to improve stability and avoid enabling new events globally for all customers at once.
Conclusion
Kernel drivers are fundamental to the Intercept X Advanced product – and to robust Windows endpoint security, in general – but we also acknowledge that operating in kernel-space is not without its risks.
In this article, we’ve walked through the kernel drivers in Intercept X Advanced (as of release 2024.2), what they do, how they’re signed, what their inputs are, the control customers have over their management, and additional safeguards we’ve put in place – including gradual, phased rollouts of new features, and exemptions to minimize the risk of disruption.
While no safeguard can ever eliminate risk altogether, we wanted to share the details of our drivers in the interests of transparency, and to explain in depth how we approach the complex problem of trying to protect our customers from threats, in as safe a manner as possible.
BitTorrent is often characterized as a decentralized file-sharing technology. However, its reliance on centralized indexes runs contrary to this idea. Over the years, several ‘indestructible’ alternatives have been proposed, including the relatively new Bitmagnet software. With Bitmagnet, people can run their own private BitTorrent index, relying on DHT and the BEP51 protocol.
When Bram Cohen released the first version of BitTorrent in 2002, it sparked a file-sharing revolution.
At the time bandwidth was a scarce resource, making it impossible to simultaneously share large files with millions of people over the Internet. BitTorrent not only thrived in that environment, the protocol remains effective even to this day.
BitTorrent transfers rely on peer-to-peer file-sharing without a central storage location. With updated additions to the protocol, such as the BitTorrent Distributed Hash Table (DHT), torrent files no longer require a tracker server either, making it decentralized by nature.
In practice, it doesn’t always work like that, though. People who use BitTorrent, for research purposes or to grab the latest Linux distros, often use centralized search engines or indexes. If these go offline, the .torrent files they offer go offline too.
Decentralizing Torrents
This problem isn’t new and solutions have been around for quite a few years. There’s the University-sponsored Tribler torrent client, for example, and the BitTorrent protocol extension (BEP51), developed by ‘The 8472’, that also helps to tackle this exact problem.
BEP51 makes it possible to discover and collect infohashes through DHT, without the need for a central tracker. These infohashes can be converted to magnet links and when paired with relevant metadata, it’s possible to create a full BitTorrent index that easily rivals most centralized torrent sites.
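Converting a DHT-discovered infohash into a magnet link is the simple part; a minimal sketch follows. The helper function is hypothetical (not from any BitTorrent library), and the infohash is a made-up example:

```python
from urllib.parse import quote

# Hypothetical helper: build a magnet link from a 40-hex-character
# (SHA-1) infohash, optionally attaching a display name.
def magnet_from_infohash(infohash_hex, name=None):
    link = "magnet:?xt=urn:btih:" + infohash_hex.lower()
    if name:
        link += "&dn=" + quote(name)
    return link

# Made-up infohash, for illustration only.
print(magnet_from_infohash("C12FE1C06BBA254A9DC9F519B335AA7C1367A88A", "Example"))
# -> magnet:?xt=urn:btih:c12fe1c06bba254a9dc9f519b335aa7c1367a88a&dn=Example
```

The hard part an indexer handles is the other half: pairing each infohash with metadata (name, size, file list) fetched from peers so the result is searchable.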
Some centralized torrent sites, such as BTDigg, have already done just that. However, the beauty of the proposition involving DHT is that centralized sites are not required to act as search engines. With the right code, anyone can set up their own personalized and private DHT crawler, torrent index, and search engine.
Bitmagnet: A Private Decentralized Torrent Index
Bitmagnet is a relatively new self-hosted tool that does exactly that. The software, which is still in an early stage of development, was launched publicly a few months ago.
“The project aims to reduce reliance on public torrent sites that are prone to takedown and expose users to ads and malware,” Mike, the lead developer, tells us.
Those who know how to create a Docker container can have an instance up and running in minutes, and for the privacy-conscious, the docker-compose file on GitHub supports VPNs via Gluetun. Once Bitmagnet is up and running, it starts collecting torrent data from DHT, neatly classifies what it finds, and makes everything discoverable through its own search engine.
Bitmagnet UI
Decentralization is just one of the stated advantages. The developer was also positively surprised by the sheer amount of content that was discovered and categorized through Bitmagnet. This easily exceeds the libraries of most traditional torrent sites.
“Run it for a month and you’ll have a personal index and search engine that dwarfs the popular torrent websites, and includes much content that can often only be found on difficult-to-join private trackers,” Mike tells us.
After running the software for four months, the developer now has more than 12 million indexed torrents. However, other users with more bandwidth and better connections have many more already. This also brings us to one of the main drawbacks; a lack of curation.
Curation
Unlike well-moderated torrent sites, Bitmagnet adds almost any torrent it finds to its database. This includes mislabeled files, malware-ridden releases, and potentially illegal content. The software does, however, try to limit abuse by filtering metadata for CSAM content.
There are plans to add more curation through support for manual postings and federation. That would allow people with similar interests to connect, acting more like a trusted community. However, this is still a work in progress.
Another downside is that it could take longer to index rare content, as it has to be discovered first. Widely shared torrents tend to distribute quickly over DHT, but rare releases will take much longer to be picked up. In addition, users may occasionally stumble upon dead or incomplete torrents.
Thus far, these drawbacks are not stopping people from trying the software.
While Bitmagnet is only out as an “alpha” release, it’s getting plenty of interest. The Docker image has been downloaded nearly 25k times, and the repository has been starred by more than a thousand developers so far.
Caution is Advised!
Mike doesn’t know how many people are running an instance or how they’re using them. Bitmagnet is designed and intended for people to run on their own computer and network, but people could turn it into a public-facing search engine as well.
Running a public search engine comes with legal risks of course. Once there’s serious traffic, that will undoubtedly alert anti-piracy groups.
Even those who use the software privately to download legitimate content might receive complaints. By crawling the DHT, the software presents itself as a torrent client. While it doesn’t download any content automatically, some rudimentary anti-piracy tracking tools might still (incorrectly) flag this activity.
There are no examples of this happening at the moment, but the potential risk is why Bitmagnet advises users to opt for VPN routing.
Impossible to Shut Down
All in all, Bitmagnet is an interesting tool that uses some of BitTorrent’s underutilized powers; tools that do so have become increasingly rare in recent years.
The idea behind Bitmagnet is similar to Magnetico, which first came out in 2017. While that no longer appears to be actively maintained, it remains available on GitHub. During these years, we haven’t seen any takedown notices targeting the software.
Mike hopes that his project will be spared from copyright complaints too. The developer sees it simply as a content-neutral tool, much like a web browser.
“I hope that the project is immune from such issues, because the source code contains no copyright infringing material. How people choose to use the app is up to them – if you access copyrighted content using a web browser or BitTorrent client, that does not make the vendors of those apps liable.”
“Bitmagnet cannot be ‘taken down’ – even if the GitHub repository were threatened by an illegitimate takedown request, the code can easily be hosted elsewhere,” Mike concludes.
Researchers from Delft University of Technology plan to amplify their BitTorrent client “Tribler” with decentralized AI-powered search. A new demo shows that generative AI models make it possible to search for content in novel ways, without restriction. The ultimate goal of the research project is to shift the Internet’s power balance from governments and large corporations back to consumers.
Twenty-five years ago, peer-to-peer file-sharing took the Internet by storm.
The ability to search for and share content with complete strangers was nothing short of a revolution.
In the years that followed, media consumption swiftly moved online. This usually involved content shared without permission, but pirate pioneers ultimately paved the way for new business models.
The original ‘pirate’ ethos has long since gone. There are still plenty of unauthorized sites and services, but few today concern themselves with decentralization and similar technical advances; centralized streaming is the new king with money as the main motivator.
AI Meets BitTorrent
There are areas where innovation and technological progress still lead today, mostly centered around artificial intelligence. Every month, numerous new tools and services appear online, as developers embrace what many see as unlimited potential.
How these developments will shape the future is unknown, but they have many rightsholders spooked. Interestingly, an ‘old’ research group that was already active during BitTorrent’s heyday is now using AI to amplify its technology.
Researchers from the Tribler research group at Delft University of Technology have been working on their Tribler torrent client for nearly two decades. They decentralized search, removing the need for torrent sites, and implemented ‘anonymity‘ by adding an onion routing layer to file transfers.
Many millions of euros have been spent on the Tribler research project over the years. Its main goal is to advance decentralized technology, not to benefit corporations, but to empower the public at large.
“Our entire research portfolio is driven by idealism. We aim to remove power from companies, governments, and AI in order to shift all this power to self-sovereign citizens,” the Tribler team explains.
Decentralized AI-powered Search
While not every technological advancement has been broadly embraced yet, Tribler has just released a new paper and a proof of concept that the team sees as a turning point for decentralized AI implementations; one with a direct BitTorrent link.
The scientific paper proposes a new framework titled “De-DSI”, which stands for Decentralised Differentiable Search Index. Without going into technical details, this essentially combines decentralized large language models (LLMs), which can be stored by peers, with decentralized search.
This means that people can use decentralized AI-powered search to find content in a pool of information that’s stored across peers. For example, one can ask “find a magnet link for the Pirate Bay documentary,” which should return a magnet link for TPB-AFK, without mentioning it by name.
This entire process relies on information shared by users. There are no central servers involved at all, making it impossible for outsiders to control.
Endless Possibilities, Limited Use
While this sounds exciting, the current demo version is not yet built into the Tribler client. Associate Professor Dr. Johan Pouwelse, leader of the university’s Tribler Lab, explains that it’s just a proof of concept with a very limited dataset and AI capabilities.
“For this demo, we trained an end-to-end generative Transformer on a small dataset that comprises YouTube URLs, magnet links, and Bitcoin wallet addresses. Those identifiers are each annotated with a title and represent links to movie trailers, CC-licensed music, and BTC addresses of independent artists,” Pouwelse says.
We tried some basic searches with mixed results. That makes sense since there’s only limited content, but it can find magnet links and videos without directly naming the title. That said, it’s certainly not yet as powerful as other AI tools.
In essence, De-DSI operates by sharing the workload of training large language models on lists of document identifiers. Every peer in the network specializes in a subset of data, which other peers in the network can retrieve to come up with the best search result.
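That division of labor can be pictured with a toy sketch: each “peer” below holds a small index shard, every peer scores the query, and the best-scoring answer wins. Real De-DSI shards a trained generative model across peers; the string-similarity scoring, titles, and magnet links here are invented purely for illustration:

```python
from difflib import SequenceMatcher

# Each "peer" holds a shard of the index. Titles and magnet links are
# invented placeholders, not real torrents.
peers = [
    {"TPB-AFK documentary": "magnet:?xt=urn:btih:" + "a" * 40},
    {"CC-licensed music album": "magnet:?xt=urn:btih:" + "b" * 40},
]

def query(q):
    # Ask every peer, score each candidate title against the query,
    # and return the link with the highest score.
    candidates = [
        (SequenceMatcher(None, q.lower(), title.lower()).ratio(), link)
        for peer in peers
        for title, link in peer.items()
    ]
    return max(candidates)[1]

print(query("pirate bay documentary"))  # returns the TPB-AFK magnet link
```

The query never names TPB-AFK directly, yet the closest shard still resolves it; in De-DSI the same matching is done by a generative model rather than string similarity.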
A Global Human Brain to Fight Torrent Spam and Censors
The proof of concept shows that the technology is sound. However, it will take some time before it’s integrated into the Tribler torrent client. The current goal is to have an experimental decentralized-AI version of Tribler ready at the end of the year.
While the researchers see this as a technological breakthrough, it doesn’t mean that things will improve for users right away. AI-powered search will be slower to start with and, if people know what they’re searching for, it offers little benefit.
Through trial and error, the researchers ultimately hope to improve things though, with a “global brain” for humanity as the ultimate goal.
Most torrent users are not looking for that at the moment, but Pouwelse says that decentralized machine learning could also be used to fight spam, offer personal recommendations, and optimize torrent metadata; these are concrete and immediately usable applications.
The main drive of the researchers is to make technology work for the public at large, without the need for large corporations or a central government to control it.
“The battle royale for Internet control is heating up,” Pouwelse says, in a Pirate Bay-esque fashion.
“Driven by our idealism we will iteratively take away their power and give it back to citizens. We started 18 years ago and will take decades more. We should not give up on fixing The Internet, just because it is hard.”
The very limited De-DSI proof of concept and all related code is available on Huggingface. All technological details are available in the associated paper. The latest Tribler version, which is fully decentralized without AI, can be found on the official project page.
Attackers can hide their attempts to execute malicious code by inserting commands into the machine code stored in memory by the software interpreters used by many programming languages, such as VBScript and Python, a group of Japanese researchers will demonstrate at next week’s Black Hat USA conference.
Interpreters take human-readable software code and translate each line into bytecode — granular programming instructions understood by the underlying, often virtual, machine. The research team successfully inserted malicious instructions into the bytecode held in memory prior to execution, and because most security software does not scan bytecode, their changes escaped detection.
The technique could allow attackers to hide their malicious activity from most endpoint security software. Researchers from NTT Security Holdings Corp. and the University of Tokyo will demonstrate the capability at Black Hat using the VBScript interpreter, says Toshinori Usui, research scientist with NTT Security. The researchers have already confirmed that the technique also works for inserting malicious code in the in-memory processes of both the Python and the Lua interpreters.
“Malware often hides its behavior by injecting malicious code into benign processes, but existing injection-type attacks have characteristic behaviors … which are easily detected by security products,” Usui says. “The interpreter does not care about overwriting by a remote process, so we can easily replace generated bytecode with our malicious code — it’s that feature we exploit.”
Bytecode attacks are not entirely new, but they remain relatively rare. In 2018, a group of researchers from the University of California at Irvine published a paper, “Bytecode Corruption Attacks Are Real — And How to Defend Against Them,” introducing bytecode attacks and defenses. Last year, the administrators of the Python Package Index (PyPI) removed a malicious package, known as fshec2, which escaped initial detection because all its malicious code was compiled as bytecode. Python compiles its bytecode into PYC files, which can be executed by the Python interpreter.
“It may be the first supply chain attack to take advantage of the fact that Python byte code (PYC) files can be directly executed, and it comes amid a spike in malicious submissions to the Python Package Index,” Karlo Zanki, reverse engineer at ReversingLabs, said in a June 2023 analysis of the incident. “If so, it poses yet another supply chain risk going forward, since this type of attack is likely to be missed by most security tools, which only scan Python source code (PY) files.”
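The fact that PYC files are directly executable is easy to demonstrate: compile a script to a .pyc, delete the source, and the interpreter still runs it happily. The file names and payload below are illustrative:

```python
import os
import py_compile
import subprocess
import sys
import tempfile

with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "payload.py")
    with open(src, "w") as f:
        f.write('print("executed from bytecode")\n')
    pyc = os.path.join(d, "payload.pyc")
    # Compile the source to bytecode, then remove the .py file so only
    # the compiled artifact remains on disk.
    py_compile.compile(src, cfile=pyc)
    os.remove(src)
    # The interpreter executes the .pyc directly; no source is needed.
    out = subprocess.run([sys.executable, pyc], capture_output=True, text=True)

print(out.stdout.strip())  # -> executed from bytecode
```

A scanner that only inspects .py files never sees the payload, which is exactly the gap fshec2 exploited.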
Going Beyond Precompiled Malware
After an initial compromise, attackers have a few options to expand their control of a targeted system: They can perform reconnaissance, try to further compromise the system using malware, or run tools already existing on the system — the so-called strategy of “living off the land.”
The NTT researchers’ variation of bytecode attack techniques essentially falls into the last category. Rather than using pre-compiled bytecode files, their attack — dubbed Bytecode Jiu-Jitsu — involves inserting malicious bytecode into the memory space of a running interpreter. Because most security tools do not look at bytecode in memory, the attack is able to hide the malicious commands from inspection.
The approach allows attackers to skip other, more obviously malicious steps, such as calling suspicious APIs to create threads, allocating executable memory, and modifying instruction pointers, Usui says.
“While native code has instructions directly executed by the CPU, bytecode is just data to the CPU and is interpreted and executed by the interpreter,” he says. “Therefore, unlike native code, bytecode does not require execution privilege, [and our technique] does not need to prepare a memory region with execution privilege.”
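Python itself offers a user-space analogy for how pliable bytecode is: swapping one function’s code object for another changes what executes, with no executable memory allocated and no thread-creation API called. This is an illustration of the principle only, not the researchers’ technique, which overwrites bytecode in a running interpreter from a remote process:

```python
def benign():
    # Original, harmless behavior.
    return "benign result"

def injected():
    # Stand-in for attacker-supplied logic.
    return "injected result"

# To the CPU, bytecode is just data; the interpreter runs whatever code
# object the function currently points at.
benign.__code__ = injected.__code__
print(benign())  # -> injected result
```

Nothing about `benign`'s name or call sites changed, which is why inspection of the original source reveals nothing.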
Better Interpreter Defenses
Developers of interpreters, security-tools developers, and operating-system architects can all have some impact on the problem. Attacks targeting bytecode exploit not vulnerabilities in interpreters but the way interpreters execute code; even so, certain security modifications, such as pointer checksums, could mitigate the risk, according to the UC Irvine paper.
The NTT Security researchers noted that checksum defenses would not likely be effective against their techniques and recommend that developers enforce write protections to help eliminate the risk. “The ultimate countermeasure is to restrict the memory write to the interpreter,” Usui says.
The purpose of presenting a new attack technique is to show security researchers and defenders what could be possible, and not to inform attackers’ tactics, he stresses. “Our goal is not to abuse defensive tactics, but to ultimately be an alarm bell for security researchers around the world,” he says.
China-linked advanced persistent threat group APT41 appears to have compromised a government-affiliated institute in Taiwan that conducts research on advanced computing and associated technologies.
The intrusion began in July 2023, with the threat actor gaining initial access to the victim environment via undetermined means. Since then, it has deployed multiple malware tools, including the well-known ShadowPad remote access Trojan (RAT), the Cobalt Strike post compromise tool, and a custom loader for injecting malware using a 2018 Windows remote code execution vulnerability (CVE-2018-0824).
APT41 is an attribution that several vendors use to track a loose collective of China-nexus threat groups that have been engaged in a broad range of cyber espionage and financially motivated cyberattacks around the world, going back to 2012. Members of the group such as Wicked Panda, Winnti, Barium, and SuckFly have plundered and pillaged trade secrets, intellectual property, and other sensitive data from organizations in the US and multiple other countries in recent years.
Most recently, Mandiant reported observing members of the group targeting global shipping and logistics companies and organizations in the technology, entertainment, and automotive sectors. The US government indicted several members of the Chengdu-based APT41 in 2020, though that has done little to slow it down.
Academic Research: A Valuable Cyber Target
Researchers at Cisco Talos discovered the intrusion when investigating abnormal activity involving attempts to download and execute PowerShell scripts in the Taiwan research institute’s network environment last year.
“The nature of research-and-development work carried out by the entity makes it a valuable target for threat actors dedicated to obtaining proprietary and sensitive technologies of interest to them,” Talos researchers Joey Chen, Ashley Shen, and Vitor Ventura said in a report this week. Over the course of the intrusion, APT41 actors broke into three systems in the target environment and stole at least some documents from there, they said.
ShadowPad is malware that researchers first discovered embedded in the source code of NetSarang Computer’s Xmanager server management software back in 2017. That supply chain attack impacted several NetSarang customers in the APAC region. Initially, researchers believed that APT41 was the sole user of the backdoor. Over the years however, they have identified multiple groups — all of them China-linked — that have used the RAT in numerous cyber-espionage campaigns and software supply chain attacks.
With the attack on the Taiwanese research institute, APT41 used two different ShadowPad iterations — one that leveraged a previously known packing mechanism called “ScatterBee,” and another that used an outdated and vulnerable version of Microsoft Input Method Editors (IME), the Cisco Talos researchers said.
ShadowPad & Cobalt Strike Anchor Espionage Effort
The attackers used ShadowPad to run commands for mapping out the victim network, collecting data on hosts, and trying to find other exploitable systems on the same network. Cisco Talos also found the APT harvesting passwords and user credentials stored in Web browsers from the compromised environment, using tools such as Mimikatz and WebBrowserPassView.
“From the environment the actor executes several commands, including using ‘net,’ ‘whoami,’ ‘quser,’ ‘ipconfig,’ ‘netstat,’ and ‘dir’ commands to obtain information on user accounts, directory structure, and network configurations from the compromised systems,” the researchers said. “In addition, we also observed query to the registry key to get the current state of software inventory collection on the system.”
As part of their attack chain, the threat actors also deployed the Cobalt Strike post compromise tool on the victim network using a loader they cloned from a GitHub project. It’s designed to evade antivirus detection tools.
“It’s important to highlight that this Cobalt Strike beacon shellcode used steganography to hide in a picture and executed by this loader,” the researchers said. “In other words, its download, decryption, and execution routines all happen in runtime in memory.”
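The general steganography idea, hiding payload bits in the low-order bits of innocuous cover data, can be sketched in a few lines. This is a toy illustration; real loaders embed encrypted shellcode in image pixel data and decrypt it at runtime:

```python
# Toy LSB steganography: embed() hides each payload bit in the
# least-significant bit of one cover byte; extract() reads them back.
def embed(cover: bytearray, payload: bytes) -> bytearray:
    bits = "".join(f"{b:08b}" for b in payload)
    assert len(bits) <= len(cover), "cover too small for payload"
    for i, bit in enumerate(bits):
        cover[i] = (cover[i] & 0xFE) | int(bit)
    return cover

def extract(cover, n_bytes: int) -> bytes:
    bits = "".join(str(b & 1) for b in cover[: n_bytes * 8])
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

cover = bytearray(range(256)) * 2  # stand-in for image pixel bytes
hidden = embed(cover, b"payload")
print(extract(hidden, 7))  # -> b'payload'
```

Because flipping the low bit barely changes each byte, the carrier file still looks like an ordinary image to casual inspection and to scanners that only check file type.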
A Fortune 50 company paid $75 million to its cyberattackers earlier this year, greatly exceeding any other confirmed ransom payment in history. The beneficiary of the payout is an outfit called Dark Angels. And Dark Angels isn’t just effective — in some ways, the gang turns so much of what we thought we knew about ransomware on its head.
Sure, there have been other big amounts forked over in the past: In 2021, Illinois-based CNA Financial was reported to have paid a then unprecedented $40 million ransom in order to restore its systems after a ransomware attack (the company never confirmed that figure). Later that year, the meat manufacturer JBS admitted to paying $11 million to end a disruption affecting its factories. Caesars Palace last year paid $15 million to make its ransomware disruption problems go away.
But those figures pale in comparison against the $75 million in equivalent Bitcoin paid by the aforementioned large organization, which Zscaler chose to keep anonymous in its 2024 annual ransomware report, where the payout was first recorded. The dollar amount has also been corroborated by Chainalysis.
Meet the Dark Angels
Dark Angels first appeared in the wild in May 2022. Ever since, its specialty has been hitting fewer but higher-value targets than its ransomware brethren. Past victims have included multiple S&P 500 companies spread across varied industries: healthcare, government, finance, education, manufacturing, telecommunications, and more.
For example, there was its headline-grabbing attack on the giant Johnson Controls International (JCI) last year. The group breached the company’s VMware ESXi hypervisors, freezing them with Ragnar Locker and stealing a reported 27 terabytes of data. The ransom demand: $51 million. It’s unclear how Johnson Controls responded, but considering its $27 million-plus cleanup effort, it’s likely that the company did not cave.
A $51 million payment would have been the largest ransom in recorded history at the time, well above the reported CNA payment. But there’s evidence to suggest that this wasn’t just some outlandish negotiating tactic, and that Dark Angels has good reason to think it can pull off that kind of haul.
Dark Angels Does Ransomware Differently
Forget everything you know about ransomware, and you’ll start to understand Dark Angels.
Against the grain, the group does not operate a ransomware-as-a-service business. Nor does it have its own malware strain — it prefers to borrow encryptors like Ragnar Locker and Babuk.
Its success instead comes down to three primary factors. First: the extra care it can take by attacking fewer, higher-yielding targets.
Second is its ability to exfiltrate gobs of sensitive data. As Brett Stone-Gross, senior director of threat intelligence at Zscaler, explains: “If you look at a lot of these other ransomware groups, their affiliates are stealing maybe a few hundred gigabytes of data. Sometimes even less than 100 gigabytes of data. They usually top out around, maybe, one terabyte or so. In contrast, Dark Angels are stealing tens of terabytes of data.”
In that, Dark Angels differs only in degree, not in kind. Where it really separates itself from other groups is in its subtlety. Its leak site isn’t flashy. It doesn’t make grand pronouncements about its latest victims. Besides the obvious operational security benefits to stealth (it’s largely escaped media scrutiny in recent years, despite pulling off major breaches), its aversion to the limelight also helps it earn larger returns on investment.
For example, the group often avoids encrypting victims’ data, with the express purpose of allowing them to continue to operate without disruption. This seems to defy common wisdom. Surely the threat of downtime and media scrutiny are effective tools to get victims to pay up?
“You would think that, but the results say otherwise,” Stone-Gross suggests.
Dark Angels makes paying one’s ransom easy and quiet — an attractive prospect for companies that just want to put their breaches behind them. And avoiding business disruption is mutually beneficial: Without the steep bills associated with downtime, companies have more money to pay Dark Angels.
Can Dark Angels’ Wings Be Clipped?
In its report, Zscaler predicted “that other ransomware groups will take note of Dark Angels’ success and may adopt similar tactics, focusing on high value targets and increasing the significance of data theft to maximize their financial gains.”
If that should come to pass, companies will face much steeper, yet more compelling ransom demands. Luckily, Dark Angels’ approach has an Achilles’ heel.
“If it’s a terabyte of data, [a hacker] can probably complete that transfer in several days. But when you’re talking terabytes — you know, tens of terabytes of data — now you’re talking weeks,” Stone-Gross notes. So, companies that can catch Dark Angels in the act may be able to stop them before it’s too late.
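Stone-Gross’s point is easy to check with back-of-envelope arithmetic. Assuming a sustained 100 Mbps upload path (the link speed is an assumption for illustration; real exfiltration speeds vary widely), a quick sketch of the transfer times:

```python
# Back-of-envelope exfiltration times, assuming a sustained 100 Mbps link
# (the link speed is an illustrative assumption, not a reported figure).
LINK_BPS = 100_000_000              # 100 Mbps in bits per second

def transfer_days(terabytes: float) -> float:
    bits = terabytes * 1e12 * 8     # decimal TB -> bits
    return bits / LINK_BPS / 86_400 # seconds -> days

print(f" 1 TB: {transfer_days(1):5.1f} days")   # under a day at full speed
print(f"10 TB: {transfer_days(10):5.1f} days")  # over a week
print(f"50 TB: {transfer_days(50):5.1f} days")  # well over a month
```

Even under generous assumptions, tens of terabytes take weeks to move, which is the detection window defenders can exploit.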
The U.S. Department of Justice (DoJ), along with the Federal Trade Commission (FTC), filed a lawsuit against popular video-sharing platform TikTok for “flagrantly violating” children’s privacy laws in the country.
The agencies claimed the company knowingly permitted children to create TikTok accounts and to view and share short-form videos and messages with adults and others on the service.
They also accused it of illegally collecting and retaining a wide variety of personal information from these children without notifying or obtaining consent from their parents, in contravention of the Children’s Online Privacy Protection Act (COPPA).
TikTok’s practices also infringed a 2019 consent order between the company and the government in which it pledged to notify parents before collecting children’s data and remove videos from users under 13 years old, they added.
COPPA prohibits online platforms from gathering, using, or disclosing personal information from children under the age of 13 unless they have obtained consent from their parents. It also mandates that companies delete all the collected information at the parents’ request.
“Even for accounts that were created in ‘Kids Mode‘ (a pared-back version of TikTok intended for children under 13), the defendants unlawfully collected and retained children’s email addresses and other types of personal information,” the DoJ said.
“Further, when parents discovered their children’s accounts and asked the defendants to delete the accounts and information in them, the defendants frequently failed to honor those requests.”
The complaint further alleged the ByteDance-owned company subjected millions of children under 13 to extensive data collection that enabled targeted advertising and allowed them to interact with adults and access adult content.
It also faulted TikTok for not exercising adequate due diligence during account creation. The platform built backdoors that let children bypass the age gate meant to screen out those under 13, for instance by signing in with third-party services such as Google and Instagram, and it classified such accounts as “age unknown” accounts.
“TikTok human reviewers allegedly spent an average of only five to seven seconds reviewing each account to make their determination of whether the account belonged to a child,” the FTC said, adding it will take steps to protect children’s privacy from firms that deploy “sophisticated digital tools to surveil kids and profit from their data.”
TikTok has more than 170 million active users in the U.S. While the company has disputed the allegations, it’s the latest setback for the video platform, which is already the subject of a law that would force a sale or a ban of the app by early 2025 because of national security concerns. It has filed a petition in federal court seeking to overturn the ban.
“We disagree with these allegations, many of which relate to past events and practices that are factually inaccurate or have been addressed,” TikTok said. “We offer age-appropriate experiences with stringent safeguards, proactively remove suspected underage users, and have voluntarily launched features such as default screen time limits, Family Pairing, and additional privacy protections for minors.”
The social media platform has also faced scrutiny globally over child protection. European Union regulators handed TikTok a €345 million fine in September 2023 for violating data protection laws in relation to its handling of children’s data. In April 2023, it was fined £12.7 million by the U.K. Information Commissioner’s Office (ICO) for illegally processing the data of 1.4 million children under 13 who were using its platform without parental consent.
The lawsuit comes as the ICO revealed it has asked 11 media and video-sharing platforms to improve their children’s privacy practices or risk facing enforcement action. The names of the offending services were not disclosed.
“Eleven out of the 34 platforms are being asked about issues relating to default privacy settings, geolocation or age assurance, and to explain how their approach conforms with the [Children’s Code],” it said. “We are also speaking to some of the platforms about targeted advertising to set out expectations for changes to ensure practices are in line with both the law and the code.”
Cybersecurity researchers have disclosed details of a new distributed denial-of-service (DDoS) attack campaign targeting misconfigured Jupyter Notebooks.
The activity, codenamed Panamorfi by cloud security firm Aqua, utilizes a Java-based tool called mineping to launch a TCP flood DDoS attack. Mineping is a DDoS package designed for Minecraft game servers.
Attack chains entail the exploitation of internet-exposed Jupyter Notebook instances to run wget commands for fetching a ZIP archive hosted on a file-sharing site called Filebin.
The ZIP file contains two Java archive (JAR) files, conn.jar and mineping.jar, with the former used to establish connections to a Discord channel and trigger the execution of the mineping.jar package.
“This attack aims to consume the resources of the target server by sending a large number of TCP connection requests,” Aqua researcher Assaf Morag said. “The results are written to the Discord channel.”
The attack campaign has been attributed to a threat actor who goes by the name yawixooo, whose GitHub account has a public repository containing a Minecraft server properties file.
This is not the first time internet-accessible Jupyter Notebooks have been targeted by adversaries. In October 2023, a Tunisian threat actor dubbed Qubitstrike was observed breaching Jupyter Notebooks in an attempt to illicitly mine cryptocurrency and breach cloud environments.
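Since campaigns like Panamorfi depend on notebooks exposed to the internet without authentication, a minimal hardening sketch for the classic Jupyter Notebook server is worth noting (option names below are from the classic `NotebookApp`; newer `jupyter-server` deployments use the equivalent `ServerApp` options):

```python
# ~/.jupyter/jupyter_notebook_config.py -- minimal hardening sketch for the
# classic Jupyter Notebook server. Config fragment; loaded by Jupyter itself.
c = get_config()  # provided by Jupyter when it loads this file

c.NotebookApp.ip = "127.0.0.1"           # bind to localhost, not all interfaces
c.NotebookApp.allow_remote_access = False
c.NotebookApp.password_required = True   # require a hashed password to log in
c.NotebookApp.open_browser = False
```

Binding to localhost alone removes the entry point this campaign relies on; remote access, where needed, is better provided through SSH tunneling or a reverse proxy with its own authentication.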
Free Download Burp Suite Professional for Windows PC, a reliable and practical platform that provides a simple means of performing security testing of web applications.
It gives you complete control, combining advanced manual techniques with various tools that seamlessly support the entire testing process.
Burp Suite Professional Overview
Burp Suite Professional is easy to use and intuitive, and does not require you to perform advanced actions to analyze, scan, and exploit web apps. It is highly configurable and has useful features to assist experienced testers with their work.
The main window displays all the available tools, and you can configure each one’s settings the way you want. The application is designed to work alongside your browser and functions as an HTTP proxy, so all HTTP(S) traffic from your browser passes through the utility. If you want to perform any testing, you need to configure your browser to work with it.
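The same proxying works for any HTTP client, not just browsers. Burp’s proxy listener defaults to 127.0.0.1:8080 (configurable); a minimal sketch using Python `requests` conventions shows the shape of the configuration, with no request actually sent here:

```python
# Routing an HTTP client through a local intercepting proxy such as Burp.
# 127.0.0.1:8080 is Burp's default listener address; adjust if yours differs.
BURP_PROXY = "http://127.0.0.1:8080"

proxies = {
    "http": BURP_PROXY,
    "https": BURP_PROXY,  # HTTPS is tunneled through the proxy via CONNECT
}

# With the `requests` library the call would look like:
#   requests.get("https://example.com", proxies=proxies, verify="burp-ca.pem")
# where `verify` points at Burp's exported CA certificate so TLS interception
# does not trigger certificate errors.
assert proxies["https"].endswith(":8080")
```

Trusting Burp’s CA certificate (exported from the Proxy settings) is what removes the browser security warnings mentioned in the feature list below.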
Features of Burp Suite Professional
Coverage of over 100 generic vulnerabilities
Cutting-edge web application crawler that accurately maps content and functionality
Burp Scanner includes a full JavaScript analysis engine
Detects server-side vulnerabilities that are completely invisible
Project files to save your work incrementally in real time
Export beautifully formatted HTML reports of discovered vulnerabilities
Configuration library to quickly launch targeted scans with different settings
Optionally report all reflected and stored inputs
Burp Proxy allows manual testers to intercept all requests and responses
HTML5 WebSockets messages are intercepted and logged to a separate history
The Decoder tool converts data between common encoding schemes and formats used on the modern web
The Clickbandit tool generates working clickjacking attacks against vulnerable application functions
Burp helps eliminate browser security warnings
Burp supports invisible proxying for non-proxy-aware clients
System Requirements and Technical Details
Supported OS: Windows 11 / Windows 10 / Windows 8.1 / Windows 7
Processor: Multi-core Intel Series or above, Xeon or AMD equivalent
Free Download PrivacyRoot Secure Delete Professional full version standalone offline installer for Windows; it is an effective tool to erase files and folders securely.
Overview of Secure Delete Professional
This is a straightforward but very effective tool to erase files and folders securely. After using this tool, the selected files and folders will not be available for recovery. Several security options let you choose the level of security you need; higher levels take longer to process.
You can use this application in three ways. First, you can drag and drop the files or folders you want to delete onto the software’s main window.
Second, you can choose the needed files or folders by clicking the “Add files” link. Or, third, you can right-click any file or folder in Windows Explorer and, in the popup menu, choose “Send to” and then “Secure Deleter.” That’s easy!
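The underlying technique, overwriting a file’s contents in place before removal, can be sketched as follows. This is a generic illustration, not Secure Delete Professional’s own implementation, and note that SSDs and journaling filesystems may retain copies that an in-place overwrite cannot reach:

```python
# Generic sketch of secure deletion by overwriting: the file's bytes are
# replaced with random data for several passes before the file is removed.
import os
import secrets

def secure_delete(path: str, passes: int = 3) -> None:
    size = os.path.getsize(path)
    with open(path, "r+b") as fh:
        for _ in range(passes):
            fh.seek(0)
            fh.write(secrets.token_bytes(size))  # overwrite with random bytes
            fh.flush()
            os.fsync(fh.fileno())                # force the write to disk
    os.remove(path)

# Usage: create a scratch file, then wipe it.
with open("scratch.bin", "wb") as fh:
    fh.write(b"sensitive data")
secure_delete("scratch.bin")
assert not os.path.exists("scratch.bin")
```

The number of passes is the “security level” trade-off the blurb describes: more passes give stronger assurance against residual data at the cost of proportionally longer processing time.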
Features of Secure Delete Professional
Securely erases files
Convenient interface
Security guaranteed
Light on resources
Technical Details and System Requirements
Supported OS: Windows 11, Windows 10, Windows 8.1, Windows 7
Free Download Heidi Eraser latest version standalone offline installer for Windows; it removes sensitive data by overwriting it several times.
Overview of Eraser
This program is an advanced security tool for Windows that allows you to remove sensitive data from your hard drive by overwriting it several times with carefully selected patterns.
It is currently supported under Windows XP (with Service Pack 3), Windows Server 2003 (with Service Pack 2), Windows Vista, Windows Server 2008, Windows 7, 8, 10 and Windows Server 2012, 2016.
Features of Eraser
Erases residue from deleted files
Erases MFT and MFT-resident files (for NTFS volumes) and Directory Indices (for FAT)
Free Download Fort Firewall’s latest version standalone offline installer for Windows. It is a simple firewall for Windows with lots of rich features.
Overview of Fort Firewall
Protecting your computer against threats on the internet usually means filtering internet access more strictly. What better way to do that than with a firewall? Windows comes with a firewall by default.
If you’re not satisfied with it and are looking for more ways to control and monitor your PC’s access to the internet, Fort Firewall could be the way to go.
As far as features are concerned, you get the complete package: filter content by network addresses, group the apps on your PC, and limit internet access, speed, or usage for specific applications.
Features of Fort Firewall
Filters by network addresses and application groups
Supports wildcards in program path names
Application group speed limits
Stores traffic statistics
Graphical display of bandwidth
Based on the Windows Filtering Platform (WFP)
Technical Details and System Requirements
Supported OS: Windows 11, Windows 10, Windows 8.1, Windows 7
Free Download Antivirus Live CD full version standalone offline installer for Windows. It is an official 4MLinux fork, including the ClamAV scanner.
Overview of Antivirus Live CD
This software is designed for users who need a lightweight live CD to protect their computers against viruses. It automatically updates its virus signature databases over Ethernet, Wi-Fi, PPP, and PPPoE.
All partitions are mounted during the boot process to be scanned by ClamAV. The Antivirus Live CD ISO images are fully compatible with UNetbootin, which can be used to create an easy-to-use Antivirus Live USB.
Features of Antivirus Live CD
An official 4MLinux fork
Including ClamAV scanner
A lightweight live CD
Protect computers against viruses
Fully compatible with UNetbootin
Technical Details and System Requirements
Supported OS: Windows 11, Windows 10, Windows 8.1, Windows 7
Free Download InterCrypto Advanced Encryption Package for Windows PC. It is a robust software solution designed to secure your confidential documents. With a range of powerful encryption algorithms, this tool ensures your sensitive data remains protected from unauthorized access. Whether dealing with personal files or professional documents, this package offers advanced security features to keep your information safe.
Overview of InterCrypto Advanced Encryption Package
It is designed for those who prioritize data security. It employs a variety of robust encryption algorithms to provide comprehensive protection for your files. This tool is essential for anyone needing to secure sensitive information against potential threats.
Powerful Encryption Algorithms
It stands out due to its use of multiple encryption algorithms. It supports well-known standards such as AES (Advanced Encryption Standard), Blowfish, Twofish, GOST, and Serpent. Each algorithm offers a unique level of security, making it possible to tailor the encryption process to your specific needs.
AES Encryption: AES is renowned for its strong security and efficiency. It is widely used for encrypting sensitive data and is a standard in the industry.
Blowfish Encryption: Known for its speed and effectiveness, Blowfish is a versatile encryption algorithm suitable for various applications.
Twofish Encryption: As a successor to Blowfish, Twofish provides enhanced security features and is ideal for encrypting larger data sets.
GOST Encryption: This Russian encryption standard is notable for its robustness and is used in various government and military applications.
Serpent Encryption: Offering a high level of security, Serpent is known for its resistance to cryptanalysis.
User-Friendly Interface
Despite its advanced features, the tool has a user-friendly interface. It allows beginners and experienced users to navigate its functions easily, ensuring you can encrypt and decrypt files without extensive technical knowledge.
File Encryption and Decryption
With this software, encrypting and decrypting files is straightforward. Users can select individual files or entire directories for encryption. The decryption process is equally simple, making it easy to access your data when needed. This functionality ensures that you can manage your encrypted files efficiently.
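As a generic illustration of this encrypt/decrypt workflow (not InterCrypto’s own implementation), here is a file roundtrip using the third-party `cryptography` package’s Fernet recipe, which layers AES in CBC mode with an HMAC for integrity:

```python
# Generic file encryption/decryption roundtrip using the third-party
# `cryptography` package (Fernet). Illustrates the workflow only; this is
# not InterCrypto's implementation.
from cryptography.fernet import Fernet

def encrypt_file(src: str, dst: str, key: bytes) -> None:
    with open(src, "rb") as fh:
        data = fh.read()
    with open(dst, "wb") as fh:
        fh.write(Fernet(key).encrypt(data))

def decrypt_file(src: str, dst: str, key: bytes) -> None:
    with open(src, "rb") as fh:
        token = fh.read()
    with open(dst, "wb") as fh:
        fh.write(Fernet(key).decrypt(token))

key = Fernet.generate_key()   # store safely: losing the key loses the data
with open("doc.txt", "wb") as fh:
    fh.write(b"confidential")
encrypt_file("doc.txt", "doc.enc", key)
decrypt_file("doc.enc", "doc.out", key)
assert open("doc.out", "rb").read() == b"confidential"
```

The key-management point in the comment is the practical crux of any such tool: the algorithm choice matters less than keeping the key both secret and recoverable.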
Customization Options
One critical feature of the tool is its customization options. You can choose various encryption algorithms and settings to match your security requirements. This flexibility allows you to adjust the protection level based on your data’s sensitivity.
Compatibility and Performance
The software is compatible with various Windows operating systems, ensuring broad accessibility. It performs efficiently, even with large files, without causing significant delays, making it suitable for personal and professional use.
Technical Details and System Requirements
Free Hard Disk Space: 200MB or more is recommended
Conclusion
InterCrypto Advanced Encryption Package is a powerful tool for securing sensitive data. Its array of strong encryption algorithms and user-friendly interface offer robust protection and ease of use. This software ensures your confidential documents remain safe from unauthorized access, whether for personal or professional purposes.