
Admiration Tech News


Open Source or Closed? The AI Dilemma

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

Artificial intelligence is in the middle of a perfect storm in the software industry, and now Mark Zuckerberg is calling for open-source AI.

Three powerful perspectives are colliding on how to control AI:

  1. All AI should be open source for sharing and transparency.
  2. Keep AI closed-source and allow big tech companies to control it.
  3. Establish regulations for the use of AI.

There are a few facts that make this debate tricky. First, having a model’s source code tells you almost nothing about how the model will behave; openness in AI requires far more than providing source code. Second, AI comes in many different flavors and is used to solve a broad range of problems, from traditional AI for fraud detection and targeted advertising to generative AI chatbots that, on the surface, produce human-like results, pushing us ever closer to the ultimate (and scary) goal of Artificial General Intelligence (AGI). Finally, the three approaches listed above for controlling AI all have a proven track record of improving software in general.

Understanding the Different Perspectives

Let’s discuss the different perspectives listed above in more detail.

Perspective #1 — All AI should be open source for sharing and transparency: This comes from a push for transparency with AI. Open source is a proven way to share and improve software. It provides complete transparency when used for conventional software. Open source software has propelled the software industry forward by leaps and bounds.

Perspective #2 — Keep AI closed-source and allow big tech companies to control it: Closed-source, or proprietary software, is the idea that an invention can be kept a secret, away from the competition, to maximize financial gain. To open source idealists, this sounds downright evil; however, it is more of a philosophical choice than one that exists on the spectrum of good and evil. Most software is proprietary, and that is not inherently bad — it is the foundation of a competitive and healthy ecosystem. It is a fundamental right of any innovator who creates something new to choose the closed-source path. The question becomes, if you operate without transparency, what guarantees can there be around responsible AI?

Perspective #3 — Establish regulations for using AI: This comes from lawmakers and elected officials pushing for regulation. The basic idea is that if a public function or technology is so powerful that bad actors or irresponsible management could hurt the general public, a government agency should be appointed to develop and enforce controls. One school of thought suggests that the current leaders in AI also want regulation, but for less pure reasons: they want to freeze the playing field with themselves in the lead. Here, we will focus primarily on the public-good motivation.

The True Nature of Open Source

Before generative AI burst onto the scene, most software running in data centers was conventional. If you have the source code for traditional software, you can determine precisely what it does. An engineer fluent in the appropriate programming language can review the code and determine its logic. You can even modify it and alter its behavior. In short, the true nature of open source software is to provide everything you need to understand the software’s behavior and to change it.

For a model to be fully open, you need the training data, the source code of the model, the hyperparameters used during training, and, of course, the trained model itself, which is composed of the billions (and soon trillions) of parameters that store the model’s knowledge — also known as parametric memory. Now, some organizations only provide the model, keep everything else to themselves, and claim it is “open source.” This practice is known as “open-washing” and is generally frowned upon by both the open and closed-source communities as disingenuous. I would like to see a new term used for AI models that are partially shared. Maybe “partially open model” or “model from an open washing company.”

There is one final rub when it comes to fully shared models. Let’s say an organization wants to do the right thing and shares everything about a model — the training data, the source code, the hyperparameters, and the trained model. You still can’t determine precisely how it will behave unless you test it extensively. The parametric memory that determines behavior is not human-readable. Again, the industry needs a different term for fully open models. A term that is different from “open source,” which should only be used for non-AI software because the source code of a model does not help determine the behavior of the model. Perhaps “open model.”

Common Arguments

Let’s look at some common arguments that endorse only one of the previously described perspectives. Each argument has passionate defenders, but that passion can cloud judgment.

Argument: Closed AI supporters claim that big tech companies have the means to guard against potential dangers and abuse. Therefore, AI should be kept private and out of the open source community.

Rebuttal: Big tech companies have the means to guard against potential abuse, but that does not mean they will do so judiciously, or at all. Furthermore, guarding against abuse is not their only objective; their primary purpose is making money for their shareholders, and that will always take precedence.

Argument: Those who think that AI could threaten humanity like to ask, “Would you open source the Manhattan Project?”

Rebuttal: This is an argument for governance. However, it is an unfair and incorrect analogy. The purpose of the Manhattan Project was to build a bomb during wartime by using radioactive materials to produce nuclear fission. Nuclear fission is not a general-purpose technology that can be applied to different tasks. You can make a bomb and generate power — that’s it. The ingredients and the results are dangerous to the general public, so all aspects should be regulated. AI is much different. As described above, it comes in varying flavors with varying risks.

Argument: Proponents of open sourcing AI say that open source facilitates the sharing of science, provides transparency, and is a means to prevent a few from monopolizing a powerful technology.

Rebuttal: This is mostly true, but not entirely. Open source does provide sharing. For an AI model, however, it provides only partial transparency. Finally, whether “open models” will prevent a few from monopolizing their power is debatable. To run a model like ChatGPT at scale, you need compute resources that only a few companies can acquire.

Needs of the Many Outweigh the Needs of the Few

In “Star Trek II: The Wrath of Khan,” Spock dies from radiation poisoning. Spock realizes that the ship’s main engines must be repaired to facilitate an escape, but the engine room is flooded with lethal radiation. Despite the danger, Spock enters the radiation-filled chamber to make the necessary repairs. He successfully restores the warp drive, allowing the Enterprise to reach a safe distance. Unfortunately, Vulcans are not immune to radiation. His dying words to Captain Kirk explain the logic behind his actions, “The needs of the many outweigh the needs of the few or the one.”

This is perfectly sound logic and will have to be used to control AI. Specific models pose a risk to the general public. For these models, the general public’s needs outweigh innovators’ rights.

Should All AI Be Open Source?

Let’s review the axioms established thus far:

  • Open Source should remain a choice.
  • Open models are not as transparent as non-AI software that is open sourced.
  • Closed source is a right of the innovator.
  • There is no guarantee that big tech will correctly control their AI.
  • The needs of the general public must take precedence over all others.

The five bullets above represent everything I tried to make clear about open source, closed source, and regulations. If you believe them to be accurate, the answer to the question, “Should all AI be open source?” is no: open source alone will not control AI, and neither will closed source. Furthermore, in a fair world, open source and open models should remain a choice, and closed source should remain a right.

We can go one step further and talk about the actions the industry can take as a whole to move toward effective control of AI:

  • Determine the types of models that pose a risk to the general public. Because they control information (chatbots) or dangerous resources (automated cars), models with high risk should be regulated.
  • Organizations should be encouraged to share their models as fully open models. The open source community will need to step up and either prevent or label models that are only partially shared. The open source community should also put together tests that can be used to rate models.
  • Closed models should still be allowed if they do not pose a risk to the general public. Big Tech should develop its controls and tests that it funds and shares. This may be a chance for Big Tech to work closely with the open source community to solve a common problem.

Posted in Vulnerability. Tagged: Cyber Attacks, Data Security, vulnerability.

How To Run an Agent on Federated Language Model Architecture

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

In the first part of this series, I introduced the idea of federated language models, where we take advantage of a capable cloud-based large language model (LLM) and a small language model (SLM) running at the edge.

To recap, an agent sends the user query (1) along with the available tools (2) to a cloud-based LLM to map the prompt into a set of functions and arguments (3). It then executes the functions to generate appropriate context from a database (4a). If there are no tools involved, it leverages the simple RAG mechanism to perform a semantic search in a vector database (4b). The context gathered from either of the sources is then sent (5) to an edge-based SLM to generate a factually correct response. The response (6) generated by the SLM is sent as the final answer (7) to the user query.

This tutorial focuses on the practical implementation of a federated LM architecture based on the below components:

  • OpenAI GPT-4 Omni as an LLM.
  • Microsoft Phi-3 as an SLM.
  • Ollama as the inference engine for Phi-3.
  • Nvidia Jetson AGX Orin as the edge device to run Ollama.
  • MySQL database and a Flask API server running locally.
  • Chroma as the local vector store for semantic search.

Refer to the tutorial on setting up Ollama on Jetson Orin and implementing the RAG agent for additional context and details.

Start by cloning the Git repository https://github.com/janakiramm/federated-llm.git, which has the scripts, data, and Jupyter Notebooks. This tutorial assumes that you have access to OpenAI and an Nvidia Jetson Orin device. You can also run Ollama on your local workstation and change the IP address in the code.

Step 1: Run Ollama on Jetson Orin

SSH into Jetson Orin and run the commands mentioned in the file, setup_ollama.sh.

Verify that you are able to connect to and access the model by running the below command on your workstation, where you run Jupyter Notebook.

curl http://localhost:11434/v1/chat/completions \
    -H "Content-Type: application/json" \
    -d '{
        "model": "phi3:mini",
        "messages": [
            {
                "role": "system",
                "content": "You are a helpful assistant."
            },
            {
                "role": "user",
                "content": "What is the capital of France?"
            }
        ]
    }'


Replace localhost with the IP address of your Jetson Orin device. If everything goes well, you should be able to get a response from the model.
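The same check can be scripted in Python using only the standard library. This is a minimal sketch: it assumes Ollama's default OpenAI-compatible endpoint on port 11434, and `build_chat_request` and `ask` are illustrative helper names, not part of the tutorial's repository. Replace `localhost` with your Jetson Orin's IP address.

```python
import json
import urllib.request

# Assumption: Ollama's default OpenAI-compatible endpoint on port 11434.
OLLAMA_HOST = "http://localhost:11434"

def build_chat_request(model: str, system: str, user: str) -> urllib.request.Request:
    """Construct the same /v1/chat/completions request the curl command sends."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }
    return urllib.request.Request(
        f"{OLLAMA_HOST}/v1/chat/completions",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

def ask(model: str, question: str) -> str:
    """Send the request and return the assistant's reply (needs a live server)."""
    req = build_chat_request(model, "You are a helpful assistant.", question)
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

Once the server is reachable, `ask("phi3:mini", "What is the capital of France?")` should return a short answer from the model.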

Congratulations, your edge inference server is now ready!

Step 2: Run MySQL DB and Flask API Server

The next step is to run the API server, which exposes a set of functions that will be mapped to the prompt. To make this simple, I built a Docker Compose file that exposes the REST API endpoint for the MySQL database backend.

Switch to the api folder and run the below command:

start_api_server.sh


Check if two containers are running on your workstation with the docker ps command.

If you run the command curl "http://localhost:5000/api/sales/trends?start_date=2023-05-01&end_date=2023-05-30", you should see the response from the API.

Step 3: Index the PDF and Ingest the Embeddings in Chroma DB

With the API in place, it’s time to generate the embeddings for the datasheet PDF and store them in the vector database.

For this, run the Index_Datasheet.ipynb Jupyter Notebook, which is available in the notebooks folder.

A simple search retrieves the phrases that semantically match the query.
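Under the hood, a semantic search ranks stored passages by vector similarity to the query. The toy sketch below illustrates the ranking step with a bag-of-words embedding and cosine similarity; a real vector store such as Chroma uses dense neural embeddings instead, so treat this purely as an illustration of the idea.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": a bag-of-words term-frequency vector. A real vector
    # store like Chroma uses dense neural embeddings here.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse term-frequency vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

def search(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Rank stored passages by similarity to the query and keep the best top_k."""
    q = embed(query)
    ranked = sorted(documents, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:top_k]
```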

Step 4: Run the Federated LM Agent

The Jupyter Notebook, Federated-LM.ipynb, has the complete code to implement the logic. Let’s understand the key sections of the code.

We will import the API client that exposes the tools to the LLM.

First, we initialize two models: GPT-4o (the cloud LLM) and Phi3:mini (the edge SLM).

After creating a Python list with the signatures of the tools, we will let GPT-4o map the prompt to appropriate functions and their arguments.

For example, passing the prompt What was the top selling product in Q2 based on revenue? to GPT-4o results in the model responding with the function get_top_selling_products and the corresponding arguments. Notice that a capable model is able to translate Q2 into a date range running from April 1 to June 30. This is exactly the power we want to exploit from the cloud-based LLM.

Once we enumerate the tool(s) suggested by GPT-4o, we execute, collect, and aggregate the output to form the context.
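That execute-and-aggregate step can be sketched as a simple dispatch over a registry of local functions. The function name below matches the one GPT-4o selects in the example above; the argument names and the canned return value are illustrative assumptions, not the notebook's actual implementation.

```python
import json

# Hypothetical local implementation of the tool GPT-4o selects. The canned
# return value stands in for a real query against the Flask/MySQL backend.
def get_top_selling_products(start_date: str, end_date: str) -> dict:
    return {"product": "Widget Pro", "revenue": 125000,
            "period": [start_date, end_date]}

TOOL_REGISTRY = {"get_top_selling_products": get_top_selling_products}

def execute_tool_calls(tool_calls: list[dict]) -> list[dict]:
    """Dispatch each (name, JSON-encoded arguments) pair returned by the LLM
    and aggregate the outputs into a context list."""
    context = []
    for call in tool_calls:
        fn = TOOL_REGISTRY[call["name"]]      # look up the local function
        args = json.loads(call["arguments"])  # arguments arrive as JSON text
        context.append(fn(**args))
    return context
```

With the Q2 prompt, GPT-4o would return arguments such as `{"start_date": "2023-04-01", "end_date": "2023-06-30"}`, which dispatch straight through this registry.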

If the prompt doesn’t translate to tools, we attempt to use the retriever based on the semantic search from the vector database.

To avoid sending sensitive context to the cloud-based LLM, we leverage the model (edge_llm) at the edge for generation.

Finally, we implement the agent that orchestrates the calls between the cloud-based LLM and the edge-based LLM. It checks if the tools list is empty and then moves to the retriever to generate the context. If both are empty, the agent responds with the phrase “I don’t know.”
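The orchestration logic described above can be sketched as follows, with the cloud LLM planner, the retriever, and the edge SLM passed in as plain callables. The real notebook wires these to GPT-4o, Chroma, and Phi-3; the stubs here are assumptions for illustration.

```python
# Minimal sketch of the agent's decision flow. plan_tools, retrieve, and
# generate are caller-supplied callables standing in for GPT-4o function
# calling, Chroma semantic search, and the edge SLM respectively.
def run_agent(query: str, plan_tools, retrieve, generate) -> str:
    context = plan_tools(query)      # 1. try function calling via the cloud LLM
    if not context:
        context = retrieve(query)    # 2. fall back to semantic search (RAG)
    if not context:
        return "I don't know."       # 3. nothing could ground the query
    return generate(query, context)  # 4. the edge SLM writes the final answer
```

For instance, `run_agent(q, tools, rag, edge_llm)` returns the edge model's answer when either source yields context, and the fixed phrase otherwise.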

Below is the response from the agent based on tools, retriever, and unknown context.

To summarize, we implemented a federated LLM approach where an agent sends the user query along with available tools to a cloud-based LLM, which maps the prompt into functions and arguments. The agent executes these functions to generate context from a database. If no tools are involved, a simple RAG mechanism is used for semantic search in a vector database. The context is then sent to an edge-based SLM to generate a factually correct response, which is provided as the final answer to the user query.

Posted in Vulnerability. Tagged: Cyber Attacks, Data Security, Model Architecture.

SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

Cybersecurity researchers have uncovered security shortcomings in the SAP AI Core cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows that could be exploited to get hold of access tokens and customer data.

The five vulnerabilities have been collectively dubbed SAPwned by cloud security firm Wiz.

“The vulnerabilities we found could have allowed attackers to access customers’ data and contaminate internal artifacts – spreading to related services and other customers’ environments,” security researcher Hillai Ben-Sasson said in a report shared with The Hacker News.

Following responsible disclosure on January 25, 2024, the weaknesses were addressed by SAP as of May 15, 2024.

In a nutshell, the flaws make it possible to obtain unauthorized access to customers’ private artifacts and credentials to cloud environments like Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud.

They could also be used to modify Docker images on SAP’s internal container registry, SAP’s Docker images on the Google Container Registry, and artifacts hosted on SAP’s internal Artifactory server, resulting in a supply chain attack on SAP AI Core services.

Furthermore, the access could be weaponized to gain cluster administrator privileges on SAP AI Core’s Kubernetes cluster by taking advantage of the fact that the Helm package manager server was exposed to both read and write operations.

“Using this access level, an attacker could directly access other customer’s Pods and steal sensitive data, such as models, datasets, and code,” Ben-Sasson explained. “This access also allows attackers to interfere with customer’s Pods, taint AI data and manipulate models’ inference.”

Wiz said the issues arise due to the platform making it feasible to run malicious AI models and training procedures without adequate isolation and sandboxing mechanisms.

“The recent security flaws in AI service providers like Hugging Face, Replicate, and SAP AI Core highlight significant vulnerabilities in their tenant isolation and segmentation implementations,” Ben-Sasson told The Hacker News. “These platforms allow users to run untrusted AI models and training procedures in shared environments, increasing the risk of malicious users being able to access other users’ data.”

“Unlike veteran cloud providers who have vast experience with tenant-isolation practices and use robust isolation techniques like virtual machines, these newer services often lack this knowledge and rely on containerization, which offers weaker security. This underscores the need to raise awareness of the importance of tenant isolation and to push the AI service industry to harden their environments.”

As a result, a threat actor could create a regular AI application on SAP AI Core, bypass network restrictions, and probe the Kubernetes Pod’s internal network to obtain AWS tokens and access customer code and training datasets by exploiting misconfigurations in AWS Elastic File System (EFS) shares.

“People should be aware that AI models are essentially code. When running AI models on your own infrastructure, you could be exposed to potential supply chain attacks,” Ben-Sasson said.

“Only run trusted models from trusted sources, and properly separate between external models and sensitive infrastructure. When using AI services providers, it’s important to verify their tenant-isolation architecture and ensure they apply best practices.”

The findings come as Netskope revealed that the growing enterprise use of generative AI has prompted organizations to use blocking controls, data loss prevention (DLP) tools, real-time coaching, and other mechanisms to mitigate risk.

“Regulated data (data that organizations have a legal duty to protect) makes up more than a third of the sensitive data being shared with generative AI (genAI) applications — presenting a potential risk to businesses of costly data breaches,” the company said.

They also follow the emergence of a new cybercriminal threat group called NullBulge, which has trained its sights on AI- and gaming-focused entities since April 2024. The group aims to steal sensitive data and sell compromised OpenAI API keys in underground forums, all while claiming to be a hacktivist crew “protecting artists around the world” against AI.

“NullBulge targets the software supply chain by weaponizing code in publicly available repositories on GitHub and Hugging Face, leading victims to import malicious libraries, or through mod packs used by gaming and modeling software,” SentinelOne security researcher Jim Walter said.

“The group uses tools like AsyncRAT and XWorm before delivering LockBit payloads built using the leaked LockBit Black builder. Groups like NullBulge represent the ongoing threat of low-barrier-of-entry ransomware, combined with the evergreen effect of info-stealer infections.”

Posted in Cyber Attacks, Vulnerability. Tagged: Cyber Attacks, Data Security, vulnerability.

SolarWinds Patches 8 Critical Flaws in Access Rights Manager Software

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

SolarWinds has addressed a set of critical security flaws impacting its Access Rights Manager (ARM) software that could be exploited to access sensitive information or execute arbitrary code.

Of the 13 vulnerabilities, eight are rated Critical in severity and carry a CVSS score of 9.6 out of 10.0. The remaining five weaknesses have been rated High in severity, with four of them having a CVSS score of 7.6 and one scoring 8.3.

The most severe of the flaws are listed below –

  • CVE-2024-23472 – SolarWinds ARM Directory Traversal Arbitrary File Deletion and Information Disclosure Vulnerability
  • CVE-2024-28074 – SolarWinds ARM Internal Deserialization Remote Code Execution Vulnerability
  • CVE-2024-23469 – Solarwinds ARM Exposed Dangerous Method Remote Code Execution Vulnerability
  • CVE-2024-23475 – Solarwinds ARM Traversal and Information Disclosure Vulnerability
  • CVE-2024-23467 – Solarwinds ARM Traversal Remote Code Execution Vulnerability
  • CVE-2024-23466 – Solarwinds ARM Directory Traversal Remote Code Execution Vulnerability
  • CVE-2024-23470 – Solarwinds ARM UserScriptHumster Exposed Dangerous Method Remote Command Execution Vulnerability
  • CVE-2024-23471 – Solarwinds ARM CreateFile Directory Traversal Remote Code Execution Vulnerability

Successful exploitation of the aforementioned vulnerabilities could allow an attacker to read and delete files and execute code with elevated privileges.

The shortcomings have been addressed in version 2024.3 released on July 17, 2024, following responsible disclosure as part of the Trend Micro Zero Day Initiative (ZDI).

The development comes after the U.S. Cybersecurity and Infrastructure Security Agency (CISA) added a high-severity path traversal flaw in SolarWinds Serv-U (CVE-2024-28995, CVSS score: 8.6) to its Known Exploited Vulnerabilities (KEV) catalog following reports of active exploitation in the wild.

The network security company was the victim of a major supply chain attack in 2020 after the update mechanism associated with its Orion network management platform was compromised by Russian APT29 hackers to distribute malicious code to downstream customers as part of a high-profile cyber espionage campaign.

The breach prompted the U.S. Securities and Exchange Commission (SEC) to file a lawsuit against SolarWinds and its chief information security officer (CISO) last October alleging the company failed to disclose adequate material information to investors regarding cybersecurity risks.

However, much of the claims pertaining to the lawsuit were thrown out by the U.S. District Court for the Southern District of New York (SDNY) on July 18, stating “these do not plausibly plead actionable deficiencies in the company’s reporting of the cybersecurity hack” and that they “impermissibly rely on hindsight and speculation.”

Posted in Vulnerability. Tagged: Cyber Attacks, Data Security, malware, vulnerability.

SocGholish Malware Exploits BOINC Project for Covert Cyberattacks

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

The JavaScript downloader malware known as SocGholish (aka FakeUpdates) is being used to deliver a remote access trojan called AsyncRAT as well as a legitimate open-source project called BOINC.

BOINC, short for Berkeley Open Infrastructure for Network Computing, is an open-source “volunteer computing” platform maintained by the University of California with an aim to carry out “large-scale distributed high-throughput computing” using participating home computers on which the app is installed.

“It’s similar to a cryptocurrency miner in that way (using computer resources to do work), and it’s actually designed to reward users with a specific type of cryptocurrency called Gridcoin, designed for this purpose,” Huntress researchers Matt Anderson, Alden Schmidt, and Greg Linares said in a report published last week.

These malicious BOINC installations are designed to connect to an actor-controlled domain (“rosettahome[.]cn” or “rosettahome[.]top”), essentially acting as a command-and-control (C2) server to collect host data, transmit payloads, and push further commands. As of July 15, 10,032 clients were connected to the two domains.

The cybersecurity firm said while it hasn’t observed any follow-on activity or tasks being executed by the infected hosts, it hypothesized that the “host connections could be sold off as initial access vectors to be used by other actors and potentially used to execute ransomware.”

SocGholish attack sequences typically begin when users land on compromised websites, where they are prompted to download a fake browser update that, upon execution, triggers the retrieval of additional payloads to the infiltrated machines.

The JavaScript downloader, in this case, activates two disjointed chains, one that leads to the deployment of a fileless variant of AsyncRAT and the other resulting in the BOINC installation.

The BOINC app, which is renamed as “SecurityHealthService.exe” or “trustedinstaller.exe” to evade detection, sets up persistence using a scheduled task by means of a PowerShell script.

The misuse of BOINC for malicious purposes hasn’t gone unnoticed by the project maintainers, who are currently investigating the problem and finding a way to “defeat this malware.” Evidence of the abuse dates back to at least June 26, 2024.

“The motivation and intent of the threat actor by loading this software onto infected hosts isn’t clear at this point,” the researchers said.

“Infected clients actively connecting to malicious BOINC servers present a fairly high risk, as there’s potential for a motivated threat actor to misuse this connection and execute any number of malicious commands or software on the host to further escalate privileges or move laterally through a network and compromise an entire domain.”

The development comes as Check Point said it’s been tracking the use of compiled V8 JavaScript by malware authors to sidestep static detections and conceal remote access trojans, stealers, loaders, cryptocurrency miners, wipers, and ransomware.

“In the ongoing battle between security experts and threat actors, malware developers keep coming up with new tricks to hide their attacks,” security researcher Moshe Marelus said. “It’s not surprising that they’ve started using V8, as this technology is commonly used to create software as it is very widespread and extremely hard to analyze.”

Posted in Vulnerability. Tagged: Cyber Attacks, Data Security, malware, vulnerability.

Chinese Hackers Target Taiwan and U.S. NGO with MgBot and MACMA Malware

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

Organizations in Taiwan and a U.S. non-governmental organization (NGO) based in China have been targeted by a Beijing-affiliated state-sponsored hacking group called Daggerfly using an upgraded set of malware tools.

The campaign is a sign that the group “also engages in internal espionage,” Symantec’s Threat Hunter Team, part of Broadcom, said in a new report published today. “In the attack on this organization, the attackers exploited a vulnerability in an Apache HTTP server to deliver their MgBot malware.”

Daggerfly, also known by the names Bronze Highland and Evasive Panda, was previously observed using the MgBot modular malware framework in connection with an intelligence-gathering mission aimed at telecom service providers in Africa. It’s known to be operational since 2012.

“Daggerfly appears to be capable of responding to exposure by quickly updating its toolset to continue its espionage activities with minimal disruption,” the company noted.

The latest set of attacks are characterized by the use of a new malware family based on MgBot as well as an improved version of a known Apple macOS malware called MACMA, which was first exposed by Google’s Threat Analysis Group (TAG) in November 2021 as distributed via watering hole attacks targeting internet users in Hong Kong by abusing security flaws in the Safari browser.

The development marks the first time the malware strain, which is capable of harvesting sensitive information and executing arbitrary commands, has been explicitly linked to a particular hacking group.

“The actors behind macOS.MACMA at least were reusing code from ELF/Android developers and possibly could have also been targeting Android phones with malware as well,” SentinelOne noted in a subsequent analysis at the time.

MACMA’s connections to Daggerfly also stem from source code overlaps between the malware and MgBot, and from the fact that it connects to a command-and-control (C2) server (103.243.212[.]98) that has also been used by a MgBot dropper.

Another new malware in its arsenal is Nightdoor (aka NetMM and Suzafk), an implant that uses Google Drive API for C2 and has been utilized in watering hole attacks aimed at Tibetan users since at least September 2023. Details of the activity were first documented by ESET earlier this March.

“The group can create versions of its tools targeting most major operating system platform,” Symantec said, adding it has “seen evidence of the ability to trojanize Android APKs, SMS interception tools, DNS request interception tools, and even malware families targeting Solaris OS.”

The development comes as China’s National Computer Virus Emergency Response Center (CVERC) claimed Volt Typhoon – which has been attributed by the Five Eyes nations as a China-nexus espionage group – to be an invention of the U.S. intelligence agencies, describing it as a misinformation campaign.

“Although its main targets are U.S. congress and American people, it also attempt[s] to defame China, sow discords [sic] between China and other countries, contain China’s development, and rob Chinese companies,” the CVERC asserted in a recent report.

Posted in Vulnerability. Tagged: Cyber Attacks, Data Security, malware, vulnerability.

CISA Adds Twilio Authy and IE Flaws to Exploited Vulnerabilities List

Posted on August 4, 2024 - August 4, 2024 by Maq Verma

The U.S. Cybersecurity and Infrastructure Security Agency (CISA) has added two security flaws to its Known Exploited Vulnerabilities (KEV) catalog, based on evidence of active exploitation.

The vulnerabilities are listed below –

  • CVE-2012-4792 (CVSS score: 9.3) – Microsoft Internet Explorer Use-After-Free Vulnerability
  • CVE-2024-39891 (CVSS score: 5.3) – Twilio Authy Information Disclosure Vulnerability

CVE-2012-4792 is a decade-old use-after-free vulnerability in Internet Explorer that could allow a remote attacker to execute arbitrary code via a specially crafted site.

It’s currently not clear if the flaw has been subjected to renewed exploitation attempts, although it was abused as part of watering hole attacks targeting the Council on Foreign Relations (CFR) and Capstone Turbine Corporation websites back in December 2012.

On the other hand, CVE-2024-39891 refers to an information disclosure bug in an unauthenticated endpoint that could be exploited to “accept a request containing a phone number and respond with information about whether the phone number was registered with Authy.”
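The textbook mitigation for this class of bug is to make the endpoint's response independent of whether the queried account exists. The sketch below is a generic illustration of that pattern, not Twilio's actual code; the phone numbers and function names are hypothetical.

```python
# Generic illustration of user-enumeration hardening, not Twilio's code.
REGISTERED = {"+14155550100"}  # hypothetical stand-in for the account datastore

def leaky_lookup(phone: str) -> dict:
    # Vulnerable pattern: the response reveals whether a number is registered.
    return {"registered": phone in REGISTERED}

def hardened_lookup(phone: str) -> dict:
    # Hardened pattern: accept the request but return an identical response
    # either way; any account-specific detail is delivered out of band
    # (e.g. via SMS to the number itself).
    return {"status": "ok"}
```

Because `hardened_lookup` responds identically for registered and unregistered numbers, an attacker can no longer use the endpoint to enumerate accounts.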

Earlier this month, Twilio said it resolved the issue in versions 25.1.0 (Android) and 26.1.0 (iOS) after unidentified threat actors took advantage of the shortcoming to identify data associated with Authy accounts.

“These types of vulnerabilities are frequent attack vectors for malicious cyber actors and pose significant risks to the federal enterprise,” CISA said in an advisory.

Federal Civilian Executive Branch (FCEB) agencies are required to remediate the identified vulnerabilities by August 13, 2024, to protect their networks against active threats.


Microsoft Defender Flaw Exploited to Deliver ACR, Lumma, and Meduza Stealers

Posted on August 4, 2024 by Maq Verma

A now-patched security flaw in the Microsoft Defender SmartScreen has been exploited as part of a new campaign designed to deliver information stealers such as ACR Stealer, Lumma, and Meduza.

Fortinet FortiGuard Labs said it detected the stealer campaign targeting Spain, Thailand, and the U.S. using booby-trapped files that exploit CVE-2024-21412 (CVSS score: 8.1).

The high-severity vulnerability allows an attacker to sidestep SmartScreen protection and drop malicious payloads. Microsoft addressed this issue as part of its monthly security updates released in February 2024.

“Initially, attackers lure victims into clicking a crafted link to a URL file designed to download an LNK file,” security researcher Cara Lin said. “The LNK file then downloads an executable file containing an [HTML Application] script.”

The HTA file serves as a conduit to decode and decrypt PowerShell code responsible for fetching a decoy PDF file and a shellcode injector that, in turn, either leads to the deployment of Meduza Stealer or Hijack Loader, which subsequently launches ACR Stealer or Lumma.

ACR Stealer, assessed to be an evolved version of the GrMsk Stealer, was advertised in late March 2024 by a threat actor named SheldIO on the Russian-language underground forum RAMP.

“This ACR stealer hides its [command-and-control] with a dead drop resolver (DDR) technique on the Steam community website,” Lin said, calling out its ability to siphon information from web browsers, crypto wallets, messaging apps, FTP clients, email clients, VPN services, and password managers.

ACR, Lumma, and Meduza Stealers

It’s worth noting that recent Lumma Stealer attacks have also been observed utilizing the same technique, making it easier for the adversaries to change the C2 domains at any time and render the infrastructure more resilient, according to the AhnLab Security Intelligence Center (ASEC).

The disclosure comes as CrowdStrike has revealed that threat actors are leveraging last week’s outage to distribute a previously undocumented information stealer called Daolpu, making it the latest example of the ongoing fallout stemming from the faulty update that has crippled millions of Windows devices.

The attack involves the use of a macro-laced Microsoft Word document that masquerades as a Microsoft recovery manual listing legitimate instructions issued by the Windows maker to resolve the issue, leveraging it as a decoy to activate the infection process.

The DOCM file, when opened, runs the macro to retrieve a second-stage DLL file from a remote server, which is decoded to launch Daolpu, a stealer malware equipped to harvest credentials and cookies from Google Chrome, Microsoft Edge, Mozilla Firefox, and other Chromium-based browsers.

It also follows the emergence of new stealer malware families such as Braodo and DeerStealer, even as cyber criminals are exploiting malvertising techniques promoting legitimate software such as Microsoft Teams to deploy Atomic Stealer.

“As cyber criminals ramp up their distribution campaigns, it becomes more dangerous to download applications via search engines,” Malwarebytes researcher Jérôme Segura said. “Users have to navigate between malvertising (sponsored results) and SEO poisoning (compromised websites).”


CISA Warns of Exploitable Vulnerabilities in Popular BIND 9 DNS Software

Posted on August 4, 2024 by Maq Verma

The Internet Systems Consortium (ISC) has released patches to address multiple security vulnerabilities in the Berkeley Internet Name Domain (BIND) 9 Domain Name System (DNS) software suite that could be exploited to trigger a denial-of-service (DoS) condition.

“A cyber threat actor could exploit one of these vulnerabilities to cause a denial-of-service condition,” the U.S. Cybersecurity and Infrastructure Security Agency (CISA) said in an advisory.

The four vulnerabilities are listed below –

  • CVE-2024-4076 (CVSS score: 7.5) – Due to a logic error, lookups that triggered serving stale data and required lookups in local authoritative zone data could have resulted in an assertion failure
  • CVE-2024-1975 (CVSS score: 7.5) – Validating DNS messages signed using the SIG(0) protocol could cause excessive CPU load, leading to a denial-of-service condition.
  • CVE-2024-1737 (CVSS score: 7.5) – It is possible to craft excessively large numbers of resource record types for a given owner name, which has the effect of slowing down database processing
  • CVE-2024-0760 (CVSS score: 7.5) – A malicious DNS client that sent many queries over TCP but never read the responses could cause a server to respond slowly or not at all for other clients

Successful exploitation of the aforementioned bugs could cause a named instance to terminate unexpectedly, deplete available CPU resources, slow down query processing by a factor of 100, and render the server unresponsive.

The flaws have been addressed in BIND 9 versions 9.18.28, 9.20.0, and 9.18.28-S1 released earlier this month. There is no evidence that any of the shortcomings have been exploited in the wild.

The disclosure comes months after the ISC addressed another flaw in BIND 9 called KeyTrap (CVE-2023-50387, CVSS score: 7.5) that could be abused to exhaust CPU resources and stall DNS resolvers, resulting in a denial-of-service (DoS).


Critical Docker Engine Flaw Allows Attackers to Bypass Authorization Plugins

Posted on August 4, 2024 by Maq Verma

Docker is warning of a critical flaw impacting certain versions of Docker Engine that could allow an attacker to sidestep authorization plugins (AuthZ) under specific circumstances.

Tracked as CVE-2024-41110, the bypass and privilege escalation vulnerability carries a CVSS score of 10.0, indicating maximum severity.

“An attacker could exploit a bypass using an API request with Content-Length set to 0, causing the Docker daemon to forward the request without the body to the AuthZ plugin, which might approve the request incorrectly,” the Moby Project maintainers said in an advisory.

Docker said the issue is a regression: the flaw was originally discovered in 2018 and addressed in Docker Engine v18.09.1 in January 2019, but the fix was never carried over to subsequent versions (19.03 and later).

The issue has been resolved in versions 23.0.14 and 27.1.0 as of July 23, 2024, after the problem was identified in April 2024. The following versions of Docker Engine are impacted assuming AuthZ is used to make access control decisions –

  • <= v19.03.15
  • <= v20.10.27
  • <= v23.0.14
  • <= v24.0.9
  • <= v25.0.5
  • <= v26.0.2
  • <= v26.1.4
  • <= v27.0.3, and
  • <= v27.1.0

“Users of Docker Engine v19.03.x and later versions who do not rely on authorization plugins to make access control decisions and users of all versions of Mirantis Container Runtime are not vulnerable,” Docker’s Gabriela Georgieva said.

“Users of Docker commercial products and internal infrastructure who do not rely on AuthZ plugins are unaffected.”

It also affects Docker Desktop up to version 4.32.0, although the company said the likelihood of exploitation is limited, as it requires access to the Docker API and therefore an attacker who already has local access to the host. A fix is expected to be included in a forthcoming release (version 4.33).

“Default Docker Desktop configuration does not include AuthZ plugins,” Georgieva noted. “Privilege escalation is limited to the Docker Desktop [virtual machine], not the underlying host.”

Although Docker makes no mention of CVE-2024-41110 being exploited in the wild, it's essential that users update their installations to the latest version to mitigate potential threats.

Earlier this year, Docker moved to patch a set of flaws dubbed Leaky Vessels that could enable an attacker to gain unauthorized access to the host filesystem and break out of the container.

“As cloud services rise in popularity, so does the use of containers, which have become an integrated part of cloud infrastructure,” Palo Alto Networks Unit 42 said in a report published last week. “Although containers provide many advantages, they are also susceptible to attack techniques like container escapes.”

“Sharing the same kernel and often lacking complete isolation from the host’s user-mode, containers are susceptible to various techniques employed by attackers seeking to escape the confines of a container environment.”


Researchers Reveal ConfusedFunction Vulnerability in Google Cloud Platform

Posted on August 4, 2024 by Maq Verma

Cybersecurity researchers have disclosed a privilege escalation vulnerability impacting Google Cloud Platform’s Cloud Functions service that an attacker could exploit to access other services and sensitive data in an unauthorized manner.

Tenable has given the vulnerability the name ConfusedFunction.

“An attacker could escalate their privileges to the Default Cloud Build Service Account and access numerous services such as Cloud Build, storage (including the source code of other functions), artifact registry and container registry,” the exposure management company said in a statement.

“This access allows for lateral movement and privilege escalation in a victim’s project, to access unauthorized data and even update or delete it.”

Cloud Functions refers to a serverless execution environment that allows developers to create single-purpose functions that are triggered in response to specific Cloud events without the need to manage a server or update frameworks.

The problem discovered by Tenable has to do with the fact that a Cloud Build service account is created in the background and linked to a Cloud Build instance by default when a Cloud Function is created or updated.

This service account opens the door for potential malicious activity owing to its excessive permissions, thereby permitting an attacker with access to create or update a Cloud Function to leverage this loophole and escalate their privileges to the service account.

This permission could then be abused to access other Google Cloud services that are also created in tandem with the Cloud Function, including Cloud Storage, Artifact Registry, and Container Registry. In a hypothetical attack scenario, ConfusedFunction could be exploited to leak the Cloud Build service account token via a webhook.
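One hedged way to review this exposure in your own project is to list which IAM roles the default Cloud Build service account currently holds. The sketch below assumes the gcloud CLI is available and uses a placeholder PROJECT_ID; it only prints a hint when gcloud is not installed.

```shell
# Hedged sketch: list roles granted to the default Cloud Build service
# account (named <project-number>@cloudbuild.gserviceaccount.com).
# PROJECT_ID is a placeholder -- substitute your own project.
PROJECT_ID="my-gcp-project"
if command -v gcloud >/dev/null 2>&1; then
  gcloud projects get-iam-policy "$PROJECT_ID" \
    --flatten="bindings[].members" \
    --filter="bindings.members:cloudbuild.gserviceaccount.com" \
    --format="table(bindings.role)"
  result="policy queried for $PROJECT_ID"
else
  result="gcloud not installed; skipping IAM review"
fi
echo "$result"
```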

ConfusedFunction Vulnerability

Following responsible disclosure, Google has updated the default behavior such that Cloud Build uses the Compute Engine default service account to prevent misuse. However, it’s worth noting that these changes do not apply to existing instances.

“The ConfusedFunction vulnerability highlights the problematic scenarios that may arise due to software complexity and inter-service communication in a cloud provider’s services,” Tenable researcher Liv Matan said.

“While the GCP fix has reduced the severity of the problem for future deployments, it didn’t completely eliminate it. That’s because the deployment of a Cloud Function still triggers the creation of the aforementioned GCP services. As a result, users must still assign minimum but still relatively broad permissions to the Cloud Build service account as part of a function’s deployment.”

The development comes as Outpost24 detailed a medium-severity cross-site scripting (XSS) flaw in the Oracle Integration Cloud Platform that could be weaponized to inject malicious code into the application.

The flaw, which is rooted in the handling of the “consumer_url” parameter, was resolved by Oracle in its Critical Patch Update (CPU) released earlier this month.

“The page for creating a new integration, found at https://<instanceid>.integration.ocp.oraclecloud.com/ic/integration/home/faces/link?page=integration&consumer_url=<payload>, did not require any other parameters,” security researcher Filip Nyquist said.

ConfusedFunction Vulnerability

“This meant that an attacker would only need to identify the instance-id of the specific integration platform to send a functional payload to any user of the platform. Consequently, the attacker could bypass the requirement of knowing a specific integration ID, which is typically accessible only to logged-in users.”

It also follows Assetnote’s discovery of three security vulnerabilities in the ServiceNow cloud computing platform (CVE-2024-4879, CVE-2024-5178, and CVE-2024-5217) that could be fashioned into an exploit chain in order to gain full database access and execute arbitrary code within the context of the Now Platform.

The ServiceNow shortcomings have since come under active exploitation by unknown threat actors as part of a “global reconnaissance campaign” designed to gather database details, such as user lists and account credentials, from susceptible instances.

The activity, targeting companies in various industry verticals such as energy, data centers, software development, and government entities in the Middle East, could be leveraged for “cyber espionage and further targeting,” Resecurity said.

ServiceNow, in a statement shared with The Hacker News, said it has “not observed evidence that the activity […] is related to instances that ServiceNow hosts.”

“We have encouraged our self-hosted and ServiceNow-hosted customers to apply relevant patches if they have not already done so. We will also continue to work directly with customers who need assistance in applying those patches. It is important to note that these are not new vulnerabilities, but rather were previously addressed and disclosed in CVE-2024-4879, CVE-2024-5217, and CVE-2024-5178.”

(The story was updated after publication to include details about active exploitation of ServiceNow flaws.)


SAP AI Core Vulnerabilities Expose Customer Data to Cyber Attacks

Posted on August 4, 2024 by Maq Verma

Cybersecurity researchers have uncovered security shortcomings in SAP AI Core, a cloud-based platform for creating and deploying predictive artificial intelligence (AI) workflows, that could be exploited to get hold of access tokens and customer data.

The five vulnerabilities have been collectively dubbed SAPwned by cloud security firm Wiz.

“The vulnerabilities we found could have allowed attackers to access customers’ data and contaminate internal artifacts – spreading to related services and other customers’ environments,” security researcher Hillai Ben-Sasson said in a report shared with The Hacker News.

Following responsible disclosure on January 25, 2024, the weaknesses were addressed by SAP as of May 15, 2024.

In a nutshell, the flaws make it possible to obtain unauthorized access to customers’ private artifacts and credentials to cloud environments like Amazon Web Services (AWS), Microsoft Azure, and SAP HANA Cloud.

They could also be used to modify Docker images on SAP’s internal container registry, SAP’s Docker images on the Google Container Registry, and artifacts hosted on SAP’s internal Artifactory server, resulting in a supply chain attack on SAP AI Core services.

Furthermore, the access could be weaponized to gain cluster administrator privileges on SAP AI Core’s Kubernetes cluster by taking advantage of the fact that the Helm package manager server was exposed to both read and write operations.

“Using this access level, an attacker could directly access other customer’s Pods and steal sensitive data, such as models, datasets, and code,” Ben-Sasson explained. “This access also allows attackers to interfere with customer’s Pods, taint AI data and manipulate models’ inference.”

Wiz said the issues arise due to the platform making it feasible to run malicious AI models and training procedures without adequate isolation and sandboxing mechanisms.

“The recent security flaws in AI service providers like Hugging Face, Replicate, and SAP AI Core highlight significant vulnerabilities in their tenant isolation and segmentation implementations,” Ben-Sasson told The Hacker News. “These platforms allow users to run untrusted AI models and training procedures in shared environments, increasing the risk of malicious users being able to access other users’ data.”

“Unlike veteran cloud providers who have vast experience with tenant-isolation practices and use robust isolation techniques like virtual machines, these newer services often lack this knowledge and rely on containerization, which offers weaker security. This underscores the need to raise awareness of the importance of tenant isolation and to push the AI service industry to harden their environments.”

As a result, a threat actor could create a regular AI application on SAP AI Core, bypass network restrictions, and probe the Kubernetes Pod’s internal network to obtain AWS tokens and access customer code and training datasets by exploiting misconfigurations in AWS Elastic File System (EFS) shares.

“People should be aware that AI models are essentially code. When running AI models on your own infrastructure, you could be exposed to potential supply chain attacks,” Ben-Sasson said.

“Only run trusted models from trusted sources, and properly separate between external models and sensitive infrastructure. When using AI services providers, it’s important to verify their tenant-isolation architecture and ensure they apply best practices.”

The findings come as Netskope revealed that the growing enterprise use of generative AI has prompted organizations to use blocking controls, data loss prevention (DLP) tools, real-time coaching, and other mechanisms to mitigate risk.

“Regulated data (data that organizations have a legal duty to protect) makes up more than a third of the sensitive data being shared with generative AI (genAI) applications — presenting a potential risk to businesses of costly data breaches,” the company said.

They also follow the emergence of a new cybercriminal threat group called NullBulge that has trained its sights on AI- and gaming-focused entities since April 2024 with an aim to steal sensitive data and sell compromised OpenAI API keys in underground forums while claiming to be a hacktivist crew “protecting artists around the world” against AI.

“NullBulge targets the software supply chain by weaponizing code in publicly available repositories on GitHub and Hugging Face, leading victims to import malicious libraries, or through mod packs used by gaming and modeling software,” SentinelOne security researcher Jim Walter said.

“The group uses tools like AsyncRAT and XWorm before delivering LockBit payloads built using the leaked LockBit Black builder. Groups like NullBulge represent the ongoing threat of low-barrier-of-entry ransomware, combined with the evergreen effect of info-stealer infections.”


Ongoing Cyberattack Targets Exposed Selenium Grid Services for Crypto Mining

Posted on August 4, 2024 by Maq Verma

Cybersecurity researchers are sounding the alarm over an ongoing campaign that’s leveraging internet-exposed Selenium Grid services for illicit cryptocurrency mining.

Cloud security firm Wiz is tracking the activity under the name SeleniumGreed. The campaign, which targets older versions of Selenium (3.141.59 and prior), is believed to have been underway since at least April 2023.

“Unbeknownst to most users, Selenium WebDriver API enables full interaction with the machine itself, including reading and downloading files, and running remote commands,” Wiz researchers Avigayil Mechtinger, Gili Tikochinski, and Dor Laska said.

“By default, authentication is not enabled for this service. This means that many publicly accessible instances are misconfigured and can be accessed by anyone and abused for malicious purposes.”

Selenium Grid, part of the Selenium automated testing framework, enables parallel execution of tests across multiple workloads, different browsers, and various browser versions.

Selenium Grid Services

“Selenium Grid must be protected from external access using appropriate firewall permissions,” the project maintainers warn in the support documentation, stating that failing to do so could allow third parties to run arbitrary binaries and access internal web applications and files.

Exactly who is behind the attack campaign is currently not known. However, it involves the threat actor targeting publicly exposed instances of Selenium Grid and making use of the WebDriver API to run Python code responsible for downloading and running an XMRig miner.


It starts with the adversary sending a request to the vulnerable Selenium Grid hub with an aim to execute a Python program containing a Base64-encoded payload that spawns a reverse shell to an attacker-controlled server (“164.90.149[.]104”) in order to fetch the final payload, a modified version of the open-source XMRig miner.

“Instead of hardcoding the pool IP in the miner configuration, they dynamically generate it at runtime,” the researchers explained. “They also set XMRig’s TLS-fingerprint feature within the added code (and within the configuration), ensuring the miner will only communicate with servers controlled by the threat actor.”

The IP address in question is said to belong to a legitimate service that has been compromised by the threat actor, as it has also been found to host a publicly exposed Selenium Grid instance.

Wiz said it’s possible to execute remote commands on newer versions of Selenium and that it identified more than 30,000 instances exposed to remote command execution, making it imperative that users take steps to close the misconfiguration.

“Selenium Grid is not designed to be exposed to the internet and its default configuration has no authentication enabled, so any user that has network access to the hub can interact with the nodes via API,” the researchers said.

“This poses a significant security risk if the service is deployed on a machine with a public IP that has inadequate firewall policy.”
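As a quick self-audit (not an exploitation tool), you can probe whether a grid you operate answers unauthenticated requests on its status endpoint. This sketch assumes curl and the Grid 4 `/status` route; older grids expose `/wd/hub/status` instead, and the host:port argument is whatever your own deployment uses.

```shell
# Probe a host:port you control for an exposed Selenium Grid status endpoint.
check_grid() {
  if [ -z "$1" ]; then
    echo "usage: check_grid <host:port>" >&2
    return 2
  fi
  if curl -s --max-time 5 "http://$1/status" | grep -q '"ready"'; then
    echo "grid answering unauthenticated at $1 -- firewall it"
  else
    echo "no open grid detected at $1"
  fi
}

check_grid 127.0.0.1:4444   # 4444 is the default Selenium Grid port
```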

Update

Selenium, in an advisory released on July 31, 2024, urged users to upgrade their instances to the latest version to mitigate against the threat.

“Selenium Grid by default doesn’t have any authentication as the assumption has always been that we want you to put this behind a secure network to prevent people from abusing your resources,” it said. “Another way to combat this is to use a cloud provider to run your Selenium Grid.”


Linux: Mount Remote Directories With SSHFS

Posted on August 4, 2024 by Maq Verma

The Secure Shell (SSH) isn’t just about allowing you to remote into servers to tackle admin tasks. Thanks to this secure networking protocol, you can also mount remote directories with the help of the SSH File System (SSHFS).

SSHFS uses SFTP (SSH File Transfer Protocol) to mount remote directories to a local machine using secure encryption, which means the connection is far more secure than standard FTP. As well, once a remote directory is mounted, it can be used as if it were on the local machine.

Consider SSHFS a more secure way of creating network shares; the only difference is that SSHFS must be installed on every machine that needs to connect to the share (whereas with Samba, it only has to be installed on the machine hosting the share).

Let’s walk through the process of getting SSHFS up and running, so you can securely mount remote directories to your local machine.

What You’ll Need

To make this work, you’ll need at least two Linux machines. These machines can be Ubuntu or Fedora-based, because SSHFS is found in the standard repositories for most Linux distributions. You’ll also need a user with sudo privileges.

Installing SSHFS

Since SSHFS is found in the standard repositories, the installation is quite simple. Log into the server (which will house the directory to share) and install SSHFS with one of the following commands:

  • Ubuntu-based distributions – sudo apt-get install sshfs -y
  • Fedora-based distributions – sudo dnf install fuse-sshfs -y
  • Arch-based distributions – sudo pacman -S sshfs
  • openSUSE-based distributions – sudo zypper -n in sshfs

Next, log into your local machine and install the package as well.
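Once installed on both machines, a quick sanity check confirms the binary landed on your PATH. This sketch degrades gracefully when sshfs isn't present:

```shell
# Report whether sshfs is available before proceeding.
if command -v sshfs >/dev/null 2>&1; then
  sshfs_status="sshfs installed: $(sshfs --version 2>&1 | head -n1)"
else
  sshfs_status="sshfs not found -- install it with your package manager first"
fi
echo "$sshfs_status"
```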

Once installed, you’ll need to set user_allow_other in the SSHFS config file on the local machine. For that, open the file with:

sudo nano /etc/fuse.conf


In that file, locate the line:

#user_allow_other


Change that to:

user_allow_other


Save and close the file.
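If you'd rather script the change than edit by hand, the same uncomment can be done with sed. The sketch below operates on a scratch copy so it is safe to dry-run; the real edit against /etc/fuse.conf (shown as a comment) requires root.

```shell
# Dry-run the edit on a scratch copy that mimics the stock comment form.
printf '# mount_max = 1000\n#user_allow_other\n' > /tmp/fuse.conf
sed -i 's/^#user_allow_other$/user_allow_other/' /tmp/fuse.conf
grep -x 'user_allow_other' /tmp/fuse.conf && echo "option enabled"

# The real edit (requires root):
# sudo sed -i 's/^#user_allow_other$/user_allow_other/' /etc/fuse.conf
```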

Creating a Directory for Mounting

Back on the server, we must create a directory that will be mounted on the client machines. We’ll place our new directory in /srv with the command:

sudo mkdir /srv/data


With the new directory created, we need to give it ownership, such that either a user or group can access it. If you only have one user who needs to access it, you can change the ownership with the command:

sudo chown -R USERNAME:USERNAME /srv/data


If you want to allow more than one user to access the directory, you’d need to first create a group with the command:

sudo groupadd GROUP


Where GROUP is the name of the new group.

Next, add the necessary users to the group (one at a time) with the command:

sudo usermod -aG GROUP USERNAME


Where GROUP is the name of the group and USERNAME is the name of the user to be added.

You would then need to change the ownership of the new directory to the new group with:

sudo chown -R USERNAME:GROUP /srv/data


On the local machine, you’ll have to create a directory that will house the mounted remote directory. We’ll create this in a user’s home directory with:

mkdir ~/data_mount

Mount the Directory

It’s now time to mount our remote directory. Remember, we’re mounting the remote directory /srv/data to the local directory ~/data_mount. This is done with the command:

sshfs USER@SERVER:/srv/data ~/data_mount


Where USER is the remote username and SERVER is the IP address of the remote server. You’ll be prompted for the remote user’s password. On successful authentication, the remote directory will be mounted to the local directory and you can access it as if it were native to the local machine. If you save or edit a file in ~/data_mount, it will be reflected in /srv/data on the remote machine.

This method of mounting is temporary (you can detach the share at any time with fusermount -u ~/data_mount). Let’s make it permanent.

Permanently Mount the Remote Drive

To permanently mount the SSHFS drive, you have to jump through a few hoops before it’ll work. First, you must create an SSH key pair (on the local machine) with the command:

ssh-keygen -t rsa


Make sure to give the key a strong, unique passphrase.

Once the key is generated, copy it to the server with the command:

ssh-copy-id USER@SERVER


Where USER is the remote user name and SERVER is the IP address of the remote server.

Let’s test the connection to ensure it’s working properly. From the local machine, SSH to the server with:

ssh USER@SERVER


Where USER is the remote username and SERVER is the IP address of the remote server. You should be prompted for the SSH key passphrase and not your user password. Once you’ve successfully authenticated, exit from the connection with the exit command.

To make this mount permanent, you need to modify the /etc/fstab file on the local machine. Open that file for editing with:

sudo nano /etc/fstab


At the bottom of the file, paste the following line:

USER1@SERVER:/srv/data /home/USER1/data_mount fuse.sshfs x-systemd.automount,_netdev,user,idmap=user,transform_symlinks,identityfile=/home/USER2/.ssh/id_rsa,allow_other,default_permissions,uid=USER_ID_N,gid=USER_GID_N 0 0


Where USER1 is the remote username, SERVER is the IP address of the server, USER2 is the username on the local machine, and USER_ID_N and USER_GID_N are the local user’s numeric user and group IDs. You can locate the IDs with the command:

id


You should see entries like this:

uid=1000(jack) gid=1000(jack)


In the above example, the user ID is 1000 and the group ID is also 1000.
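To avoid hand-editing mistakes, the fstab entry can be assembled from the account's real IDs and then pasted in. This sketch just prints the line; REMOTE_USER and SERVER are placeholders you would substitute with your own values.

```shell
# Assemble the fstab entry from the local account's real uid/gid.
# REMOTE_USER and SERVER are placeholders -- substitute your own values.
REMOTE_USER=alice
SERVER=192.168.1.20
LOCAL_USER=$(id -un)
fstab_line=$(printf '%s@%s:/srv/data /home/%s/data_mount fuse.sshfs x-systemd.automount,_netdev,user,idmap=user,transform_symlinks,identityfile=/home/%s/.ssh/id_rsa,allow_other,default_permissions,uid=%s,gid=%s 0 0' \
  "$REMOTE_USER" "$SERVER" "$LOCAL_USER" "$LOCAL_USER" "$(id -u)" "$(id -g)")
echo "$fstab_line"
```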

Save the file and test the mount with:

mount -a


If you receive no errors, all is well.

There is one caveat to this. During the boot process, the mount will fail because it will be attempted before networking is brought up. Because of this, after a reboot on the local machine, you’ll have to open a terminal window and mount the SSHFS directory with the command:

mount -a


Once you’ve done that, you’re ready to use the remote directory as if it were local.


Are Mobile Devices Less Secure than PCs?

Posted on August 4, 2024 by Maq Verma

Are smartphones less secure than PCs? The answer to that is, they’re different. They face different security threats. Yet they certainly share one thing in common — they both need protection.

So, what makes a smartphone unique when it comes to security? And how do you go about protecting it? We’ll cover both here.

Apps, spam texts, and other smartphone vulnerabilities

Several facts of life about smartphones set them apart when it comes to keeping your devices safer. A quick rundown looks like this:

First off, people keep lots of apps on their phones. Old ones, new ones, ones they practically forgot they had. The security issue that comes into play there is that any app on a phone is subject to vulnerabilities.

A vulnerability in just one of the dozens of apps on a phone can lead to problems. The adage of “the weakest link” applies here. The phone is only as secure as its least secure app. And that goes for the phone’s operating system as well.

Additionally, app permissions can also introduce risks. Apps often request access to different parts of your phone to work — such as when a messenger app asks for access to contacts and photos. In the case of malicious apps, they’ll ask for far more permissions than they need. A classic example involves the old “flashlight apps” that invasively asked for a wide swath of permissions. That gave the hackers all kinds of info on users, including things like location info. Today, the practice of malicious, permission-thirsty apps continues with wallpaper apps, utility apps, games, and more.

As for other malicious apps, sometimes people download them without knowing. This often happens when shopping in third-party app stores, yet it can happen in legit app stores as well — despite rigorous review processes from Apple and Google. Sometimes, hackers sneak them through the review process for approval. These apps might include spyware, ransomware, and other forms of malware.

Many people put their smartphones to personal and professional use. That might mean the phone has access to corporate apps, networks, and data. If the phone gets compromised, those corporate assets might get compromised too. And it can work in the other direction. A corporate compromise might affect an employee’s smartphone.

More and more, our phones are our wallets. Digital wallets and payment apps have certainly gained popularity. They speed up checkout and make splitting meals with friends easy. That makes the prospect of a lost or stolen phone all the more serious. An unsecured phone in the hands of another is like forking over your wallet.

Lastly, spam texts. Unique to phones are the sketchy links that crop up in texting and messaging apps. These often lead to scam sites and other sites that spread malware.

With a good sense of what makes securing your smartphone unique, let’s look at several steps you can take to protect it.

How to protect your smartphone

  1. Update your phone’s apps and operating system

Keeping your phone’s apps and operating system up to date can greatly improve your security. Updates can fix vulnerabilities that hackers rely on to pull off their malware-based attacks. It’s another tried-and-true method of keeping yourself safer — and for keeping your phone running great too.

  2. Lock your phone

With all that you keep and conduct on your phone, a lock is a must. Whether you have a PIN, passcode, or facial recognition available, put it into play. The same goes for things like your payment, banking, and financial apps. Ensure you have them locked too.

  3. Avoid third-party app stores

As mentioned above, app stores have measures in place to review and vet apps that help ensure they’re safe and secure. Third-party sites might very well not, and they might intentionally host malicious apps as part of a front. Further, legitimate app stores are quick to remove malicious apps from their stores once discovered, making shopping there safer still.

  4. Review apps carefully

Check out the developer — have they published several other apps with many downloads and good reviews? A legit app typically has many reviews. In contrast, malicious apps might have only a handful of (phony) five-star reviews. Lastly, look for typos and poor grammar in both the app description and screenshots. They could be a sign that a hacker slapped the app together and quickly deployed it.

  5. Go with a strong recommendation

Better than combing through user reviews yourself is getting a recommendation from a trusted source, like a well-known publication or app store editors themselves. In this case, much of the vetting work has been done for you by an established reviewer. A quick online search like “best fitness apps” or “best apps for travelers” should turn up articles from legitimate sites that can suggest good options and describe them in detail before you download.

  6. Keep an eye on app permissions

Another way hackers weasel their way into your device is by getting permissions to access things like your location, contacts, and photos — and they’ll use malicious apps to do it. If an app asks for way more than you bargained for, like a simple puzzle game that asks for access to your camera or microphone, it might be a scam. Delete the app.

  7. Learn how to remotely lock or erase your smartphone

So what happens if your phone ends up getting lost or stolen? A combination of device tracking, device locking, and remote erasing can help protect your phone and the data on it. Different device manufacturers have different ways of going about it, but the result is the same — you can prevent others from using your phone. You can even erase it if you’re truly worried that it’s gone for good. Apple provides iOS users with a step-by-step guide, and Google offers a guide for Android users as well.

  8. Protect your phone and block sketchy links

Comprehensive online protection software can secure your phone in the same ways that it secures your laptops and computers. Installing it can protect your privacy, and keep you safe from attacks on public Wi-Fi, just to name a few things it can do. Ours also includes Text Scam Detector that blocks sketchy links in texts, messages, and email before they do you any harm. And if you tap that link by mistake, Text Scam Detector still blocks it.

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, malware, Spyware

Microsoft Says Azure Outage Caused by DDoS Attack Response

Posted on August 4, 2024 by Maq Verma

Microsoft’s response to a distributed denial-of-service (DDoS) attack appears to have caused Azure service outages that impacted many customers.

Microsoft explained on its Azure status page that a “subset of customers” experienced issues connecting to services such as Azure App Services, Application Insights, Azure IoT Central, Azure Log Search Alerts, and Azure Policy, as well as the Azure portal and some Microsoft 365 and Purview services.

According to the BBC, the outage, which lasted roughly 10 hours, impacted water utilities, courts, banks, and other types of organizations. 

Microsoft said it initially saw an unexpected usage spike that resulted in Azure Front Door and Azure Content Delivery Network components “performing below acceptable thresholds”, which led to errors, timeouts and latency issues. 

An investigation showed that a DDoS attack launched against its systems triggered protection mechanisms, but an implementation bug in those defenses caused the attack’s impact to be amplified rather than mitigated. 

The tech giant has promised to publish a preliminary incident review within 72 hours and a more detailed review within two weeks. 

It’s unclear who is behind the DDoS attack on Microsoft services, but it would not be surprising if multiple hacktivist groups take credit for it in an effort to boost their reputation. 

The incident comes just days after millions of computers worldwide were disrupted by a bad update rolled out by cybersecurity firm CrowdStrike. 

A vast majority of devices impacted by the CrowdStrike incident were restored within one week, but insurers predict billions in losses for the security firm’s major customers. CrowdStrike is also facing lawsuits over the incident.  

Posted in Cyber Attacks | Tagged Cyber Attacks, Data Security, DDOS, malware

Cost of Data Breach in 2024: $4.88 Million, Says Latest IBM Study

Posted on August 4, 2024 by Maq Verma

The bald figure of $4.88 million tells us little about the state of security. But the detail contained within the latest IBM Cost of Data Breach Report highlights areas we are winning, areas we are losing, and the areas we could and should do better.

“The real benefit to industry,” explains Sam Hector, IBM’s cybersecurity global strategy leader, “is that we’ve been doing this consistently over many years. It allows the industry to build up a picture over time of the changes that are happening in the threat landscape and the most effective ways to prepare for the inevitable breach.”

IBM goes to considerable lengths to ensure the statistical accuracy of its report (PDF). More than 600 companies were queried across 17 industry sectors in 16 countries. The individual companies change year on year, but the size of the survey remains consistent (the major change this year is that ‘Scandinavia’ was dropped and ‘Benelux’ added). The details help us understand where security is winning, and where it is losing. Overall, this year’s report leads toward the inevitable assumption that we are currently losing: the cost of a breach has increased by approximately 10% over last year.

While this generality may be true, it is incumbent on each reader to effectively interpret the devil hidden within the detail of statistics – and this may not be as simple as it seems. We’ll highlight this by looking at just three of the many areas covered in the report: AI, staff, and ransomware.

AI is given detailed discussion, but it is a complex area that is still only nascent. AI currently comes in two basic flavors: machine learning built into detection systems, and the use of proprietary and third-party gen-AI systems. The first is the simplest, the easiest to implement, and the most easily measurable. According to the report, companies that use ML in detection and prevention incurred an average of $2.2 million less in breach costs compared to those who did not use ML.

The second flavor – gen-AI – is more difficult to assess. Gen-AI systems can be built in house or acquired from third parties. They can also be used by attackers and attacked by attackers – but it is still primarily a future rather than current threat (excluding the growing use of deepfake voice attacks that are relatively easy to detect).

Nevertheless, IBM is concerned. “As generative AI rapidly permeates businesses, expanding the attack surface, these expenses will soon become unsustainable, compelling business to reassess security measures and response strategies. To get ahead, businesses should invest in new AI-driven defenses and develop the skills needed to address the emerging risks and opportunities presented by generative AI,” comments Kevin Skapinetz, VP of strategy and product design at IBM Security.

But we don’t yet understand the risks (although nobody doubts they will increase). “Yes, generative AI-assisted phishing has increased, and it’s become more targeted as well – but fundamentally it remains the same problem we’ve been dealing with for the last 20 years,” said Hector.

Part of the problem for in-house use of gen-AI is that accuracy of output is based on a combination of the algorithms and the training data employed. And there is still a long way to go before we can achieve consistent, believable accuracy. Anyone can check this by asking Google Gemini and Microsoft Copilot the same question at the same time. The frequency of contradictory responses is disturbing.

The report calls itself “a benchmark report that business and security leaders can use to strengthen their security defenses and drive innovation, particularly around the adoption of AI in security and security for their generative AI (gen AI) initiatives.” This may be an acceptable conclusion, but how it is achieved will need considerable care.

Our second ‘case-study’ is around staffing. Two items stand out: the need for (and lack of) adequate security staff levels, and the constant need for user security awareness training. Both are long term problems, and neither are solvable. “Cybersecurity teams are consistently understaffed. This year’s study found more than half of breached organizations faced severe security staffing shortages, a skills gap that increased by double digits from the previous year,” notes the report.

Security leaders can do nothing about this. Staff levels are imposed by business leaders based on the current financial state of the business and the wider economy. The ‘skills’ part of the skills gap continually changes. Today there is a greater need for data scientists with an understanding of artificial intelligence – and there are very few such people available.

User awareness training is another intractable problem. It is undoubtedly necessary – and the report quotes ‘employee training’ as the #1 factor in decreasing the average cost of a breach, “specifically for detecting and stopping phishing attacks”. The problem is that training always lags the types of threat, which change faster than we can train employees to detect them. Right now, users might need additional training in how to detect the greater number of more compelling gen-AI phishing attacks.

Our third case study revolves around ransomware. IBM says there are three types: destructive (costing $5.68 million), data exfiltration ($5.21 million), and ransomware ($4.91 million). Notably, all three are above the overall mean figure of $4.88 million.

The biggest increase in cost has been in destructive attacks. It is tempting to link destructive attacks to global geopolitics since criminals focus on money while nation states focus on disruption (and also theft of IP, which incidentally has also increased). Nation state attackers can be hard to detect and prevent, and the threat will probably continue to expand for as long as geopolitical tensions remain high.

But there is one potential ray of hope found by IBM for encryption ransomware: “Costs dropped dramatically when law enforcement investigators were involved.” Without law enforcement involvement, the cost of such a ransomware breach is $5.37 million, while with law enforcement involvement it drops to $4.38 million.

These costs do not include any ransom payment. However, 52% of encryption victims reported the incident to law enforcement, and 63% of those did not pay a ransom. The argument in favor of involving law enforcement in a ransomware attack is compelling by IBM’s figures. “That’s because law enforcement has developed advanced decryption tools that help victims recover their encrypted files, while it also has access to expertise and resources in the recovery process to help victims perform disaster recovery,” commented Hector.

Our analysis of aspects of the IBM study is not intended as any form of criticism of the report. It is a valuable and detailed study on the cost of a breach. Rather we hope to highlight the complexity of finding specific, pertinent, and actionable insights within such a mountain of data. It is worth reading and finding pointers on where individual infrastructure might benefit from the experience of recent breaches. The simple fact that the cost of a breach has increased by 10% this year suggests that this should be urgent.

Posted in Data Breaches | Tagged Cyber Attacks, Data Breach, Data Security

DigiCert Revoking 83,000 Certificates of 6,800 Customers

Posted on August 4, 2024 by Maq Verma

DigiCert has started revoking thousands of certificates impacted by a recently discovered verification issue, but some customers in critical infrastructure and other sectors are asking for more time. 

The certificate authority (CA) informed customers on July 29 of an incident related to domain validation, saying that it needs to revoke some certificates within 24 hours due to strict CA/Browser Forum (CABF) rules. 

The company initially said roughly 0.4% of applicable domain validations were impacted. A DigiCert representative clarified in discussions with stakeholders that 83,267 certificates and 6,807 subscribers are affected.

DigiCert said some of the impacted customers were able to quickly reissue their certificates, but others would not be able to do so within the 24-hour time frame. 

“Unfortunately, many other customers operating critical infrastructure, vital telecommunications networks, cloud services, and healthcare industries are not in a position to be revoked without critical service interruptions. While we have deployed automation with several willing customers, the reality is that many large organizations cannot reissue and deploy new certificates everywhere in time,” said Jeremy Rowley, CISO at DigiCert.

DigiCert said in an updated notification that it has been working with browser representatives and customers in an effort to delay revocations under exceptional circumstances in order to avoid disruption to critical services. 

However, the company highlighted that “all certificates impacted by this incident, regardless of circumstances, will be revoked no later than Saturday, August 3rd 2024, 19:30 UTC.”

Rowley noted that some customers have initiated legal action against DigiCert in an attempt to block the revocation of certificates.

The certificates are being revoked due to an issue related to the process used by DigiCert to validate that a customer requesting a TLS certificate for a domain is actually the owner or administrator of that domain. 

One option is for customers to add a DNS CNAME record with a random value provided by DigiCert to their domain. The random value provided by DigiCert is prefixed by an underscore character to prevent collisions between the value and the domain name. However, the underscore prefix was not added in some cases since 2019.
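The collision-avoidance role of the underscore prefix is easy to illustrate. Below is a minimal, hypothetical Python sketch — the function names are illustrative and not DigiCert's actual tooling — of building the validation label and the compliance check that was skipped in some cases:

```python
def build_validation_label(random_value: str) -> str:
    """Build an underscore-prefixed DNS label for a validation value.

    Hostname labels may not begin with an underscore, so a "_"-prefixed
    label can never collide with a real hostname on the domain.
    """
    return "_" + random_value


def is_compliant_label(label: str) -> bool:
    """Check for the underscore prefix that CABF-compliant random-value
    labels must carry -- the check missing in the affected validations."""
    return label.startswith("_")
```

A certificate authority could run such a check over issued validation records to find the non-compliant ones, which is essentially what the incident review required.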

In order to comply with CABF rules, DigiCert has to revoke certificates with an issue in their domain validation within 24 hours, without exception. 

Andrew Ayer, founder of SSLMate and an expert in digital certificates, believes that DigiCert’s public notification about this incident “gets the security impact of the noncompliance completely wrong”.

“[…] this is truly a security-critical incident, as there is a real risk […] that this flaw could have been exploited to get unauthorized certificates. Revocation of the improperly validated certificates is security-critical,” Ayer said.

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, malware, Spyware

The European Union’s World-First Artificial Intelligence Rules Are Officially Taking Effect

Posted on August 4, 2024 by Maq Verma

The European Union’s world-first artificial intelligence law formally took effect on Thursday, marking the latest milestone in the bloc’s efforts to regulate the technology.

Officials say the Artificial Intelligence Act will protect the “fundamental rights” of citizens in the 27-nation bloc while also encouraging investment and innovation in the booming AI industry.

Years in the making, the AI Act is a comprehensive rulebook for governing AI in Europe, but it could also act as a guidepost for other governments still scrambling to draw up guardrails for the rapidly advancing technology.

The AI Act covers any product or service offered in the EU that uses artificial intelligence, whether it’s a platform from a Silicon Valley tech giant or a local startup. The restrictions are based on four levels of risk, and the vast majority of AI systems are expected to fall under the low-risk category, such as content recommendation systems or spam filters.

“The European approach to technology puts people first and ensures that everyone’s rights are preserved,” European Commission Executive Vice President Margrethe Vestager said. “With the AI Act, the EU has taken an important step to ensure that AI technology uptake respects EU rules in Europe.”

The provisions will come into force in stages, and Thursday’s implementation date starts the countdown for when they’ll kick in over the next few years.

AI systems that pose “unacceptable risk,” such as social scoring systems that influence how people behave, some types of predictive policing and emotion recognition systems in schools and workplaces, will face a blanket ban by February.

Rules covering so-called general-purpose AI models like OpenAI’s GPT-4 system will take effect by August 2025.

Brussels is setting up a new AI Office that will act as the bloc’s enforcer for the general purpose AI rules.

OpenAI said in a blog post that it’s “committed to complying with the EU AI Act and we will be working closely with the new EU AI Office as the law is implemented.”

By mid-2026, the complete set of regulations, including restrictions on high-risk AI such as systems that decide who gets a loan or that operate autonomous robots, will be in force.

There’s also a fourth category for AI systems that pose a limited risk and face transparency obligations. Users of chatbots must be informed that they’re interacting with a machine, and AI-generated content like deepfakes will need to be labelled.

Companies that don’t comply with the rules face fines worth as much as 7% of their annual global revenue.

Posted in Vulnerability | Tagged Cyber Attacks, Data Security, malware

Sitting Ducks DNS attacks let hackers hijack over 35,000 domains

Posted on August 4, 2024 by Maq Verma

Threat actors have hijacked more than 35,000 registered domains in so-called Sitting Ducks attacks that allow claiming a domain without having access to the owner’s account at the DNS provider or registrar.

In a Sitting Ducks attack, cybercriminals exploit configuration shortcomings at the registrar level and insufficient ownership verification at DNS providers.

Researchers at DNS-focused security vendor Infoblox and at firmware and hardware protection company Eclypsium discovered that there are more than a million domains that can be hijacked every day via the Sitting Ducks attacks.

Multiple Russian cybercriminal groups have been using this attack vector for years and leveraged the hijacked domains in spam campaigns, scams, malware delivery, phishing, and data exfiltration. 

Sitting Ducks details

Although the issues that make Sitting Ducks possible were first documented in 2016 [1, 2] by Matthew Bryant, a security engineer at Snap, the attack vector continues to be an easier way to hijack domains than other better-known methods.

For the attack to be possible, the following conditions are required:

– registered domain either uses or delegates authoritative DNS services to a provider other than the registrar

– the authoritative name server of the record cannot resolve queries because it lacks the info about the domain (lame delegation)

– the DNS provider needs to allow claiming a domain without properly verifying ownership or requiring access to the owner’s account

Variations of the attack include partially lame delegation (not all name servers are configured incorrectly) and redelegation to another DNS provider. However, if lame delegation and exploitable provider conditions are met, the domain can be hijacked.
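The lame-delegation precondition above can be checked mechanically. This is a minimal Python sketch under an assumed data model (the function name and inputs are illustrative, not a real scanner): given the name servers delegated at the registrar and the zones each server actually answers for authoritatively, it flags fully or partially lame delegations.

```python
def find_lame_delegations(delegated_ns, zones_served):
    """Flag domains whose delegated name servers do not host the zone.

    delegated_ns: {domain: [name servers]} as published at the registrar.
    zones_served: {name server: set of domains it answers for}.
    Returns {domain: [lame name servers]} -- the precondition
    for a Sitting Ducks takeover.
    """
    lame = {}
    for domain, servers in delegated_ns.items():
        missing = [ns for ns in servers
                   if domain not in zones_served.get(ns, set())]
        if missing:
            lame[domain] = missing
    return lame
```

In practice the `zones_served` side would be populated by querying each delegated server directly and checking whether it responds authoritatively for the domain rather than with a REFUSED or SERVFAIL answer.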

Prerequisites diagram
Source: Infoblox

Infoblox explains that attackers can use the Sitting Ducks method on domains that use authoritative DNS services from a provider that is different from the registrar, such as a web hosting service.

If the authoritative DNS or web hosting service for the target domain expires, an attacker can simply claim it after creating an account with the DNS service provider.

The threat actor can now set up a malicious website under the domain and configure DNS settings to resolve IP address record requests to the fake address; and the legitimate owner won’t be able to modify the DNS records.

“Sitting Ducks” overview
Source: Infoblox

Attacks in the wild

Infoblox and Eclypsium report that they have observed multiple threat actors exploiting the Sitting Ducks (or Ducks Now Sitting – DNS) attack vector since 2018 and 2019.

Since then, there have been at least 35,000 domain hijacking cases using this method. Typically, the cybercriminals held the domains for a short period, but there were some instances where they kept them for up to a year.

There have also been cases where the same domain was successively hijacked by multiple threat actors, each of whom used it in their operations for one to two months before passing it on.

GoDaddy is confirmed as a victim of Sitting Ducks attacks, but the researchers say there are six DNS providers who are currently vulnerable.

The observed clusters of activity leveraging Sitting Ducks are summarized as follows:

  • “Spammy Bear” – Hijacked GoDaddy domains in late 2018 for use in spam campaigns.
  • “Vacant Viper” – Started using Sitting Ducks in December 2019 and has hijacked around 2,500 domains yearly since then, using them in the 404TDS system that distributes IcedID and to set up command and control (C2) domains for malware.
  • “VexTrio Viper” – Started using Sitting Ducks in early 2020 to utilize the domains in a massive traffic distribution system (TDS) that facilitates the SocGholish and ClearFake operations.
  • Unnamed actors – Several smaller and unknown threat actors creating TDS, spam distribution, and phishing networks.

Defense tips

Domain owners should regularly review their DNS configurations for lame delegations, especially on older domains, and update the delegation records at the registrar or authoritative name server with proper, active DNS services.

Registrars are advised to perform proactive checks for lame delegations and alert owners. They should also ensure that a DNS service is established before propagating name server delegations.

Ultimately, regulators and standards bodies must develop long-term strategies to address DNS vulnerabilities and press DNS providers under their jurisdictions to take more action to mitigate Sitting Ducks attacks.

Posted in Cyber Attacks | Tagged Cyber Attacks, Data Security

DuckDuckGo blocked in Indonesia over porn, gambling search results

Posted on August 4, 2024 by Maq Verma

Privacy-focused search engine DuckDuckGo has been blocked in Indonesia by its government after citizens reportedly complained about pornographic and online gambling content in its search results.

The government’s choice to block DuckDuckGo isn’t surprising considering the cultural and religious context, with Indonesia being a Muslim country where gambling is prohibited and porn is viewed as morally unacceptable.

In January 2024, Indonesia announced it blocked nearly 600,000 online gambling portals and took action against 5,000 bank accounts that engaged with them.

The government has previously blocked numerous pornography sites, Reddit, and Vimeo, and imposed temporary or partial restrictions on Tumblr, Telegram, TikTok, Netflix, and Badoo.

DuckDuckGo has now confirmed to BleepingComputer that Indonesia blocked its search engine in the country and that it has no means to respond to it.

“We can confirm that DuckDuckGo has been blocked in Indonesia due to their censorship policies. Unfortunately, there is no current path to being unblocked, similar to how we’ve been blocked in China for about a decade now,” DuckDuckGo told BleepingComputer.

At the same time, Google Search remains accessible in Indonesia, which suggests that either the tech giant has implemented effective self-censorship mechanisms for its local search engine or its size makes blocking too disruptive for internet usage in the country.

Indonesians have resorted to using VPN software to bypass the government’s restrictions. However, the Indonesian government plans to block free VPNs, making gaining access to blocked sites costly.

Free VPNs next

Virtual Private Network (VPN) tools are commonly used to bypass censorship imposed by governments and internet service providers.

When using VPNs, users can make connections from other countries to once again access DuckDuckGo, but free offerings may soon be removed.

Minister of Communication and Information Budi Arie Setiadi stated that the government intends to restrict access to free VPN tools, as they know these are used to access blocked online gambling portals.

“Yesterday, Mr. Hokky (Ministry’s Director General of Informatics Applications) had a meeting with Mr. Wayan (Ministry’s Director General of Postal and Information Technology Operations), and we will shut down free VPNs to reduce access to networks for the general public to curb the spread of online gambling,” stated Setiadi on June 31, 2024.

“I specifically have to include the issue of online gambling to make it clear that this is the darkest side of digitalization.”

The same ministry announcement highlighted the risks of free VPN services, underlining personal data theft, malware infections, and making internet connectivity slow or unreliable.

Posted in Data Breaches, Vulnerability | Tagged Cyber Attacks, Data Security, malware

CrowdStrike sued by investors over massive global IT outage

Posted on August 4, 2024 by Maq Verma

Cybersecurity company CrowdStrike has been sued by investors who say it provided false claims about its Falcon platform after a bad security update led to a massive global IT outage causing the stock price to tumble almost 38%.

The plaintiffs claim that the massive IT outage that occurred on July 19, 2024, proves CrowdStrike’s claims that their cybersecurity platform is thoroughly tested and validated are false.

As a result of this incident and its aftermath, CrowdStrike’s stock price has tumbled almost 38% from $343 on July 18 to $214, causing significant financial losses to investors.

The class action lawsuit submitted by the Plymouth County Retirement Association in the U.S. District Court of Austin, Texas, seeks compensatory damages for these losses.

A bad update causes a global IT outage

On July 19, CrowdStrike pushed out a faulty Falcon sensor update to Windows devices running the security software. The update slipped past CrowdStrike’s internal tests due to a bug in its content validator and inadequate testing procedures.

The update was received by 8,500,000 Windows devices, if not more, causing an out-of-bounds memory read when processed by Falcon and leading to the operating system crashing with a Blue Screen of Death (BSOD).

CrowdStrike is widely used in enterprises, including airports, hospitals, government organizations, the media, and financial firms, causing catastrophic, costly, and even dangerous IT outages.

As restoring systems required staff to remove the faulty update manually, it took days for some companies to resume normal operations, leading to extended outages and delays.

While most have returned to normal operations, the fallout from the incident continues to unfold on multiple levels, including elevated cybercrime activity, loss of trust, and litigation threats.

According to the plaintiffs, the faulty Falcon update proved that contrary to CrowdStrike’s assurances around the diligence in its procedures and the efficacy and reliability of the Falcon platform, updates were inadequately tested and controlled, and the risk of outages is high.

The class action alleges that stockholders were defrauded by CrowdStrike’s knowingly false statements about the quality of its products and procedures.

“Because of their positions and access to material, nonpublic information, the Individual Defendants knew or recklessly disregarded that the adverse facts specified herein had not been disclosed to and were being concealed from the investing public and that the positive representations that were being made were false and misleading.” – Class action document.

To reflect the extent of the losses, the lawsuit mentions that the CrowdStrike stock price fell by 11% on the day of the incident, then another 13.5% on July 22, when Congress called CEO George Kurtz to testify, and another 10% on July 29 following news that Delta Airlines, one of the impacted entities, had hired an attorney to seek damages.

The plaintiff alleges violations of Sections 10(b) and 20(a) of the Exchange Act and seeks compensation.

Financial impact

The IT outage caused by the CrowdStrike Falcon update has caused massive financial losses to impacted organizations, with many of them exploring litigation pathways to get some of it back.

Delta Airlines CEO Ed Bastian previously stated that the outage forced the cancellation of 2,200 flights for the company, resulting in losses estimated at $500 million.

The firm has already hired a law firm that will seek compensation from CrowdStrike and Microsoft, which is now in the crosshairs despite not being responsible for the incident.

Market analysts estimate that the outage has caused big enterprises $5.4 billion in losses.

A report by Guy Carpenter projects the estimated insured losses resulting from the bad Falcon update to be between $300 million and $1 billion, while CyberCube has raised the figure to $1.5 billion.

Posted in Data Breaches | Tagged Data Security, Safe Delete, Safe Erase, Spyware

Hackers breach ISP to poison software updates with malware

Posted on August 4, 2024 by Maq Verma

A Chinese hacking group tracked as StormBamboo has compromised an undisclosed internet service provider (ISP) to poison automatic software updates with malware.

Also tracked as Evasive Panda, Daggerfly, and StormCloud, this cyber-espionage group has been active since at least 2012, targeting organizations across mainland China, Hong Kong, Macao, Nigeria, and various Southeast and East Asian countries.

On Friday, Volexity threat researchers revealed that the Chinese cyber-espionage gang had exploited insecure HTTP software update mechanisms that didn’t validate digital signatures to deploy malware payloads on victims’ Windows and macOS devices.

“When these applications went to retrieve their updates, instead of installing the intended update, they would install malware, including but not limited to MACMA and POCOSTICK (aka MGBot),” cybersecurity company Volexity explained in a report published on Friday.

To do that, the attackers intercepted and modified victims’ DNS requests and poisoned them with malicious IP addresses. This delivered the malware to the targets’ systems from StormBamboo’s command-and-control servers without requiring user interaction.

For instance, they took advantage of 5KPlayer requests to update the youtube-dl dependency to push a backdoored installer hosted on their C2 servers.
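The defense Volexity points to (validating digital signatures, or at minimum pinning a known-good digest and fetching over HTTPS) can be sketched in a few lines. This is an illustrative sketch, not 5KPlayer's actual update code; the payload bytes below are invented:

```python
import hashlib

def verify_update(payload: bytes, expected_sha256: str) -> bool:
    """Reject any update payload whose digest does not match the pinned value."""
    return hashlib.sha256(payload).hexdigest() == expected_sha256

# Simulate the check the abused updaters skipped: the legitimate payload
# passes, while an attacker-substituted binary (as in the DNS-poisoning
# scenario) fails and is never installed.
genuine = b"legitimate update bytes"
pinned = hashlib.sha256(genuine).hexdigest()

assert verify_update(genuine, pinned)
assert not verify_update(b"backdoored installer", pinned)
```

Digest pinning alone does not replace proper signature verification, but even this minimal check would have stopped the swapped installers described above.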

​After compromising the target’s systems, the threat actors installed a malicious Google Chrome extension (ReloadText), which allowed them to harvest and steal browser cookies and mail data.

StormBamboo attack flow (Volexity)

“Volexity observed StormBamboo targeting multiple software vendors, who use insecure update workflows, using varying levels of complexity in their steps for pushing malware,” the researchers added.

“Volexity notified and worked with the ISP, who investigated various key devices providing traffic-routing services on their network. As the ISP rebooted and took various components of the network offline, the DNS poisoning immediately stopped.”

In April 2023, ESET threat researchers also observed the hacking group deploying the Pocostick (MGBot) Windows backdoor by abusing the automatic update mechanism for the Tencent QQ messaging application in attacks targeting international NGOs (non-governmental organizations).

Almost a year later, in July 2024, Symantec’s threat hunting team spotted the Chinese hackers targeting an American NGO in China and multiple organizations in Taiwan with new Macma macOS backdoor and Nightdoor Windows malware versions.

In both cases, although the attackers’ skill was evident, the researchers believed it was either a supply chain attack or an adversary-in-the-middle (AITM) attack but weren’t able to pin down the exact attack method.

Posted in Cyber Attacks. Tagged: Cyber Attacks, Data Security, malware.

Linux kernel impacted by new SLUBStick cross-cache attack

Posted on August 4, 2024 by Maq Verma

A novel Linux kernel cross-cache attack named SLUBStick achieves a 99% success rate in converting a limited heap vulnerability into an arbitrary memory read-and-write capability, letting researchers elevate privileges or escape containers.

The discovery comes from a team of researchers at the Graz University of Technology, who demonstrated the attack on Linux kernel versions 5.9 and 6.2 (the latest at the time) using nine existing CVEs on both 32-bit and 64-bit systems, indicating high versatility.

Additionally, the attack worked with all modern kernel defenses active, including Supervisor Mode Execution Prevention (SMEP), Supervisor Mode Access Prevention (SMAP), and Kernel Address Space Layout Randomization (KASLR).

SLUBStick will be presented in detail at the upcoming USENIX Security Symposium later this month. The researchers will showcase privilege escalation and container escape in the latest Linux kernel with state-of-the-art defenses enabled.

In the meantime, the published technical paper contains all the details about the attack and the potential exploitation scenarios.

SLUBStick details

One way the Linux kernel manages memory efficiently and securely is by allocating and de-allocating memory chunks, called “slabs,” for different types of data structures.

Flaws in this memory management process may allow attackers to corrupt or manipulate data structures in what are called cross-cache attacks. However, these are effective roughly 40% of the time and typically lead to system crashes sooner or later.

SLUBStick exploits a heap vulnerability, such as a double-free, use-after-free, or out-of-bounds write, to manipulate the memory allocation process.

CVEs successfully used in the researchers’ experiments
Source: stefangast.eu

Next, it uses a timing side channel to determine the exact moment of memory chunk allocation/deallocation, allowing the attacker to predict and control memory reuse.

Using this timing information raises the success rate of the cross-cache exploitation to 99%, making SLUBStick very practical.

Measured success rates
Source: stefangast.eu

The conversion of the heap flaw into arbitrary memory read-and-write primitive is done in three steps:

  • Free specific memory chunks and wait for the kernel to reuse them.
  • Reallocate these chunks in a controlled manner, ensuring they get repurposed for critical data structures like page tables.
  • Once reclaimed, the attacker overwrites the page table entries, getting the ability to read and write any memory location.
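The three steps above can be illustrated with a toy user-space allocator. This is purely conceptual (the class and field names are invented): the real attack manipulates kernel slab caches and page-table entries, not Python objects, but the reuse principle is the same.

```python
# A deliberately simplified illustration of cross-cache chunk reuse.
# The "allocator" hands out slot indices and recycles freed ones,
# the way a slab cache recycles memory chunks.
class ToySlabAllocator:
    def __init__(self):
        self.slots = []        # backing storage
        self.free_list = []    # indices available for reuse

    def alloc(self, obj):
        if self.free_list:                 # reuse a freed chunk first
            idx = self.free_list.pop()
            self.slots[idx] = obj
            return idx
        self.slots.append(obj)
        return len(self.slots) - 1

    def free(self, idx):
        self.free_list.append(idx)

heap = ToySlabAllocator()

# Step 1: an attacker-controlled object occupies a chunk, then frees it.
victim = heap.alloc({"type": "attacker buffer"})
heap.free(victim)

# Step 2: the allocator reuses that exact chunk for a critical structure,
# the analogue of a page table being reclaimed.
page_table = heap.alloc({"type": "page table", "entry": "benign"})
assert page_table == victim   # same chunk, now holding privileged data

# Step 3: a dangling reference to the old chunk now writes into the new
# structure, the toy analogue of gaining arbitrary read/write.
heap.slots[victim]["entry"] = "attacker-controlled mapping"
```

The timing side channel in the real attack exists precisely to make step 2 land reliably: the attacker must know exactly when the freed chunk will be handed back out.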
SLUBStick overview
Source: stefangast.eu

Real-world impact

As with most attacks involving a side channel, SLUBStick requires local access on the target machine with code execution capabilities. Furthermore, the attack requires the presence of a heap vulnerability in the Linux kernel, which will then be used to gain read and write access to the memory.

While this may make the attack appear impractical, it still offers real benefits to attackers.

For attackers who already have code execution, SLUBStick makes it possible to achieve privilege escalation, bypass kernel defenses, perform container escapes, or serve as a link in a complex attack chain.

Privilege escalation can be used to elevate privileges to root, allowing unlimited operations, while container escape can be used to break from sandboxed environments and access the host system.

Additionally, in the post-exploitation phase, SLUBStick could modify kernel structures or hooks to maintain persistence, making malware harder for defenders to detect.

Tampering with ‘/etc/passwd’ data
Source: stefangast.eu

Those who want to dive deeper into SLUBStick and experiment with the exploits used by the Graz University researchers can find them in the team’s GitHub repository.

Posted in Cyber Attacks. Tagged: Cyber Attacks, Data Security.

Get Certified in Platform Engineering, Starting Aug. 6

Posted on August 3, 2024 - August 3, 2024 by Maq Verma

As platform engineering continues to grow in popularity inside enterprises, an all-new training curriculum is being launched that will provide one of the first platform engineering certifications in the industry.

The training and certification, offered by the global platform engineering developer’s community PlatformEngineering.org, will begin on Aug. 6 and include two courses and a certification exam that will help IT administrators gain new skills in this burgeoning field.

The courses include an eight-week Platform Engineering Fundamentals class and an MVP (Minimum Viable Platform) to IDP (Internal Development Platform) course, followed by a certification exam combining platform engineering theory with hands-on tasks. Each of the courses will be held for eight to 10 weeks with live, interactive, online meetup sessions taught by platform engineering experts and some of the top platform architects in the industry, according to the group. The live nature of the courses will provide instant feedback, back-and-forth Q&As and an up-to-the-minute feel for the instruction. The live sessions are expected to run for up to 2.5 hours per class module.

IT admins can still sign up for the certification courses through Monday, Aug. 5, on the organization’s website.

“The number-one question from the community is ‘How do I become a platform engineer?’” Luca Galante, the lead instructor and vice president of product and growth for platform engineering vendor Humanitec, told The New Stack. “So, we wanted to create a resource for IT admins [based on] the three or four years of really good content we have under our belt.”


The certification course will be a combination of theory and hands-on internal developer platform building. The first course, the fundamentals instruction, is ready to go for the students and includes eight weekly modules on topics from an introduction to platform engineering to how to build an internal developer platform. Other modules include getting started with reference architectures, designing golden paths for developers, finding the right abstractions, infrastructure platform engineering, how to build minimum viable platforms and selling your minimum viable platforms to key stakeholders inside enterprises.

The second course is still being developed, but it will go beyond the basics to teach full rollout strategies and adoption paths across multiple workflows and teams, according to the group.

The fundamentals course is priced at $2,149, but registrants can use the code EARLYBIRD20 when registering to drop the price by 20% to $1,720. Once the development of the second course is completed, Galante said he expects that a special bundle price will be available for the combined courses and the exam for the students in the program.

The curriculum is being built to ensure that students who successfully complete the courses and exam will be able to demonstrate an elevated level of understanding and knowledge of platform engineering, from its theory to firsthand skills and expertise.

At least one other platform engineering course is also available today in the tech marketplace, from training vendor Pluralsight.

What Is Platform Engineering?

Platform engineering is becoming more common as more companies find that it empowers their software developers and accelerates their operational efficiencies. Platform engineering allows IT systems administrators to choose and assemble proven, curated and regularly maintained development applications, and then deliver them to a company’s developers via an all-in-one self-service portal.

By using one of these custom, standardized internal developer platforms (IDPs) and tools inside a company, developers are freed to do their core work activities, rather than spending valuable time searching for applications they need to create and test their code. These IDPs provide developers with the best code-building tools in an easy-to-use environment built to provide automation, standardization and flexibility for busy application developers.

Who Is Interested in Platform Engineering?

Many of the first students coming into the new certification program are individuals who want to learn the skills and fundamentals for their own edification, said Galante. “They are DevOps engineers or some junior engineers that want to have these skills. But we are already seeing teams purchasing the course. And we are getting applicants from large companies where they have systems administrators or IT admins who they want to upskill to become platform engineers or upskill their existing platform team to educate them around best practices.”

The guest lecturers who will participate in the courses include experts like Manuel Pais, co-author of the book, Team Topologies, and Ajay Chankramath, former head of platform engineering at the technology consultancy Thoughtworks.

An Enrollee Shares Why He Is Taking the Certification Course

Giovanny Velez, a platform engineering technical lead for a cloud-based applications vendor in Colombia, told The New Stack that he is taking the certification coursework to gain more experience and understanding of this still-emerging technology from a reliable and knowledgeable source.

“This platform engineering course and certification will significantly boost my professional growth and career prospects and will validate my expertise, increasing my credibility and making me a stronger candidate for advanced roles,” Velez wrote in an email. “It will open new career opportunities and potentially accelerate my path to senior positions.”

Velez said the courses will also teach him industry best practices and innovative tools that will improve his problem-solving abilities and help him stay current with the latest trends in platform engineering. “This knowledge is essential in our team as we are starting to implement the practice, and it helps us to streamline the process,” he said.

Velez, who is paying for the course himself, said he learned of the program after seeing announcements on the PlatformEngineering.org webpage and in its Slack channel.

“The program will also provide valuable networking opportunities, connecting me with other professionals in the field,” he said. “These connections can lead to knowledge sharing and future collaborations. Overall, I think that I will be more prepared to tackle the challenges of modern, cloud native environments and deliver increased value in my role. I see this field growing constantly and I see more job offers in the market.”

As lead for his company’s recently formed platform engineering team, Velez said the company is working to establish communication channels and that he will share his experiences with the course at his company “so the rest of the teams know what we will be doing and how.”

Analyst Reactions to Platform Engineering Training

Two analysts contacted by The New Stack said they believe that platform engineering training and certifications are a smart move for enterprises.

“Training is always smart, particularly if it is done by a company that offers related products and services because they will be up to date on the kind of unique, new and changing problems that IT is dealing with,” said Rob Enderle, principal analyst with the Enderle Group.

Enderle said he often hears from IT managers that employees are learning on the job rather than getting advanced training, which is creating huge deployment and operational problems.

“They need training like this to reduce those problems,” he added. “Learning on the job or from some obsolete lesson plan continues to be problematic. I recommend training from current practitioners so that those learning can hit the ground running and become immediately able to do the job right. It makes for better implementations, and it makes employees more valuable and less likely to get laid off.”

Another analyst, Dan Olds, CEO of Olds Research, agrees.

“The concept of platform engineering is what the hyperscale organizations — think Amazon, Google, and Meta — pioneered to allow them to efficiently operate at vast scale,” said Olds. “The idea behind it is to have a specific set of development tools — and only that set — which pays dividends when it comes to scalability, reliability, support and troubleshooting, to name a few. This is something that is catching on with other large organizations that have lots of applications in flight all the time as it simplifies the development, deployment and support functions.”

Such an approach simplifies the overall application development/deployment and life-cycle management processes for companies, said Olds. “In many organizations today, developers are able to pick and choose from a huge slate of development tools, which can lead to problems down the road when it comes to deployment, troubleshooting and the inevitable changes — unless those functions are performed by the engineer or engineers who actually developed the app. Standardizing development tools ensures that everyone is using the same size building blocks and understands how they fit together.”

Olds said he recommends that enterprise IT shops should investigate platform engineering, along with their counterparts in a company’s business units.

“I can see where this approach would increase the speed of application development and result in smoother deployments,” he said. But at the same time, while this holistic platform engineering approach can be beneficial for organizations, a potential problem could be getting all developers on board with it.

“I think that it might be a hard sell to some developers,” said Olds. “They are experts at the set of tools they use and have reasons why they do not use alternatives. It might take some work to get them out of their comfort zones and get them on board with this new approach.”

Posted in Data Breaches. Tagged: Cyber Attacks, Data Security, Encryption, Ransomware.

Beyond React: How Astro and Its Server Islands Work

Posted on August 3, 2024 - August 3, 2024 by Maq Verma

React and its virtual DOM paradigm have been at the forefront of frontend development for a decade now, but more recent frameworks have swung toward simplicity and web-native features. Astro is a great example, and it now has the support of Netlify, a leading player in the current web landscape.

Earlier this month Netlify announced Astro as its “Official Deployment Partner,” which in practice means it will contribute $12,500 each month “towards the ongoing open source maintenance and development of Astro.”

As Netlify CEO Matt Biilmann noted, Astro was “the first framework to popularize the idea of island architecture, where islands are interactive widgets floating in a sea of otherwise static, lightweight, server-rendered HTML.” As part of the new official partnership, Netlify will help Astro roll out a related new feature: Server Islands. Astro defines this as a “solution to integrate high-performance static HTML and dynamic server-generated components together.”

Astro concept drawing of server islands.

How Is Astro Different to React Frameworks?

We’ll get back to server islands shortly. But first, let’s look at why Astro has become a trending framework. Conceptually, the main difference between Astro and the React-based frameworks that came before it is this: most of the work is done server-side, instead of client-side. Here’s how Astro’s founders explained it in their introductory post, back in June 2021:

“Astro renders your entire site to static HTML during the build. The result is a fully static website with all JavaScript removed from the final page. No monolithic JavaScript application required, just static HTML that loads as fast as possible in the browser regardless of how many UI components you used to generate it.”

Astro is perhaps closest to a static site generator, like Eleventy and Hugo, but it also cunningly incorporates other approaches too — including React itself. “In Astro, you compose your website using UI components from your favorite JavaScript web framework (React, Svelte, Vue, etc),” wrote the founders in 2021. The trick was, that all of that complexity was rendered into HTML in the build phase, meaning it was never foisted onto the user.


Astro creator Fred K. Schott demoed the product in April 2021.

But undoubtedly the key to Astro’s subsequent success is the “islands architecture.” That’s because, for a web application of any scale, chances are JavaScript will be needed at some point. But whereas previous frameworks, like Angular and Next.js, focused on client-side rendering — the so-called single-page application (SPA) approach — Astro wanted to limit client-side rendering to certain parts of an app (the “islands”). While both Angular and Next.js can implement partial hydration and server-side rendering, Astro’s architecture inherently avoids client-side JavaScript by default, unless explicitly required.

“When a component needs some JavaScript, Astro only loads that one component (and any dependencies),” explained the Astro founders. “The rest of your site continues to exist as static, lightweight HTML.”

Astro credited Jason Miller with coining “islands architecture,” pointing to an August 2020 article on Miller’s blog. He in turn credited Etsy frontend architect Katie Sylor-Miller for coining the “Component Islands” pattern in 2019.


There was some skepticism about “web islands” when they were first discussed in 2021.

In April 2021, Astro creator Fred K. Schott demonstrated Astro for the first time. He acknowledged that “partial hydration” (a key part of the islands architecture) had been difficult to achieve in practice to this point. He said that frameworks like Next.js and Gatsby had to try and “take this application and pull it apart” if they wanted to implement partial hydration. That could cause significant problems, so Astro took a completely different approach.

“What we try and do is, by default, it’s all server-rendered,” said Schott in the April 2021 podcast. “So by default, you’re speaking this language of a server-rendered document, and then individual components, instead of being pulled out of an application, are actually being injected into a document. So it’s a totally different kind of static-first approach, where the result is that you actually have to opt into all of your payloads, and everything becomes much lighter as a result.”

Server Islands

Fast forward to 2024 and Astro has become a rising web framework. As well as the technical advantages outlined above, Astro offers a kind of ‘back to basics’ approach to web development that harkens back to early Web 2.0 frameworks, like Ruby on Rails and Django, which were also server-rendered. Now that Astro has become a viable alternative to the likes of Next.js, it is trying to expand its capabilities. Enter “Server Islands.”

In its announcement post, Astro referenced the original islands architecture as “interactive client-side components.” With Server Islands, the idea is to add “dynamic server-generated components” to the mix as well. In practice, you use “server:defer” to defer rendering a particular component until after the page loads. So it’s a kind of caching mechanism, similar perhaps to Next.js “partial prerendering,” or at least “solving the same problem,” as Schott said on X.

Server islands comparison to Next.js tech.

In a recent four-and-a-half-hour podcast with Ryan Carniato, the creator of Solid.js, Astro developer Matthew Phillips discussed Server Islands. “What it really is, is a way to differentiate types of content to run at different times,” he explained, adding later that Server Islands “essentially cache different parts of your page differently.”

Carniato noted that when Astro first came out, they positioned the client-side islands as being “interactive,” whereas the rest of the HTML was “non-interactive.” He pointed out that Astro is now using the word “static” to indicate the non-changing content, and “dynamic” to mean a server-side island — for example, an “x rooms left” button on an Airbnb-type website, which requires checking with a database.

Ryan Carniato and Matthew Phillips discuss server islands.

Note that although Server Islands are different to traditional Astro islands, a component can be both at the same time. Netlify explains this well in a recent blog post:

“It’s worth clarifying that Astro Server Islands are different to standard Astro Islands, which improve performance by allowing you to selectively ‘hydrate’ components, i.e. load JavaScript only for components that need it, instead of making the entire page interactive. In fact, a component can be both an Island and a Server Island!”
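In template terms, opting a component into the new behavior is a single attribute. A minimal, hypothetical sketch (the `Availability` component and page content are invented; `server:defer` and the fallback slot are the mechanisms described in Astro's announcement):

```astro
---
// index.astro -- the page itself is still rendered as static HTML.
import Availability from '../components/Availability.astro';
---
<h1>Beach House</h1>

<!-- This server island is deferred: the static page ships immediately,
     the fallback shows, and the component's HTML is fetched from the
     server after load (e.g. an "x rooms left" check against a database). -->
<Availability server:defer>
  <p slot="fallback">Checking availability…</p>
</Availability>
```

Everything outside the island stays cacheable static HTML, which is the point: only the genuinely dynamic fragment pays the server-render cost on each request.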

Back to the Future

One can’t help but wonder whether Astro is slowly falling into the same trap as Next.js, in that it will get progressively more complex over time. But the concept of Server Islands isn’t that difficult to grok, so currently it feels like a sensible extension of the core “islands architecture” concept of Astro.

What I love about Astro is that it gets us back to that server-based paradigm that we grew up with on the web in the 1990s (at least, those of us of a certain vintage), and which early Web 2.0 frameworks like Ruby on Rails and Django extended. But Astro adds component-based development to the mix, including options to use React and Vue, which makes it an entirely modern approach.

Ultimately, anything that takes the bulk of the JavaScript load away from the client (read: end users) is a good thing, and Astro is certainly helping in that regard.

Posted in Vulnerability. Tagged: Cyber Attacks, Data Security, Scam.

AWS Discontinues Git-Hosting Service CodeCommit

Posted on August 3, 2024 - August 3, 2024 by Maq Verma

As part of an apparent effort to clean house, Amazon Web Services will pull the plug on its git-based source control service, AWS CodeCommit.

“After giving it a lot of thought, we made the decision to discontinue new access to a small number of services, including AWS CodeCommit,” AWS Chief Evangelist Jeff Barr wrote, sharing a prepared message on the X social media service Tuesday.

Although existing customers can continue to use CodeCommit for the time being, AWS has stopped accepting new customers and has not given a date for when the service will be shuttered.

“While we are no longer onboarding new customers to these services, there are no plans to change the features or experience you get today, including keeping them secure and reliable,” Barr wrote in an extended Tweet. “We also support migrations to other AWS or third-party solutions better aligned with your evolving needs.”

The company has posted instructions on how to migrate a codebase from CodeCommit to GitHub (owned by Microsoft), GitLab, and other code repositories.

How to delete a CodeCommit repository — after migrating to another service (AWS)

“After migration, you have the option to continue to use your current AWS CodeCommit repository, but doing so will likely require a regular sync operation between AWS CodeCommit and the new repository provider,” the cloud giant advised.

Market Impact of CodeCommit

Launched in 2015, AWS CodeCommit provided a managed revision control service to host git repositories. It was designed to work with Git-based tools as well as with other AWS tools such as the CodeGuru code analysis software.

Nonetheless, the service had trouble gaining a foothold in the competitive code repository market, despite the natural appeal for AWS shops to stick with AWS for additional services.

TNS analyst Lawrence Hecht noted that, in last year’s JetBrains survey, 3.2% of the developers surveyed used CodeCommit. Even among developers whose companies primarily use AWS for the cloud, only 9% used AWS CodeCommit.

Those same AWS-centric accounts were much more likely to say their company was using Bitbucket (39%), GitLab (45%), or GitHub (63%).

“That is not a large user base, but it will be interesting to see where those people will migrate to,” noted Hecht in a Slack message.


One market where AWS CodeCommit had a strong user base was Japan: in the JetBrains survey, 11% of developers there said their company uses it, a larger share than Bitbucket in that market.

Despite the marginal use of CodeCommit, many observers still had feelings about the matter.

“Given AWS is a competitor to Azure, it’s so odd to see AWS making a business case for their customers to move” to Microsoft, wrote Gergely Orosz, author of the Pragmatic Engineer newsletter, in an X thread.

“To me, this is a clear sign to not adopt any dev-related tooling from AWS. It’s not the business they want to be in,” replied Acorn Labs chief architect and co-founder Darren Shepherd.

For GitLab, AWS shuttering CodeCommit is a sign that the market for code hosting has matured.

“The market has moved from point solutions to platforms that address the entire software development lifecycle,” wrote Emilio Salvador, GitLab vice president for strategy and developer relations, in a statement. “Buyers are now looking to platforms that provide one workflow that unifies developer, security, and operations teams with integrated native security.”

GitLab has set up two options for migrations from CodeCommit, using either self-managed GitLab or through the GitLab.com hosted service.

“Self-managed customers can install, administer, and maintain their GitLab instance on bare metal, VMs, or containers. GitLab.com requires no installation,” Salvador explained.

Other companies in the space are focusing on how their products can help in the transition.

“Migration to a new source code management tool is always challenging. The right considerations and migration strategies can significantly help with the process,” wrote Patrick Wolf, principal product manager at Harness, a DevOps software provider, in an email to TNS. “Some important considerations for selecting a new source code manager are integration with a DevOps platform, security and governance features, and developer productivity features.”

Other AWS services being discontinued are S3 Select, CloudSearch, Cloud9, SimpleDB, Forecast, and Data Pipeline. Earlier this month, the company also gave the axe to its Quantum Ledger Database.

Posted in Data Breaches, Vulnerability. Tagged: Cyber Attacks, Data Security, Scam.

How to use the Windows Registry Editor

Posted on August 3, 2024 - August 3, 2024 by Maq Verma

The Registry Editor is a powerful application that allows you to access and edit the configuration settings of the Windows operating system.

The Windows Registry is a database containing various settings used by the operating system and installed software applications.

However, it is essential to be careful when using the Registry Editor, as making incorrect changes to the Registry can cause serious problems with your system, including preventing it from booting correctly.

Therefore, before using this guide to modify the Windows Registry, please make a backup of the Registry first.

What is the Windows Registry

The Windows Registry database stores the configuration options and settings for the Windows operating system and software installed on your computer.

The Registry is organized in a hierarchical structure of keys and values, with five ‘root’ keys at the top, as shown below, each serving a particular purpose.

The Windows Registry Editor showing the five root keys
Source: BleepingComputer

The five different root keys in the Windows registry and their purposes are:

  1. HKEY_CLASSES_ROOT (HKCR): This key stores information about file associations and OLE object classes.
  2. HKEY_CURRENT_USER (HKCU): This key stores the current user’s settings and preferences for various applications.
  3. HKEY_LOCAL_MACHINE (HKLM): This key stores information that affects the entire computer, regardless of the logged-in user. These settings are for the computer’s hardware, operating system configuration, and software settings that affect all users.
  4. HKEY_USERS (HKU): This key stores information about all users who have logged on to the computer.
  5. HKEY_CURRENT_CONFIG (HKCC): This key stores information about the current hardware configuration of the computer.

For the most part, you will be modifying keys and values under the HKCU and HKLM root keys.

Registry keys are like folders containing other keys and values used to organize and group related settings.

For example, the HKEY_CURRENT_USER key contains settings for the current user account, such as a user’s desktop wallpaper, installed application preferences, and personal settings.

While keys are like folders, Registry values are the files stored within them, containing the specific settings for a particular key. They can have different data types, including text, numbers, and binary data.

There are several registry value data types:

  • REG_SZ – A string value that contains text data, such as a username or folder path.
  • REG_DWORD – A numeric value that contains a 32-bit integer.
  • REG_QWORD – A numeric value that contains a 64-bit integer.
  • REG_BINARY – A value that contains binary data, such as an image or sound file.
  • REG_MULTI_SZ – A string value that contains multiple strings separated by null characters. This is often used for lists or arrays of values.

Each registry value type is used for a specific purpose, and understanding them can help you better manage and customize your Windows operating system.
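As a rough illustration of how these types differ at the byte level, here is a small Python sketch (illustrative only; on Windows, the standard-library winreg module handles this encoding for you):

```python
import struct

# Illustrative helpers showing how common Registry value types are laid
# out as raw bytes. Not needed in practice on Windows, where winreg does
# the encoding and decoding.

def encode_dword(n: int) -> bytes:
    """REG_DWORD: a 32-bit little-endian integer."""
    return struct.pack("<I", n)

def encode_qword(n: int) -> bytes:
    """REG_QWORD: a 64-bit little-endian integer."""
    return struct.pack("<Q", n)

def encode_multi_sz(items: list[str]) -> bytes:
    """REG_MULTI_SZ: UTF-16LE strings separated by nulls, double-null terminated."""
    return "".join(s + "\0" for s in items).encode("utf-16-le") + b"\0\0"

print(encode_dword(42).hex())        # a DWORD is always 4 bytes
print(encode_multi_sz(["a", "b"]).hex())
```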

Viewing Registry keys and values in the Windows Registry Editor
Source: BleepingComputer

However, for the most part, when editing the Registry, you will be modifying REG_SZ values for text data and REG_DWORD values for numeric data, as these are the most common types used to store user-editable data.

Using the Windows Registry Editor

The Windows Registry Editor is a software application created by Microsoft and built into all versions of Windows that allows you to edit the data in the Registry.

The Registry Editor application is located at C:\Windows\regedit.exe, and for the most part, requires administrative privileges to use it properly.

To open the Registry Editor, press the Windows key + R to open the Run dialog box. Next, type “regedit” in the Run field and press Enter. If you are shown a prompt asking whether you would like to allow the program to make changes, select ‘Yes‘ to continue.

When the Registry Editor is opened, you will see that the window is divided into two panes. The left pane displays a hierarchical tree structure of the Registry’s various keys (folders) and subkeys (subfolders).

The right pane displays the values and data associated with the selected key in the left pane.​

The Windows Registry Editor
Source: BleepingComputer

To open a specific Registry key, use the left pane to navigate to the key you want to edit. To expand a key, click the plus sign (+) next to it.

To collapse a key, click the minus sign (-) next to it.

When you click on a key in the left pane, the values stored within it will be shown in the right pane.

Now that we know how to navigate the Windows Registry, let’s learn how to modify the data stored within it.

CREATE A NEW REGISTRY KEY

When configuring new settings in the Windows Registry, you may need to create a key at some point. 

Use these steps to create a new key in the Registry:

  1. Right-click on the key you want to create a new subkey under in the left pane.
  2. Select New -> Key.
  3. Enter a name for the new key and press Enter.

CREATING A REGISTRY VALUE

As Registry values contain the data that configure how an application or Windows works, it is common to need to create Registry values.

Use these steps to create a Registry value:

  1. Navigate to the key where you want to create the value.
  2. Right-click on the key, select New, and then choose the type of value you want to create. The available value types are described in the previous section.
  3. Give the new value a name by typing it in the box that appears.
  4. Double-click on the new value to edit its data.
  5. Enter the desired value data and click OK.

RENAMING A REGISTRY KEY

Sometimes you may need to rename a Registry key, such as to fix a typo.

To rename a Registry key, please follow these steps:

  1. Navigate to the key you want to rename. 
  2. Right-click the key and select Rename.
  3. Enter the new name for the key in the editable field.
  4. Click anywhere to save the changes.

RENAMING A REGISTRY VALUE

Sometimes you may need to rename a Registry value, such as to fix a typo.

To rename a Registry value, please follow these steps:

  1. Right-click the value you want to rename in the right pane and select Rename.
  2. Enter the new name for the value in the editable field.
  3. Click anywhere to save the changes.

EDITING REGISTRY VALUE DATA

To make changes to a Registry value, follow these steps:

  1. Double-click on the value you want to edit in the right pane.
  2. The Edit dialog box will appear.
  3. Enter the new data in the ‘Value data’ field.
  4. Click OK to save your changes.

EXPORT A REGISTRY KEY

It is possible to export Registry keys and all their subkeys and values to a registry file. 

A registry file ends with .reg, and when you double-click on one in Windows, it will import the data back into the Registry. Exporting a registry key helps create a backup before you make changes to the Registry.

  1. Right-click on the key you want to export in the left pane.
  2. Select Export.
  3. Choose a location to save the exported key and give it a name.
  4. Click Save.
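For reference, an exported .reg file is plain text. A minimal example (the key path and value names here are illustrative, not from a real export) looks like this:

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\ExampleApp]
"InstallPath"="C:\\Program Files\\ExampleApp"
"LaunchCount"=dword:0000002a
```

Note that REG_SZ strings escape backslashes, and REG_DWORD values are written in hexadecimal.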

IMPORT A REGISTRY KEY

If you have previously exported a key, you can import its data back into the Registry using these steps:

  1. Click on File -> Import.
  2. Navigate to the location where you saved the exported key.
  3. Select the key file and click Open.

DELETE A REGISTRY KEY

If you need to delete a Registry key, use the following steps.

  1. Right-click on the key you want to delete in the left pane.
  2. Select ‘Delete‘ from the context menu.
  3. Confirm that you want to delete the key by clicking Yes.

Note: If you delete a key, all subkeys and values underneath it will be deleted too!

Note 2: You should only delete a registry key if you know it will not cause issues with your computer!

DELETE A REGISTRY VALUE

If you need to delete a Registry value, use the following steps.

  1. Right-click on the value you want to delete in the right pane.
  2. Select ‘Delete‘ from the context menu.
  3. Confirm that you want to delete the value by clicking Yes.

Note: You should only delete a registry value if you know it will not cause issues with your computer!

When done using the Registry Editor, you can close it by clicking the X in the window or clicking File > Exit.

With these steps, you should now be able to use the Windows Registry Editor to navigate and edit the Registry.

However, it is vital to be careful when making changes to the Registry, as incorrect changes can cause severe problems with your system.

It is always recommended to create a backup of your Registry before making any changes.

Posted in Vulnerability. Tagged: Cyber Attacks, Data Security, malware, Spyware.

Driving lessons: The kernel drivers in Sophos Intercept X Advanced

Posted on August 3, 2024 by Maq Verma

Operating in ‘kernel-space’ – the most privileged layer of an operating system, with direct access to memory, hardware, resource management, and storage – is vitally important for security products. It enables them to monitor ‘user-space’ – the non-privileged environment where applications run – and protect against malware that executes in that environment, even when it tries to evade detection. But kernel access also allows security products to counter more insidious threats within the kernel itself. As we’ve reported previously, for example, some threat actors use BYOVD (Bring Your Own Vulnerable Driver) attacks, or attempt to get their own malicious drivers cryptographically signed, in order to access kernel-space and take advantage of that elevated level of access.

However, from a security standpoint, working in kernel-space comes with its own risks. A wrong step in this environment – such as a bad update to a kernel driver – can cause outages. If the driver in question starts at boot time, when the operating system first loads, that can lead to prolonged impacts, potentially requiring affected hosts to be started in a recovery mode to mitigate the problem and allow the machines to boot normally.

Sophos’ Intercept X Advanced product uses five kernel drivers as of release 2024.2. All drivers are extensively tested* with applicable flags enabled and disabled, and shipped with new flags disabled. (Sophos Intercept X and Sophos Central use feature flags to gradually enable new features. Feature flags are deployed through Sophos Central. New features are typically ‘guarded’ by feature flags – turned off unless the flag is enabled – so that the feature can be rolled out gradually and potentially revised before wider enablement.)

In this article, in the interests of transparency, we’ll explore what those drivers are, what they do, when they start, how they’re signed, and what their inputs are. We’ll also explore some of the safeguards we put in place around these drivers to minimize the risk of disruption (such as staged rollouts, as mentioned above; we provide an example of this later in the article), and the options available to customers when it comes to configuring them. It’s also worth noting that Intercept X Advanced and all its components, including the kernel drivers, have been part of an external bug bounty program since December 14, 2017; we welcome scrutiny via external bug bounty submissions, and we foster a culture of collaboration with the research community.

* ‘Testing’ refers to a range of internal testing, including Microsoft-provided tools and verifiers

Overview

The following table provides an at-a-glance overview of the five kernel drivers which are part of Intercept X Advanced release 2024.2.

Driver | Version | Type | Start Type | Signed By Microsoft? | Signature | Description
SophosEL.sys | 3.2.0.1150 | Kernel Driver | Early-Launch Boot Start | Yes | ELAMP* | Sophos ELAM driver: can prevent execution of malicious boot start drivers
SophosED.sys | 3.3.0.1727 | File System Driver | Boot Start | Yes | WHCP+ | The main Sophos anti-malware driver
Sntp.sys | 1.15.1121 | Network Filter Driver | System Start | Yes | WHCP+ | Sophos Network Threat Protection driver
Hmpalert.sys | 3.9.4.990 | File System Driver | System Start | Yes | WHCP+ | Sophos HitmanPro.Alert driver
SophosZtnaTap.sys | 9.24.6.3 | Network Filter Driver | On Demand | Yes | WHCP+ | Sophos Zero Trust Network Access (ZTNA) Tap driver

Table 1: An overview of the kernel drivers in Intercept X Advanced 2024.2
* Microsoft Windows Early Launch Anti-malware Publisher
+ Microsoft Windows Hardware Compatibility Publisher

Figure 1: A conceptual depiction of user-space/kernel boundaries and where Intercept X Advanced components operate

SophosEL.sys

What it does: SophosEL.sys is the Sophos Early Launch Anti-Malware (ELAM) driver.

Inputs: This driver has one input – a blocklist of known-bad drivers which must be prevented from executing as boot start drivers at machine startup. This blocklist, located at the registry key below, is set by Sophos user-space threat detection logic when it detects a malicious driver. At the next boot cycle, SophosEL.sys ensures that this driver is not loaded.

Input | Description | Protection
HKLM\SYSTEM\CurrentControlSet\Services\Sophos ELAM\Config | Blocklist of known-bad drivers | DACLs; Sophos Tamper Protected

Customer options: Customers can configure remediation and allowed items in the Threat Protection policy from Sophos Central.

Additional measures: Any Microsoft or Sophos-signed driver is exempt from cleanup/blocking.
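The boot-time decision described in this section can be sketched conceptually as follows. This is an illustrative model, not Sophos code; the driver hashes and signer names are made up:

```python
# Conceptual sketch of an ELAM-style boot decision: a blocklist of
# known-bad drivers is consulted as each boot-start driver loads, and
# Microsoft- or Sophos-signed drivers are exempt from blocking.

TRUSTED_SIGNERS = {"Microsoft", "Sophos"}

def should_load(driver_hash: str, signer: str, blocklist: set[str]) -> bool:
    if signer in TRUSTED_SIGNERS:
        return True                      # exemption described above
    return driver_hash not in blocklist  # block only known-bad drivers

blocklist = {"deadbeef"}                 # hypothetical bad-driver hash
assert should_load("deadbeef", "Microsoft", blocklist) is True
assert should_load("deadbeef", "EvilCorp", blocklist) is False
assert should_load("cafebabe", "EvilCorp", blocklist) is True
```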

SophosED.sys

What it does: SophosED.sys (Endpoint Defense) is a boot start driver, started during ELAM processing and before many other kernel drivers are loaded, Windows user-space is initialized, and the system drive is mounted. It has three broad responsibilities:

  1. Providing tamper protection for the Sophos installation and configuration
  2. Exposing system activity events to Sophos user-space components for protection and detection
  3. Recording low-level system activity events to the Sophos Event Journals for after-the-fact forensics and analysis

Inputs: Since SophosED.sys starts before the filesystem is available, its entire configuration is provided through its service key. Note that all the below inputs are under HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense.

Filter driver altitudes inputs

SophosED.sys registers with Windows as a Mini-Filter driver at multiple altitudes (a unique identifier that defines a driver’s position on the ‘stack’ of drivers, with ‘lower’ drivers being closer to bare metal) allocated and approved by Microsoft.

Input | Description | Protection
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\Instances\ | Multiple altitudes allocated by Microsoft | DACLs; Sophos Tamper Protected

Tamper Protection inputs

Sophos Tamper Protection is configured by a combination of customer policies, Sophos feature flags, and signed manifests built into the agent.

Input | Description | Protection
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\TamperProtection\Config\ | Customer policy (On/Off, configuration password*) | DACLs; Sophos Tamper Protected
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\TamperProtection\Components\ and HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\TamperProtection\Services\ | Manifest of protected keys, folders, services, etc. | Signed; verified by driver before loading

* The configuration password is hashed with PBKDF2-SHA512 and a salt
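As an illustration of this hashing scheme, here is a minimal Python sketch using the standard library's PBKDF2 implementation. The iteration count and salt length are assumptions for the example, not Sophos's actual parameters:

```python
import hashlib
import os

# Sketch of PBKDF2-SHA512 password hashing with a random salt, as the
# footnote describes. The iteration count and 16-byte salt are
# illustrative choices only.

def hash_password(password: str, salt: bytes, iterations: int = 210_000) -> bytes:
    return hashlib.pbkdf2_hmac("sha512", password.encode("utf-8"), salt, iterations)

salt = os.urandom(16)
digest = hash_password("correct horse battery staple", salt)

# Same password + same salt -> same digest; a different salt changes it.
assert digest == hash_password("correct horse battery staple", salt)
assert digest != hash_password("correct horse battery staple", os.urandom(16))
```

Storing only the salt and the digest (never the password) lets the driver verify a configuration password without being able to recover it.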

System Activity Events inputs

The Sophos Central Threat Protection policy supports multiple configuration options, which Sophos user-space processes write to the SophosED.sys registry key, so that they’re available when the driver is loaded.

Input | Description | Protection
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\Scanning\Config\ | Customer policy (On/Off, exclusions, and lots more) | DACLs; Sophos Tamper Protected
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\EndpointFlags\ | Sophos feature flags (various) | DACLs; Sophos Tamper Protected

Event Journal inputs

Input | Description | Protection
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\EventJournal\Config\ | Customer policy (exclusions, disk limits) | DACLs; Sophos Tamper Protected
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\EventJournal\Features\ | If a subkey exists with a DWORD value Enabled = 1, event journals are enabled | DACLs; Sophos Tamper Protected

Customer options: Customers can configure disk limits and manage exclusions in Sophos Central Threat Protection policy.

Additional measures: If a driver facility is available (based on a combination of Customer Policy plus Sophos flag), then Sophos user-space processes can configure various parameters at runtime:

  • A bitmask of mitigations to apply per-process
  • What events to enable or disable for each process
  • The amount of time the driver should wait for a response from user-space (or whether it should be an asynchronous notification).

Sntp.sys

What it does: Sntp.sys (Sophos Network Threat Protection) is a kernel driver that registers for various Windows Filtering Platform events to intercept and potentially modify network flow data. Depending on features enabled by Sophos Central Threat Protection and Web Control policies, different filters and callouts are registered.

Inputs: Feature configuration is communicated to the driver from one or more of the following user mode processes:

  • SophosNtpService.exe
  • SophosNetFilter.exe
  • SophosIPS.exe
  • SSPService.exe

User-space processes communicate with the driver via the Windows Driver Framework, using IOCTLs, Read, and Write. Communications to and from the driver are protected, only accepting connections from authorized and authentic Sophos processes.

Customer options: The filter driver intercepts network traffic by browser and non-browser processes based on the policies defined in Sophos Central. Processing of the intercepted traffic is performed in user-space by SophosNetFilter.exe and SophosIPS.exe, which may send modified content back to the driver (for example, to display a block page for malicious content).

Additional measures: Customers can add individual sites to their allow or block list in Sophos Central.

Hmpalert.sys

What it does: Hmpalert.sys enforces Sophos CryptoGuard, which detects and prevents bulk encryption of files by ransomware. It also configures what exploit mitigations are enforced as processes are executed.

Inputs: Hmpalert.sys has a number of inputs, including several registry subkeys and IOCTLs.

Input | Description | Protection
HKLM\SYSTEM\CurrentControlSet\Services\hmpalert | Software configuration | DACLs; Sophos Tamper Protected
HKLM\SYSTEM\CurrentControlSet\Services\hmpalert\Config | Customer policy | DACLs; Sophos Tamper Protected
HKLM\SYSTEM\CurrentControlSet\Services\Sophos Endpoint Defense\EndpointFlags | Sophos feature flags (various) | DACLs; Sophos Tamper Protected

Customer options: Customers can enable/disable exploit mitigations and manage exclusions in Sophos Central Threat Protection policy.

Additional measures: N/A

SophosZtnaTap.sys

What it does: SophosZtnaTap.sys is a Sophos-built OpenVPN TAP driver. If the customer deploys the Sophos Zero Trust Network Access (ZTNA) agent, the driver intercepts DNS lookups for managed applications, and redirects traffic for those applications to the applicable Sophos ZTNA gateways. ZTNA applications and gateways are configured through Sophos Central policies and stored in the registry.

Inputs: Inputs into SophosZtnaTap.sys are via a registry subkey.

Input | Description | Protection
HKLM\SOFTWARE\Sophos\Management\Policy\NetworkPerimeter\ | Customer policy (ZTNA Applications, Gateways, and certificates) | DACLs; Sophos Tamper Protected

Customer options: Customers can manage their ZTNA-protected applications and gateways from Sophos Central.

Additional measures: N/A

Gradual deployment example: CryptoGuard ExFAT

Sophos CryptoGuard has protected against bulk encryption on permanent disks for over a decade. Intercept X version 2024.1.1 introduced a new feature, CryptoGuard ExFAT, which extends this protection to ExFAT partitions (typically found on removable USB drives).

CryptoGuard ExFAT development and testing took place from September 2023 through March 2024. The feature was guarded by the flag ‘hmpa.cryptoguard-exfat.available’.

Sophos Engineering ran the software internally with the flag enabled (our ‘Dogfood release’) starting March 22, 2024.

Intercept X version 2024.1.1 was released to Sophos, then to customers using our gradual software deployment process, between May 21 and June 6, 2024. At this stage the feature was still dormant for everyone apart from Sophos engineers.

The ‘hmpa.cryptoguard-exfat.available’ flag was enabled using our gradual flag enablement process, between June 10 and June 26, 2024.
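Gradual flag enablement of this kind is commonly implemented by bucketing devices deterministically. A conceptual sketch (not Sophos's actual mechanism; the device ID is hypothetical) might look like:

```python
import hashlib

# Conceptual sketch of percentage-based feature-flag rollout: hash a
# stable device ID into a bucket 0-99, and enable the flag once the
# rollout percentage covers that bucket. Because the bucket is stable,
# raising the percentage only ever adds devices, never flips one back.

def flag_enabled(flag: str, device_id: str, rollout_percent: int) -> bool:
    bucket = int(hashlib.sha256(f"{flag}:{device_id}".encode()).hexdigest(), 16) % 100
    return bucket < rollout_percent

assert flag_enabled("hmpa.cryptoguard-exfat.available", "device-1", 100)
assert not flag_enabled("hmpa.cryptoguard-exfat.available", "device-1", 0)
```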

Customer control

Customers can select a fixed software version (Endpoint Software Management: Fixed-term support, Long-term support). This locks the software and flags until the customer selects a different software package. Customers who use the ‘Sophos recommended’ option receive new software periodically. Aside from software rollouts, they also receive gradual feature-flag enablements for new features in the software, as with a normal software release. Sophos has evolved this process to improve stability and avoid enabling new events globally for all customers.

Conclusion

Kernel drivers are fundamental to the Intercept X Advanced product – and to robust Windows endpoint security, in general – but we also acknowledge that operating in kernel-space is not without its risks.

In this article, we’ve walked through the kernel drivers in Intercept X Advanced (as of release 2024.2), what they do, how they’re signed, what their inputs are, the control customers have over their management, and additional safeguards we’ve put in place – including gradual, phased rollouts of new features, and exemptions to minimize the risk of disruption.

While no safeguard can ever eliminate risk altogether, we wanted to share the details of our drivers in the interests of transparency, and to explain in depth how we approach the complex problem of trying to protect our customers from threats, in as safe a manner as possible.

Posted in Cyber Attacks. Tagged: Cyber Attacks, Data Security, kernel, Sophos Intercept X Advanced.

Researchers Showcase Decentralized AI-Powered Torrent Search Engine

Posted on August 3, 2024 by Maq Verma

Researchers from Delft University of Technology plan to amplify their BitTorrent client “Tribler” with decentralized AI-powered search. A new demo shows that generative AI models make it possible to search for content in novel ways, without restriction. The ultimate goal of the research project is to shift the Internet’s power balance from governments and large corporations back to consumers.

Twenty-five years ago, peer-to-peer file-sharing took the Internet by storm.

The ability to search for and share content with complete strangers was nothing short of a revolution.

In the years that followed, media consumption swiftly moved online. This usually involved content shared without permission, but pirate pioneers ultimately paved the way for new business models.

The original ‘pirate’ ethos has long since gone. There are still plenty of unauthorized sites and services, but few today concern themselves with decentralization and similar technical advances; centralized streaming is the new king with money as the main motivator.

AI Meets BitTorrent

There are areas where innovation and technological progress still lead today, mostly centered around artificial intelligence. Every month, numerous new tools and services appear online, as developers embrace what many see as unlimited potential.

How these developments will shape the future is unknown, but they have many rightsholders spooked. Interestingly, an ‘old’ research group, that was already active during BitTorrent’s heyday, is now using AI to amplify its technology.

Researchers from the Tribler research group at Delft University of Technology have been working on their Tribler torrent client for nearly two decades. They decentralized search, removing the need for torrent sites, and implemented ‘anonymity’ by adding an onion routing layer to file transfers.

Many millions of euros have been spent on the Tribler research project over the years. Its main goal is to advance decentralized technology, not to benefit corporations, but to empower the public at large.

“Our entire research portfolio is driven by idealism. We aim to remove power from companies, governments, and AI in order to shift all this power to self-sovereign citizens,” the Tribler team explains.

Decentralized AI-powered Search

While not every technological advancement has been broadly embraced yet, Tribler has just released a new paper and a proof of concept which they see as a turning point for decentralized AI implementations; one with a direct BitTorrent link.

The scientific paper proposes a new framework titled “De-DSI”, which stands for Decentralised Differentiable Search Index. Without going into technical details, this essentially combines decentralized large language models (LLMs), which can be stored by peers, with decentralized search.

This means that people can use decentralized AI-powered search to find content in a pool of information that’s stored across peers. For example, one can ask “find a magnet link for the Pirate Bay documentary,” which should return a magnet link for TPB-AFK, without mentioning it by name.

This entire process relies on information shared by users. There are no central servers involved at all, making it impossible for outsiders to control.

Endless Possibilities, Limited Use

While this sounds exciting, the current demo version is not yet built into the Tribler client. Associate Professor Dr. Johan Pouwelse, leader of the university’s Tribler Lab, explains that it’s just a proof of concept with a very limited dataset and AI capabilities.

“For this demo, we trained an end-to-end generative Transformer on a small dataset that comprises YouTube URLs, magnet links, and Bitcoin wallet addresses. Those identifiers are each annotated with a title and represent links to movie trailers, CC-licensed music, and BTC addresses of independent artists,” Pouwelse says.

We tried some basic searches with mixed results. That makes sense since there’s only limited content, but it can find magnet links and videos without directly naming the title. That said, it’s certainly not yet as powerful as other AI tools.

In essence, De-DSI operates by sharing the workload of training large language models on lists of document identifiers. Every peer in the network specializes in a subset of data, which other peers in the network can retrieve to come up with the best search result.
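The division of labor described above can be illustrated with a toy sketch. This is not the actual De-DSI model; the document identifiers, titles, and scoring function are invented for illustration:

```python
# Toy sketch: each "peer" specializes in a subset of document identifiers
# and scores a query against its own subset; the client aggregates the
# peers' answers and keeps the best-scoring identifier.

def score(query: str, title: str) -> int:
    """Crude relevance proxy: count words shared between query and title."""
    return len(set(query.lower().split()) & set(title.lower().split()))

# Hypothetical per-peer shards mapping identifiers to annotated titles.
peers = [
    {"magnet:?xt=tpb-afk": "The Pirate Bay documentary TPB-AFK"},
    {"https://youtu.be/example": "CC-licensed music video"},
]

def search(query: str) -> str:
    candidates = [(score(query, title), doc)
                  for peer in peers
                  for doc, title in peer.items()]
    return max(candidates)[1]

assert search("pirate bay documentary") == "magnet:?xt=tpb-afk"
```

In the real system the per-peer scoring is done by a trained generative model rather than word overlap, but the aggregation-across-specialized-peers idea is the same.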

A Global Human Brain to Fight Torrent Spam and Censors

The proof of concept shows that the technology is sound. However, it will take some time before it’s integrated into the Tribler torrent client. The current goal is to have an experimental decentralized-AI version of Tribler ready at the end of the year.

While the researchers see this as a technological breakthrough, it doesn’t mean that things will improve for users right away. AI-powered search will be slower to start with and, if people know what they’re searching for, it offers little benefit.

Through trial and error, the researchers ultimately hope to improve things though, with a “global brain” for humanity as the ultimate goal.

Most torrent users are not looking for that at the moment, but Pouwelse says that they could also use decentralized machine learning to fight spam, offer personal recommendations, and optimize torrent metadata. These are concrete and usable applications.

The main drive of the researchers is to make technology work for the public at large, without the need for large corporations or a central government to control it.

“The battle royale for Internet control is heating up,” Pouwelse says, in a Pirate Bay-esque fashion.

“Driven by our idealism we will iteratively take away their power and give it back to citizens. We started 18 years ago and will take decades more. We should not give up on fixing The Internet, just because it is hard.”

The very limited De-DSI proof of concept and all related code is available on Huggingface. All technological details are available in the associated paper. The latest Tribler version, which is fully decentralized without AI, can be found on the official project page.

Posted in Cyber Attacks, Vulnerability. Tagged: Cyber Attacks, Data Security, Scam, Spyware.
