Network misconfigurations take many forms and arise for many different reasons. Many stem from blind adherence to poorly informed common practices, or simply from not being aware that operating system configuration defaults themselves contain security misconfigurations.

Let’s review two common misconfigurations to serve as our examples:

Why closed ports are like landmines

Most organizations rely on their organizational domain firewall and pay far less attention to the firewall on the local machine. As a result, the local firewall is often left poorly maintained.

One way this neglect may be revealed is through ports.

As you may know, services and applications listen on different ports. These ports are usually opened on request on the organizational firewall by IT or the networking team.

Over time, many of these services or applications are removed or deleted from the endpoint. However, the local firewall rule allowing connections on these ports is often overlooked, and the ports revert to a ‘closed’ state because no application is listening for connections on them anymore.
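A minimal sketch of how such stale allow-rules could be audited. The rule and service lists here are hypothetical stand-ins for what you would actually pull from tools like `netsh advfirewall` or `Get-NetFirewallRule` plus a listening-port inventory:

```python
# Illustrative audit: flag firewall allow-rules whose port no longer has
# any service listening on it. Rule names and ports below are invented
# sample data, not output from a real system.

def find_stale_rules(allow_rules, listening_ports):
    """Return allow-rules for ports with no active listener (closed ports)."""
    return [r for r in allow_rules if r["port"] not in listening_ports]

allow_rules = [
    {"name": "SQL app (decommissioned)", "port": 1433},
    {"name": "Legacy agent (removed)", "port": 8531},
    {"name": "Web server", "port": 443},
]
listening_ports = {443}  # only the web server is still running

for rule in find_stale_rules(allow_rules, listening_ports):
    print(f"stale allow-rule: {rule['name']} (port {rule['port']})")
```

Any rule that survives its application is a candidate for removal, closing the gap before an attacker finds it.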

Why would leaving a port in the closed state count as a misconfiguration? To answer that, first review the differences between the three most common port states:

  • Open port – The application or service is running and accepting connections over the port.
  • Filtered port – A firewall or filter (or another network issue) is blocking the port. A port may be filtered by a server firewall, network firewall, router, or another security device.
  • Closed port – Indicates that an application or service is not actively listening for connections on that port. However, a closed port can be open at any time if an application or service is started.
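The three states above can be distinguished with a simple TCP connect probe, similar in spirit to what a scanner like nmap does (a sketch, not a full scanner):

```python
import socket

def classify_port(host, port, timeout=2.0):
    """Classify a TCP port by how a connect attempt behaves.

    open     -> something accepted the connection
    closed   -> the host answered with a refusal (RST), nothing listening
    filtered -> no answer at all: a firewall silently dropped the probe
    """
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(timeout)
    try:
        s.connect((host, port))
        return "open"
    except ConnectionRefusedError:
        return "closed"
    except socket.timeout:
        return "filtered"
    finally:
        s.close()
```

Note the asymmetry: a filtered port gives an attacker nothing to work with, while a closed port actively answers, confirming the host is reachable and the port is unprotected.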

In other words, a closed port is only closed until something listens on it. If attackers can start their own listener on a closed (but unfiltered) port, they can establish an incoming connection from their “attack box” to the victim machine and use it to issue malicious commands. This is known as a bind shell.

Here’s an example. A group of hackers intends to attack a machine located in another segment, which is protected by the organizational firewall.

The hackers send a payload to the machine over port 445 (SMB), which is allowed through the organizational firewall. The payload attempts to open communication back to the attack box using a reverse shell over port 4444, but the connection is blocked by the organizational firewall.

Just when the hackers think they are out of luck, they notice that port 88 is closed but not filtered. Since port 88 is the default Kerberos port, the hackers decide to leverage it for a command and control (C&C) connection.

This time their attack works. The hackers establish a command and control channel over the bind port and advance their attack by executing malicious commands on the host. At this point, the extent of the damage depends on the privileges of the compromised host, which may be significant if the organization does not strictly enforce the principle of least privilege.
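The property the attack relies on — a closed port becoming open the instant a process starts listening on it — can be demonstrated harmlessly on localhost (using an ephemeral port in place of 88, and no actual shell):

```python
import socket

def is_accepting(host, port):
    """True if a TCP connect to (host, port) succeeds, i.e. the port is open."""
    try:
        with socket.create_connection((host, port), timeout=1):
            return True
    except OSError:
        return False

# Pick a free port, then show it flipping from closed to open.
probe = socket.socket()
probe.bind(("127.0.0.1", 0))
port = probe.getsockname()[1]
probe.close()  # nothing is listening now: the port is closed

print(is_accepting("127.0.0.1", port))  # closed: connection refused

listener = socket.socket()              # the "payload" starts listening
listener.bind(("127.0.0.1", port))
listener.listen(1)

print(is_accepting("127.0.0.1", port))  # the same port is now open
listener.close()
```

Nothing about the firewall changed between the two probes; only the presence of a listener did. That is why an unfiltered closed port is a latent opening, not a safe state.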

Why LSASS whitelisting invites credential dumps


The practice of whitelisting services and applications is another common source of dangerous misconfigurations. Overly liberal whitelisting policies can be leveraged by attackers to extract NTLM password hashes, or even cleartext passwords of domain or local users, from the LSASS process.

Let’s take a look at the following scenario:

An attacker exploits a critical remote code execution (RCE) vulnerability that requires no prior authentication, giving unauthenticated users code execution on the vulnerable host.

The attacker then extracts the SAM file and obtains the NTLM hash of the local administrator of the victim host, let’s call it ‘Host A’. Next, the attacker uses the NTLM hash to authenticate and gain local admin rights, which they then use to extract cleartext passwords or NTLM hashes from the LSASS process.

If the Windows Error Reporting service on Host A has previously been whitelisted for debugging purposes by some unwitting IT personnel, the EDR or AV will not respond to the attack.
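The failure mode can be sketched with a toy model of the whitelist decision — not a real EDR API, just an illustration of why a stale exception for WerFault.exe silences the alert:

```python
# Toy model of an EDR's process-access decision (hypothetical logic, not
# any vendor's implementation): access to lsass.exe is normally blocked,
# unless the source process is on the whitelist -- e.g. WerFault.exe
# left whitelisted after a debugging session.

SENSITIVE_TARGETS = {"lsass.exe"}

def edr_verdict(source_process, target_process, whitelist):
    if target_process in SENSITIVE_TARGETS and source_process not in whitelist:
        return "blocked"
    return "allowed"

whitelist = {"werfault.exe"}  # stale debugging exception

print(edr_verdict("mimikatz.exe", "lsass.exe", whitelist))   # blocked
print(edr_verdict("werfault.exe", "lsass.exe", whitelist))   # allowed
```

An ordinary tool touching LSASS is stopped, but anything running under the whitelisted name sails through — which is exactly the gap the attacker abuses in the scenario above.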

So when the attacker attempts credential extraction using Windows Error Reporting, what happens? 

Success! The attacker obtains an NTLM hash of a Domain Admin from the LSASS process and secures access to further abuse the domain.


When your vulnerability scan fails to detect misconfigurations

While there are countless ways to misconfigure an organizational network, the greatest danger lurks in misconfigurations that go undetected by a vulnerability scan. Both of the above examples fall into that category. If your network suffers from unnoticed, unvalidated, and unmonitored closed ports or legacy whitelisting policies, its exposure is broad, and fixing these issues should be your first priority.

Automated Security Validation offers a best-in-class methodology to help you uncover your own network’s misconfigurations and emulate real-world attack vectors, helping you avoid false positives and prioritize patching.

Written by: Niv Toledo