Category Archives: Active Directory

Active Directory Risk Auditing with BloodHound

Brief update: an awesome video from Defcon was recently posted.  You can find it here: DEF CON 24 – Andy Robbins, Rohan Vazarkar, Will Schroeder – Six Degrees of Domain Admin

The following post is a guide on performing risk audits of your Active Directory infrastructure with BloodHound.  It’s a beginner’s guide.  BloodHound is a tool created for and widely used by the red team.  That said, it provides excellent data for risk mitigators and auditors looking to validate or prove out network hardening policies.  Therefore, I’d maybe call this a purple team tool.  If you have found yourself here, it’s most likely you are already familiar with BloodHound and are trying to get help using or understanding it.

What does BloodHound do?  In my own words, it maps your AD infrastructure.  The map uses intel gathered by querying the domain controllers.  This intel is used to identify vulnerable locations in the AD infrastructure and associated endpoints, allowing an attacker to target the weakest accounts and gain the quickest access to the highest-privileged accounts.  For more info about the history of BloodHound check out Andy’s Intro to BloodHound.  Andy is a friend and the visionary behind BloodHound.  My networks are better off having been put to the test by his great talents.

Assumptions

This technique requires a database, an ingestor, and a client.  It is possible to run the BloodHound ingestor (the piece that actually talks to AD to gather information) on one machine, the database on another, and the client on yet another.  In this example, we will run the whole thing on one machine.

Warnings

The data collected by BloodHound should be considered sensitive.  Only run the database on secured machines, and destroy the data when you are finished.  That said, let’s get into it…

What You Need

  • Create a new powershell (.ps1) file that contains the following code (for this guide, we will name the file execute-bloodhound.ps1):
    • Get-BloodHoundData | Export-BloodHoundData -URI http://localhost:7474/ -UserPass "neo4j:neo4j"
  • Download the powershell ingestor script from GitHub and name it whatever you want.  In this case we will use the name powershell-ingestor.ps1
  • Download neo4j Community Edition from neo4j.com
  • Download Bloodhound for Windows from GitHub

Once you have this stuff, you are ready to go.  Going forward we will basically do the following:

  1. Install Neo4j database
  2. Start the ingestor process (which will ship data into Neo4j)
  3. Use BloodHound to parse that data into graphical form

Install Neo4j

Make sure at this step that you leave the box checked for “Run Neo4j Community Edition”.

Click “Start”

Verify it started properly

Log in to http://localhost:7474 using the default credentials.  If you get in, then success!

Start the Ingestor Process – Run the Scripts

The following steps will launch powershell, import the bloodhound module (ingestor), and then run the bloodhound script (the simple one-line command: Get-BloodHoundData | Export-BloodHoundData -URI http://localhost:7474/ -UserPass "neo4j:neo4j").

Step 1: Open a cmd window (elevated) and launch powershell with the command: powershell -version 2 -exec bypass

Step 2: Change directories into the folder where you saved the two powershell files, then execute the command Import-Module .\powershell-ingestor.ps1 (this is the ps1 file you downloaded from GitHub).

Step 3: Execute the powershell script you created earlier with the simple command .\execute-bloodhound.ps1.  You may have to wait a few moments.

What was done so far…

  • We grabbed all the files we need: 2 powershell scripts, an installer for Neo4j, and the BloodHound binaries (this blog uses the Rolling Release, which is 1.0.1 at the time of writing).
  • Installed the Neo4j database and established a connection to verify everything is in working order.
  • Executed powershell, imported the bloodhound module, and then executed the collection/ingestor process (another powershell script).

At this point, the powershell script has done its thing and reached out to the domain.  It has queried the domain controller and imported the results into the Neo4j database.  It is time to use BloodHound to map our results….

Running BloodHound

At the time of writing this blog, BloodHound 1.1 is in early release.  It was not behaving well in our environment, so we are running the Rolling Release (1.0.1 at time of writing).

When you launch BloodHound you only need to connect it to the database.  Here are the basic steps….

Run BloodHound.exe

Connect to the database (wherever you may have installed it).  For blue team stuff this is typically going to be localhost.  The default creds are shown below.  Note that in newer versions of BloodHound (beginning with version 1.1) the database URL uses bolt and not HTTP, so the URL will look like “bolt://localhost:7687”.

Once you get logged in, the window below will appear.  Click the lines to drop a menu down.

Database information will be displayed.  If you see no record counts, then your ingestor is either not working or still running.  Most likely it’s just taking some time, so be patient.  If you are not sure, you can always log directly into the database and look at the records manually.  If records exist in the database but BloodHound shows none, then something could be awry with the connection between BloodHound and Neo4j.
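If you want to check the database directly, the Neo4j browser at http://localhost:7474 accepts Cypher queries.  A couple of simple sanity checks (the node labels here are an assumption based on the BloodHound 1.x schema, which labels nodes as User, Group, and Computer; adjust if your version differs):

```cypher
// Total number of nodes the ingestor has written so far
MATCH (n) RETURN count(n);

// Just the user objects
MATCH (u:User) RETURN count(u);
```

If both counts are still zero well after the ingestor finishes, the data never made it into Neo4j.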

If you see records, then check out the Queries tab!  The picture below shows the number of Domain Administrator accounts in the test environment.  You will see a number of other pre-built queries; one of the most valuable is “Find User with the Most Local Admin Rights”.  The newer version of BloodHound has some additional queries that might be helpful (such as “top 10” users with certain permission types).  You may be surprised what you find in here.  Don C. Weber just posted another blog about writing custom queries for BloodHound, which can be found here.

In Summary

BloodHound is an awesome tool that enables us to measure specific risks based on privileged account configurations in our network.  You can quickly identify any outlying accounts on the network which could fall victim to credential theft (à la incognito or mimikatz).  Red teams use this tool to identify machines to attack based on the probability that doing so will give them a better chance of acquiring administrator or domain administrator rights.  Know your network!

Mitigations for LSA Credential Exposure | Part 1: Plain-Text Passwords

The series will address the following attacks:

  • Plain-text password grabbing (wdigest LSASS/SSP)
  • Pass-the-hash (LM, NTLM, NTLMv2, Kerberos AES)
  • Overpass-the-hash (also referred to as pass-the-ticket)
  • Golden Ticket

I will give a rundown of each attack as I understand it, and then provide the currently published methodology for mitigating it.  I am assuming that the initial attack stages were successful, and a payload with remote callback/shell has been acquired – for example, meterpreter (msfconsole).  In other words, these are post-exploitation attacks aimed at escalating privileges, ultimately to domain admin.

Before I go any further I need to point out that the primary method of securing your infrastructure against any of these attacks is managing privileged accounts.  None of these attacks can be carried out without first escalating to local administrator rights (harder to do than it sounds).  That means the attacker must gain local administrator rights on a workstation in order to make use of these strategies.  If you or your staff are operating with local administrator rights, then there is almost nothing you can do…

(TL/DR)

By its nature, reading LSA to gain information on accounts is a post-exploitation event.  This means that the attacker already has root on the machine they are looking into, and that any operating system parameters used to secure the machine are almost trivial to bypass.  I would argue that the exception is when someone is operating a 2012 R2 functional-level domain, has all local administrator accounts disabled, and network admins use domain accounts (delegated for local system administration only, with no domain admin rights) which are placed in the Protected Users group to remotely administer systems.  No matter what, an attacker can update the registry to try to force LSA to store passwords – it will not store them for these protected users!  So you only run into a problem when a domain user who is not in the Protected Users group logs into a system where they have domain administrator rights (it could be a service, too).

Plain-Text Password Grabbing

This is the worst.  Believe it or not, there is probably a swath of people vulnerable to this method (more later).  This is the low-hanging fruit for anyone on the network (think: insider threats).  How does it work?  Well, Microsoft needs to offer single sign-on.  That means any system you log into must “cache” your credentials somehow for future use and then forward those credentials (securely) to an application or remote application.  You are supposed to rest easy knowing that TLS is used to encrypt your password on the wire when your workstation “automagically” authenticates you to that web application.  The problem is that the password is stored locally without encryption (which makes the encryption over the network useless), in various states (a couple of hashed versions and plain-text), in memory by the Local Security Authority Subsystem Service (LSASS).

When your credentials are needed to sign you on remotely without your input, an SSP (Security Support Provider) will grab the password from LSASS, throw it into a TLS connection, and sign you on remotely.  I only mention TLS because a lot of network managers think that this is enough security.  It is not.  TLS only secures you against MITM (man-in-the-middle) attacks (passive network sniffing for plain-text passwords).  Before the SSP authenticates you over a secure network connection (TLS), it might need access to your plain-text password so that it can authenticate on your behalf without your input.  Why plain-text?  Because not all web applications or 3rd-party systems support Kerberos, etc.

There are a couple of problematic SSPs which support this use of plain-text passwords for various applications.  First, Digest SSP (often referred to as “wdigest”, which is the name of the DLL).  Digest SSP (or just “Digest Authentication”) was developed to aid in passing credentials to web applications (e.g. a company intranet site).  Second, CredSSP also relies on the plain-text instance of your password in LSASS.  Rather than authenticating a user against web applications, CredSSP’s job is to authenticate you against Terminal Services / RDP.  Even worse, if you authenticate to a remote computer, it’s very likely that you are leaving a plain-text copy of your password there as well!  Gee, thanks, CredSSP!  CredSSP is also used for remote Powershell.  There may be other SSPs that attempt to store and access plain-text copies of our passwords too.  They will place the password in LSASS, so our ultimate goal for this method will be to restrict LSASS’s ability to store passwords in plain-text (or to restrict SSPs from doing this).  More on that in a minute…

How It’s Exploited

Tools

There are a couple of well-known tools for reading the contents of LSASS in memory.

  • Windows Credential Editor (often referred to as just “wce”).
  • Mimikatz

Reading memory is obviously not proprietary.  Therefore, it should be assumed there are a number of other projects or malware packages that will do this.  Personally, I enjoy using Mimikatz.  This tool is a favorite of pentesters and excellent for blue teams to validate their mitigation techniques.  Benjamin Delpy (creator of Mimikatz) is a fantastic researcher.

Prerequisites

  • Tool of your choice (Mimikatz)
  • Access to target system (either remote or physical access)
    • Local Administrator privilege is a requirement

Ease of Execution

This information is extremely easy to get.  Every beginning “hax0r” learns these techniques right out of the gate.

Execution

Once you execute mimikatz.exe on the target machine with local administrator privileges you simply need to issue two commands:

mimikatz # privilege::debug
mimikatz # sekurlsa::wdigest

The output will look something like this (I have sanitized it):

mimikatz # sekurlsa::wdigest
Authentication Id : 
Session : 
User Name : username
Domain : domain
Logon Server : domain controller
Logon Time : user's logon time
SID : user's SID
 wdigest :
 * Username : username
 * Domain : domain
 * Password : password (the fun part)  

 

This blog is not intended to teach people how to execute attacks; these facts are well documented.  It should be noted that this can be done remotely, quite easily, with tools like psexec and metasploit / meterpreter.
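On the defensive side, the sanitized output above is easy to triage programmatically during an audit.  Here is a minimal Python sketch that flags sessions whose wdigest entry held a plain-text password.  The field labels are an assumption matching the sample above; real mimikatz output varies by version, so treat this as a starting point:

```python
import re

def find_exposed_credentials(output):
    """Return (username, domain) pairs whose wdigest block held a
    plain-text password (anything other than "(null)")."""
    exposed = []
    entry = {}
    in_wdigest = False
    for raw in output.splitlines():
        line = raw.strip()
        if line.startswith("wdigest"):
            in_wdigest = True  # following "* Field : value" lines belong here
            entry = {}
            continue
        m = re.match(r"\*\s*(Username|Domain|Password)\s*:\s*(.*)", line)
        if in_wdigest and m:
            entry[m.group(1)] = m.group(2).strip()
            if m.group(1) == "Password":
                if entry["Password"] not in ("(null)", ""):
                    exposed.append((entry.get("Username"), entry.get("Domain")))
                in_wdigest = False  # block is done after the Password line
    return exposed

# Hypothetical sanitized capture, shaped like the sample in this post.
SAMPLE = """
wdigest :
 * Username : jdoe
 * Domain : CORP
 * Password : Summer2016!
wdigest :
 * Username : svc_backup
 * Domain : CORP
 * Password : (null)
"""
```

Running find_exposed_credentials(SAMPLE) returns [("jdoe", "CORP")] – only the session that actually leaked a password, which is the list you would chase down for remediation.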

Mitigation Steps

Disclaimer: I am not making any claims or guarantees with this info, nor its completeness.  Both the threat and the mitigation are dynamic and change day to day.  As of the writing of this post, these are considered published mitigation techniques.  Whether they are effective or not is debatable.

Because tools like Mimikatz are post-exploitation and require local administrator rights before they are useful, it’s safe to assume that most of these security features can be trivial to undo.  For example, if someone has managed to acquire local administrator rights on a system, it’s trivial to make registry changes.  As we will see below, Microsoft has included registry settings that control whether an SSP is allowed to store credentials in plain-text or not.  Same for RDP security.  It’s easy to silently change these things, even if they are being pushed from Group Policy.

Patching

There are patches for this problem.  I will try to summarize here, but suffice it to say it’s a little confusing.  Microsoft has a good rundown under Security Advisory 2871997.  In Windows 8.1 and Server 2012 R2, Microsoft removed SSPs’ ability to store plain-text passwords in LSASS; you must explicitly modify these systems to permit it.  They also created a new switch in the RDP client, /RestrictedAdmin, which prevents your credentials from being delegated to (and cached on) the remote host.  Excellent!  But most organizations are still using Windows 7.  Therefore, Microsoft backported all of this to previous versions of Windows and released it as a patch under KB2871997 (applause).  Subsequent patches were released to provide support for modifying the restrictedAdmin feature via registry (CredSSP).

It gets better… the backported operating systems (e.g. Windows 7) left wdigest (the Digest Authentication SSP) alone.  While all other SSPs are no longer allowed to store plain-text passwords in the LSA, Digest still can.  An additional registry entry must be made on these legacy operating systems to prevent Digest SSP from continuing to operate in its less secure mode!  The reason for this is that Microsoft did not want to break customers’ applications, so they offered a neutered solution by forcing those who want to be more strict to “opt in”.  It is for this reason that I believe there are thousands of credentials still exposed on patched workstations.  That is nuts!  I will cover how to fix this under the Registry sub-heading.

Registry

Make sure your applicable systems have the following registry values.  I know this for certain from our own testing: a patched Windows 7 system will still expose the LSA plain-text passwords, and it will continue to do so until you update the registry and reboot.

  • HKLM\System\CurrentControlSet\Control\SecurityProviders\WDigest\UseLogonCredential=0
    • This registry entry should be added to all operating systems prior to Windows 8.1 and Windows Server 2012 R2 after patch KB2871997 has been installed.  (Note the value name is UseLogonCredential, with no trailing “s”.)
  • HKLM\System\CurrentControlSet\Control\Lsa\DisableRestrictedAdmin=0
    • This registry setting should be enabled where applicable – you should review the Security Advisory carefully.
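If you want to stage these values outside of Group Policy (e.g. for lab testing), they can be expressed as a .reg file.  This is a sketch based on the values above; test it in your own environment before deploying anything:

```
Windows Registry Editor Version 5.00

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\SecurityProviders\WDigest]
"UseLogonCredential"=dword:00000000

[HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Lsa]
"DisableRestrictedAdmin"=dword:00000000
```

In production you would push these via Group Policy so that an attacker’s silent local change gets reverted on the next policy refresh.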

Windows 10 Advancements

This post has been a work in progress for the last 2 years or so.  As I am finally publishing on July 19th, 2016, there are important updates available related to mitigating these attacks – specifically, Windows 10 Isolated User Mode.  Rather than cover it in detail here, I will provide links to a few educational resources:

Use Protected Users Group

Microsoft created a new “Protected Users” group, for which LSASS is not allowed to store passwords, let alone hashes!  We will discuss this group more in future posts, but suffice it to say that in a Windows Server 2012 R2 functional-level domain it is critical that you add the domain users who are granted local administrator rights on systems to the Protected Users group!  There are a few more domain settings that I will cover when we review the hash exploits, as they relate to storing only secure hashes so that passwords cannot be “cracked”.  More on that later.

Additional Best Practices

Absolutely no one in your organization should be running as a local administrator.  One pain with accessing LSA is that it requires administrator privileges.  That is a significant roadblock – enough so that most pentesters will look at alternative approaches on the network for gaining credentials other than reading LSA.  Additionally, domain administrators should never be allowed to log into any system other than a domain controller.  One of the easiest ways to protect passwords is to only use them where necessary.  God forbid we have an unpatched system containing plain-text credentials for a domain administrator!  In my environment, domain admins cannot even use RDP to connect to domain controllers; they must connect directly to the console (virtualization makes this less annoying).

Leave the built-in local Administrator account disabled (if it’s not, then disable it).  Microsoft added a lot of security to the local Administrators group, preventing various pivot techniques for its members.  Except they allowed one exception: the built-in local Administrator account.  So just disable it.  Go ahead.  What are you afraid of?  Seriously.  Do it.

Use Microsoft LAPS (or something similar) for local account password management.  I suppose if you must keep a local Administrator account handy, then at the very least use unique passwords for each local account.  If the passwords for every built-in Administrator account in your Windows network are the same, then pivoting from machine to machine is trivial for an attacker.  It’s only a matter of time before they find a hash of more value somewhere and escalate to a domain account or, worse, a domain administrator.  An alternative tool: SHIPS by TrustedSec.
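The idea behind LAPS can be illustrated with a toy sketch: randomize each machine’s local Administrator password independently, so a hash stolen from one box is useless on the rest of the fleet.  This is purely illustrative – the hostnames are made up, and real LAPS stores each password in AD with ACL-restricted read access:

```python
import secrets
import string

# Character set for generated passwords: letters, digits, punctuation.
ALPHABET = string.ascii_letters + string.digits + string.punctuation

def random_local_admin_password(length=24):
    """Generate a cryptographically random password for one machine."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

# One independent password per host (hypothetical hostnames).
passwords = {host: random_local_admin_password()
             for host in ("WKS-001", "WKS-002", "DC-01")}
```

Because every host gets its own value, cracking or passing the hash from one machine no longer opens the others – which is exactly the pivot this section is trying to kill.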

Resources

Here is a list of links / resources that I found helpful.

Other Solutions

There are vendors out there that can protect the memory space of processes such as lsass.exe.  For example, CarbonBlack Enterprise Protect (formerly called Bit9).  Cb Protect has the ability to watch and restrict which areas of memory certain processes are able to access.  This solution is agent-based (it requires an agent on the endpoints).  Policies determine how the agent behaves, and in this case a policy can contain rules such as a “Memory Rule”.  At its most basic, the memory rule can block reads and writes in the memory space reserved for lsass.exe by any other process.  So quite literally, no other process can access the memory space of lsass.exe except lsass.exe itself.  This is a great mitigation.  CarbonBlack is not the only vendor that can do this; I believe most application whitelisting vendors offer this type of feature.  Again, this rule is a defense against post-exploitation abuses where someone is already on your systems (abusing existing tools or having run malware).  Application whitelisting is one step up in that it is designed to prevent the running of any untrusted binary/PE files.  In the event that this does not work as designed, a memory rule adds additional “post-exploitation” defense.


Feel free to comment.