Strider Honey Monkeys

August 23rd, 2011

The idea of a honeypot is simple – you place a server on the network (usually in the DMZ) to entice the bad guys to attack it, while your actual production boxes sit safely behind additional layers of security.  By distracting attackers with a baited system, you focus their collective bad-guy energy where it can't damage your production network.  If you're a security guru, honeypots can also be used to gather information about attackers and track their methods and strategies as they try to hack your systems.

Enter Microsoft.

Rather than placing a bait machine and waiting for someone to troll around and attack it, they created the Strider Honey Monkey Exploit Detection System.  The Honey Monkeys are a series of desktops, some patched and some intentionally vulnerable, that use an automated process to stroll leisurely around the internet – visiting sites where virus-fearing mortals would fear to tread.  Powered by the Strider program, the machines are configured to watch for malicious activity, such as changes to the system's registry, to determine which web sites are spreading malware.

Think about it – Microsoft has the ability to see how their operating systems perform when users are surfing the internet, whether on questionable sites or on seemingly innocent web pages.  The PCs emulate how humans navigate, so engineers can see which sites
propagate malware and then compare the damage between patched systems and unpatched, vulnerable machines.  Doing so
allows Microsoft to determine how their protective mechanisms and configurations are holding up (or not holding up) to new attack
strategies.   Gathering a wealth of data, Microsoft can also track the evolution of malware designs and counter them with proactive measures in their security updates and products.  They can spend time doing what most security administrators are trying to do on their networks – “protect the environment from the users.”

They’ve already used information from the project, which was announced back in 2005, to implement improvements in their web browser
(Internet Explorer) for Windows 7.  In addition to including a phishing filter, which catalogs known phishing sites and warns users of potential social engineering, they've modified how the browser handles malformed HTML pages to prevent malicious scripts from running automatically.

The enhanced security of Windows 7 proves Microsoft is focusing on emerging threats and security protection – a smart move in today's cybercrime-laden world.  Though I disagree with some of the settings and changes that recently locked me out of my system “for my own protection,” the majority of people out on the internet aren't nerdy women with a passion for security – they're technology-ignorant wanna-be script kiddies following their curiosity into unsafe web territories, and they will certainly benefit from the added lockdown.

The Terry Childs Incident

August 23rd, 2010

In July of 2008, Terry Childs, the San Francisco city network administrator, received a heads-up from some of his co-workers as he headed into a meeting with management: “We were just told you've been reassigned.”  Childs entered the meeting to find not only local and remotely dialed-in managers in attendance, but also a representative from Human Resources and a police officer (never a good sign, in my personal experience).

Management demanded Childs turn over the network passwords for the city's FiberWAN.  Childs refused and was subsequently arrested and held on $5 million bond for nearly two years as he awaited his day in court.  Childs argued that the city was creating a situation that would undoubtedly undermine the security mechanisms protecting the network.  He claimed the city employees asking for the passwords were unqualified to hold administrative access to the FiberWAN, so he refused to divulge them for 12 days before finally providing them to the city's mayor.

As a former systems administrator, I understand the sentiment – I truly do.  Mr. Childs worked on the network for many years, building what he believed to be a safe and secure entity that benefitted the citizens of San Francisco by keeping their infrastructure confidential, available and reliable.  The network was like his child – he'd spent countless hours watching it grow and mature, and likely felt a keen sense of pride in what he and his team had achieved.  But in truth, the FiberWAN was not Mr. Childs's dependent – he was, at best, a “foster parent” or a “nanny” to the city's infrastructure.  It didn't belong to him, and he didn't have the right to lock the city out of its own equipment, regardless of his motive (and I make no case either way regarding his intent being noble or vindictive).

A jury agreed and found Mr. Childs guilty of violating a California statute on denial-of-service attacks.  This week Mr. Childs was sentenced to four years in prison for his actions and will likely be ordered to pay a substantial amount of restitution to the city.  The real question, in my opinion, isn't whether Mr. Childs is guilty of criminal mischief – but rather, what in the world has the city of San Francisco done in the past two years to overcome the complete lack of processes and procedures that allowed Mr. Childs to sabotage the network with such ease?  Mr. Childs had singular access to the network for a considerable length of time before the incident occurred, yet no actions were taken to correct this.  (It reminds me a little of the BP oil spill – you can't honestly expect me to believe that NOBODY had the foresight to say, “Hey!  Potential disaster here!  Shouldn't we have a backup plan?”)

No security system is flawless – and no administrator is incorruptible. The policies we put in place for our security program MUST take into consideration that every person that interacts with the network is a potential vulnerability. While Mr. Childs could definitely have handled the situation more maturely, the city also needs to be held accountable by its citizens who should be demanding an accounting of how the city is preventing this situation from occurring again.

Where did Nikki go?

February 21st, 2010

Nikki is on hiatus finishing the last requirements for her Master's degree in Information Systems.

Working on some great stuff, though – including:  1) papers on Quantum Cryptography, Business Continuity Planning, and Risk Assessment;  2)  trying to garner an interview with my hero Ron Rivest; and 3) creating more practice questions and relevant content for my CISSP students.   (STUDENTS – web content is found here:

Also – I'll be reporting on my White House tour scheduled for next weekend.  That is undoubtedly going to be an amazing opportunity to see national security in action.

Security through Absurdity

January 16th, 2010

Every once in a while you come across a situation that just makes you shake your head and wonder what people are thinking when it comes to security.  A few years ago I was working on a security assessment with my partner, and per the terms of our contract we ran L0phtCrack to see how well users were selecting robust passwords.  Since there was no formal password policy in place, it wasn't surprising to find that more than 60% of the company was using either a blank password or the word “password.”  The program cracked those in less than a second – which caused more than a little concern to the owner/executive watching us work.  (The term “freaked out” would be appropriate here, if it didn't seem a tad disrespectful.)
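
For the curious, here's a minimal Python sketch (not L0phtCrack itself) of why blank and dictionary passwords fall in under a second.  The usernames, hashes, and wordlist are all made up for illustration, and real Windows password hashes use different algorithms, but the shape of the attack is the same:

```python
import hashlib

# Hypothetical stolen hash database: username -> unsalted MD5 digest.
# (LM/NTLM hashes differ in detail, but the attack works the same way.)
stolen = {
    "alice": hashlib.md5(b"password").hexdigest(),
    "bob":   hashlib.md5(b"").hexdigest(),        # blank password
    "carol": hashlib.md5(b"Tr0ub4dor&3").hexdigest(),
}

wordlist = ["", "password", "letmein", "123456"]  # tiny dictionary

cracked = {}
for user, digest in stolen.items():
    for guess in wordlist:
        # Hash each guess and compare against the stolen digest.
        if hashlib.md5(guess.encode()).hexdigest() == digest:
            cracked[user] = guess
            break

print(cracked)  # alice and bob fall instantly; carol's password survives this wordlist
```

Each guess costs one hash computation – millions per second on ordinary hardware – which is why an unenforced password policy is no policy at all.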

Of course, POST assessment we worked with him to implement a password policy, a user training session, and complexity requirements that ensured users were choosing alpha-numeric passwords and changing them on a regular basis.  However, the owner/executive (whom I will call Bob, since that is his name) was still wary that users weren't making good choices.  He asked us again to run L0phtCrack to provide a list of the passwords that users had selected.

Now, I understand Bob's concern – however, let's consider the potential risks involved.  First of all, we've completed the assessment and report and are now implementing recommendations in the consulting phase of the project – therefore we are no longer covered under the protection we had during the actual assessment.  To proceed, we'd first need to gain written permission for the action.  But more importantly, if we were to provide a list of the usernames and passwords as requested, we'd completely eliminate the concept of accountability.  Knowledge of this list would mean users could claim they hadn't logged in with their passwords – someone with access to the password list must have logged in and “fudged” the books or surfed inappropriate material on company computers.  Bob's logic was that if nobody knew about the list except the three of us, nobody could claim misuse.

(Big red flags should be waving before your eyes right about now.)

Let's pretend for a moment that we acquiesced and granted the request, providing Bob with a list of all usernames and passwords.  Two weeks later, the computer Bob's brother works on is found to have accessed child pornography.  Bob knows HE didn't log in, and he knows his brother couldn't possibly have done it.  That leaves two people who have previously accessed the system remotely (as part of the assessment) and who theoretically could know the usernames and passwords.  You may as well paint a target on your t-shirt with the word “sucker” across the top.

Think I’m being paranoid?   It’s possible, but thankfully my partner agreed with me and we chose not to provide the list and to explain in great detail why it was unnecessary based on the complexity requirements we implemented.  We also explained the legal risk to him, which I’m hoping he appreciated once he calmed down from all the obscenities he was slinging at us. 

So to recap – the users hated us for making them learn how to create new complex passwords; the owner hated us for not giving him an unethical list of passwords; the hackers hated us for closing down all the blatant security holes that said, “come on in.”     At the end of the day, we still got paid – I’d call that a good day’s work in penetration testing/security assessments.

Hashing vs. Digital Signatures

December 30th, 2009

Okay, I want to make a quick clarification – there's a difference between using a simple hash and a digital signature.  We use hashing as a mechanism to ensure message integrity – by running the one-way function on each end, we can detect modification or corruption of the message (whether innocent or intentional) if there is a difference in the value.  We can also encrypt that hashed value using a symmetric (shared-key) process to protect the integrity of the HASH, which in turn validates the integrity of the MESSAGE.
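
A quick sketch of the difference using Python's standard library.  The message and key here are hypothetical, and the HMAC construction shown is one standard keyed-hash technique that achieves the “protect the hash with a shared key” goal:

```python
import hashlib
import hmac

message = b"Transfer $100 to account 42"
shared_key = b"secret-shared-key"  # hypothetical key both ends already hold

# Bare hash: detects corruption, but anyone can recompute it after
# altering the message -- so it proves nothing against a deliberate attacker.
digest = hashlib.sha256(message).hexdigest()

# Keyed hash (HMAC): recomputing the tag requires the shared key,
# so it protects the integrity of the hash -- and thus the message.
tag = hmac.new(shared_key, message, hashlib.sha256).hexdigest()

tampered = b"Transfer $900 to account 42"
print(hmac.new(shared_key, tampered, hashlib.sha256).hexdigest() == tag)  # False
```

Without the key, an attacker who alters the message cannot produce a matching tag, which is exactly the property the bare hash lacks.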

A digital signature encrypts the hash value using an asymmetric key – the sender's private key.  In our well-designed PKI world, we know that NOBODY else has a copy of our private key.  We're the ONLY ones in possession of it, so we're theoretically the only ones who can encrypt the hash in this way.  The recipient decrypts it using the sender's public key, thereby validating not only that no alteration or modification has taken place but ALSO that the message came from the specified sender.
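
To make that round trip concrete, here's a toy textbook-RSA sketch in Python.  The tiny primes make it wildly insecure, and real signature schemes add padding and other safeguards, but it shows the “encrypt the hash with the private key, decrypt with the public key” mechanic:

```python
import hashlib

# Toy textbook-RSA key pair (tiny primes -- illustration only, never do this).
p, q = 61, 53
n = p * q            # 3233, the public modulus
e = 17               # public exponent
d = 2753             # private exponent: (e * d) % ((p-1)*(q-1)) == 1

def sign(message: bytes) -> int:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(h, d, n)               # "encrypt" the hash with the private key

def verify(message: bytes, signature: int) -> bool:
    h = int.from_bytes(hashlib.sha256(message).digest(), "big") % n
    return pow(signature, e, n) == h  # "decrypt" with the public key

msg = b"I really did send this"
sig = sign(msg)
print(verify(msg, sig))                   # True
print(verify(b"I never sent this", sig))  # fails: the hashes (mod n) differ
```

Only the holder of d can produce a signature that "decrypts" correctly under e, which is what gives us origin authentication on top of integrity.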

The Birthday Attack

December 28th, 2009

My favorite security topics revolve around espionage and attacks, likely because I fancy myself an internationally renowned spy – a female James Bond – who took a wrong turn and ended up doing investigations for a bank.  Thus, I must lead my vicarious life of intrigue out on the web until the FBI sees fit to hire me on full time as a security goddess. 

Today I'm going to begin this blog by discussing one of my favorite brute force attacks.  A BIRTHDAY ATTACK is an attempt to find an altered message that produces the same digest as the original – letting us “prove” that a message is legitimate after we have nefariously altered it for our own evil purposes.

When we digitally sign an email message, we run a one-way hash function on the message and encrypt the resulting value with our private key.  When the message is received, the recipient decrypts the signature with our public key and runs the same one-way hash calculation; if the values match, they know the message came from us and has been unaltered since its origin.  The one-way hash is a mathematical calculation that is simple to compute in one direction and computationally infeasible to run in reverse.  For example, if I were to multiply 68,435,299 x 391,001,278, my computer can easily calculate the correct total.  However, given only the total 26,758,289,369,312,122, how easy would it be to deduce the original two integers?  Not impossible, but HIGHLY unlikely.

The birthday attack is based on the paradox that in a room of 23 people, there is a (slightly better than) 50% probability that two people in the group share the same birthday (month/day).  If you were to attempt to find someone with the same birthday as you, however, you'd need 253 people for the same likelihood.  Finding two people who share any birthday is equivalent, in this situation, to finding two messages with the same message digest (the final outcome of the hashing algorithm).
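
You can check both probabilities with a few lines of Python (assuming 365 equally likely birthdays and ignoring leap years):

```python
# Probability that, among k people, at least two share a birthday.
def any_shared(k: int, days: int = 365) -> float:
    p_distinct = 1.0
    for i in range(k):
        p_distinct *= (days - i) / days   # each new person avoids all prior birthdays
    return 1.0 - p_distinct

# Probability that at least one of k people shares *your* birthday.
def shares_yours(k: int, days: int = 365) -> float:
    return 1.0 - (1.0 - 1.0 / days) ** k

print(f"{any_shared(23):.3f}")     # 0.507 -- just past 50% with only 23 people
print(f"{shares_yours(253):.3f}")  # 0.500 -- matching a FIXED birthday needs 253
```

The gap between 23 and 253 is the whole point of the attack: colliding with *any* prior digest is vastly cheaper than matching one specific digest.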

In a birthday attack, I want to alter a message after it has been sent.  Why?  Let's assume that ACME Products has extended a job offer to Jenny and the message was only signed, not encrypted – leaving the plain text available to a typical packet sniffer.  Jenny is excited about the work but disappointed in the salary offered, so she decides to take some less-than-ethical initiative.  She obviously doesn't have the private key with which the message was originally signed, so she alters the message slightly – let's say by increasing the salary offer from $60,000 to $80,000 per annum.  She takes the altered message and re-runs it through the hashing algorithm, only to find that the message digest (the final result) has changed – as expected.  The message digest has a fixed length determined by the strength of the hash, so Jenny understands that there is a finite number of digest values that can occur.  She alters the message again – this time aiming for $85,000 per annum.  She checks the new message digest against the original value and still finds no match.  Next she tries $85,000 per YEAR instead of per annum and runs the message again – but no match.  After multiple tries (usually through the use of a script or program written for these types of attacks), Jenny eventually finds an alteration that produces the SAME message digest as the original message.  VOILA!  She has a new offer letter from ACME that is digitally signed by the hiring director offering her a salary of $465,000 per month.  Success!

Finding two messages that produce the same message digest (hashed value) is known as a collision.  The longer the digest, the more difficult the attack becomes (a 160-bit hash vs. a 256-bit or 512-bit hash, for example).
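
To see why digest length matters, here's a Python sketch that hunts for a collision against a deliberately tiny 24-bit digest (just the first three bytes of SHA-256).  With a b-bit digest, the birthday bound says to expect a collision after roughly 2^(b/2) attempts – about 4,096 here – which is exactly why real digests are 160 bits and up:

```python
import hashlib
from itertools import count

def tiny_digest(msg: bytes) -> bytes:
    # Truncated 24-bit "hash" -- weak on purpose, for demonstration.
    return hashlib.sha256(msg).digest()[:3]

seen = {}
for i in count():
    # Jenny's trick: churn out harmless-looking variants of the message.
    msg = b"Salary offer: $%d per annum" % i
    d = tiny_digest(msg)
    if d in seen and seen[d] != msg:
        print("collision after", i + 1, "tries:", seen[d], "and", msg)
        break
    seen[d] = msg
```

Against this 24-bit digest the loop finishes in moments; against SHA-256 itself, the same strategy would need on the order of 2^128 attempts.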

So how do I protect against birthday attacks?  Encrypting the body of the message provides additional security because it would require the attacker to first decrypt the message before they could begin altering it.  Another suggestion would be to include a digital timestamp in the message – the process of finding a collision takes time, and a timestamp would be a clear indication that the message may have been altered.  The issue here would be finding a reliable method of third-party time validation, as timestamps can also be altered or modified by attackers.

For myself, my biggest birthday attack concern of the day is whether or not someone is going to tip the waitress off so I’m forced to endure a half-baked version of “Restaurant Happy Birthday.”   I’m off to party…

Your comments, thoughts, and experiences are always welcome on this blog – discussion and debate are welcome, provided you remain respectful of everyone’s viewpoint.