Web Application vulnerabilities that we should not overlook

The following are Web application vulnerabilities that we’ve all likely overlooked, yet can’t afford to miss.

Files that shouldn’t be publicly accessible
Using a Web mirroring tool such as HTTrack, mirror your site(s) and manually peruse the files and folders downloaded to your local system. Check for FTP log files, Web statistics (such as Webalizer) log files, and backup files containing source code, comments, and other information that the world doesn’t need to see. You can also use Google hacking tools such as SiteDigger and Gooscan to look for sensitive information you may not have thought about. You’ll likely find more files and information through manual scans than Google hacks, but do both to be sure.
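
If you want to script a quick first pass before the manual review, something like the following Python sketch can probe for common leftovers. The host name and file list here are just placeholders – adjust them to whatever your mirroring and Google hacking turn up:

```python
# Minimal sketch: probe a site for leftover files that are often left exposed.
# The host and candidate list are assumptions -- extend both for your environment.
import requests

BASE = "https://www.example.com"  # hypothetical site you're authorized to test
CANDIDATES = [
    "/ftp.log", "/usage/",           # FTP logs and Webalizer-style statistics
    "/backup.zip", "/site.bak",      # backup archives
    "/index.php.bak", "/.htaccess",  # source and config leftovers
]

for path in CANDIDATES:
    resp = requests.get(BASE + path, timeout=10, allow_redirects=False)
    if resp.status_code == 200:
        print(f"Possible exposure: {BASE + path} ({len(resp.content)} bytes)")
```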

Functionality that’s browser specific
With all the standards that exist for HTTP, HTML and browser compatibility, you’ll undoubtedly witness different application behavior using different browsers. I see things like form input, user authentication and error generation handled one way in Firefox and yet another in Internet Explorer. I’ve even seen different behavior among varying versions of the same browser.

I’ve also come across security issues when using an unsupported browser. Even if you’re not supposed to use a certain browser, use it anyway and see what happens. So, when you’re digging in and manually testing the application, be sure to use different browsers – and browser versions, if you can – to uncover some “undocumented features”.
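
You can’t fully automate cross-browser testing, but a quick script can at least flag server-side branching on the User-Agent header before you dig in by hand. A minimal sketch – the URL and User-Agent strings are placeholders:

```python
# Minimal sketch: fetch the same page with different User-Agent strings and
# diff the results. Real browser testing still has to be done by hand; this
# only flags obvious server-side branching on the UA header.
import requests

URL = "https://www.example.com/login"  # hypothetical page under test
USER_AGENTS = {
    "firefox": "Mozilla/5.0 (Windows NT 10.0; rv:115.0) Gecko/20100101 Firefox/115.0",
    "ie6":     "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)",
    "unknown": "TotallyUnsupportedBrowser/0.1",
}

baseline = None
for name, ua in USER_AGENTS.items():
    body = requests.get(URL, headers={"User-Agent": ua}, timeout=10).text
    if baseline is None:
        baseline = body
    elif body != baseline:
        print(f"{name}: response differs from baseline -- inspect manually")
```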

Flaws that are user-specific
It’s imperative to go beyond what the outside world sees and test your Web applications as an authenticated user. In fact, you should use automated tools and manual checks across every role or group level whenever possible. I’ve found SQL injection, cross-site scripting (XSS), and other serious issues while logged in as one type of user that didn’t appear at a lower privilege level and vice versa. You’ll never know until you test.
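
One practical approach is to replay the same request under each role’s session and compare what comes back. In this sketch the endpoint, the probe and the session cookie values are all hypothetical – you’d log in as each role and capture the cookies yourself first:

```python
# Minimal sketch: replay one request under each role's session and compare.
# Cookie values and the endpoint are placeholders for sessions you establish
# yourself by logging in as each user.
import requests

URL = "https://www.example.com/reports?id=1%27"  # hypothetical endpoint + probe
SESSIONS = {
    "admin":  {"PHPSESSID": "admin-session-id-here"},
    "editor": {"PHPSESSID": "editor-session-id-here"},
    "viewer": {"PHPSESSID": "viewer-session-id-here"},
}

for role, cookies in SESSIONS.items():
    resp = requests.get(URL, cookies=cookies, timeout=10)
    # A database error surfacing for only one role is worth a manual look.
    flagged = "sql" in resp.text.lower() or "odbc" in resp.text.lower()
    print(f"{role}: HTTP {resp.status_code}, possible DB error: {flagged}")
```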

Operating system and Web server weaknesses
It’s one thing to have a solid Web application, but keeping the bad guys out of the underlying operating system, Web server and supporting software is quite another. It’s not enough to use automated Web vulnerability scanners and manual tests at the application layer. You’ve got to look at the foundation of the application and server as well. I often see missing patches, unhardened systems and general sloppiness flying under the radar of many security assessments. Use tools such as Nessus or QualysGuard to see what can be exploited in the OS, Web server or something as seemingly benign as your backup software. The last thing you want is someone breaking into your otherwise bulletproof Web application at a lower level, obtaining a remote command prompt for example, and taking over the system that way.
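
Even before you fire up a scanner, a quick banner grab will show you what version information the server volunteers to anyone who asks. A minimal sketch, with a placeholder host:

```python
# Minimal sketch: grab the HTTP response headers to spot version disclosure
# before running a full scanner such as Nessus. Host and port are placeholders.
import socket

HOST, PORT = "www.example.com", 80  # hypothetical target

with socket.create_connection((HOST, PORT), timeout=10) as sock:
    sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + HOST.encode() + b"\r\n\r\n")
    reply = sock.recv(4096).decode(errors="replace")

for line in reply.splitlines():
    if line.lower().startswith(("server:", "x-powered-by:")):
        print(line)  # e.g. "Server: Apache/2.2.3" -- a patch-level clue for attackers
```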

Form input handling
One area of Web applications where people rely too heavily on automated security scanning tools is forms. The assumption is that automated tools can throw anything and everything at forms, testing every possible scenario of field manipulation, XSS and SQL injection. That’s true, but what tools can’t do is apply expertise and context to how the forms actually work and how they can be manipulated by a typical user.

Determining exactly what type of input specific fields will accept – combined with the other options presented in radio buttons and drop-down lists – is something you can analyze only through manual assessment. The same goes for what happens once the form is submitted, such as the errors returned and delays in the application. This can prove very valuable in the context of typical Web application usage.
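
To make those manual observations repeatable, you can script a handful of hand-picked probes and record the same cues you’d watch for anyway: status code, response size and timing. The field names and URL below are assumptions about a hypothetical signup form:

```python
# Minimal sketch: submit hand-picked values to one form field and note status,
# length and response time. Field names and URL are placeholders.
import time
import requests

URL = "https://www.example.com/signup"  # hypothetical form action
PROBES = ["alice", "a" * 5000, "<script>alert(1)</script>", "' OR '1'='1"]

for value in PROBES:
    start = time.time()
    resp = requests.post(URL, data={"username": value, "plan": "basic"}, timeout=30)
    elapsed = time.time() - start
    print(f"{value[:25]!r:30} HTTP {resp.status_code} "
          f"{len(resp.content):6d} bytes {elapsed:.2f}s")
```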

Application logic
As with form manipulation, analyzing your Web application’s logic with some basic poking and prodding will uncover as many vulnerabilities as any automated testing tool, if not more. The possibilities are unlimited, but some weak areas I’ve found revolve around the creation of user accounts and account maintenance. What happens when you add a new user? What happens when you add that same user again with something slightly changed in one of the sign-up fields? How does the application respond when an unacceptable password length is entered after the account is created?
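
The duplicate-account question is easy to script. This sketch registers a hypothetical test user, re-submits the registration with one field tweaked, and compares the responses – the endpoint and field names are placeholders:

```python
# Minimal sketch: create an account, then re-submit it with one field tweaked,
# and compare what the application says. Endpoint and fields are placeholders.
import requests

URL = "https://www.example.com/register"  # hypothetical registration endpoint
first  = {"user": "testuser1", "email": "testuser1@example.com", "pw": "Passw0rd!"}
second = dict(first, email="TESTUSER1@example.com")  # same user, case-tweaked email

r1 = requests.post(URL, data=first, timeout=10)
r2 = requests.post(URL, data=second, timeout=10)
print("first attempt :", r1.status_code)
print("second attempt:", r2.status_code)
# Two identical success responses may mean duplicate or shadow accounts are possible.
```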

You should also check the headers of emails the application sends to users. What can you discover? It’s very likely the internal IP address – or the addressing scheme of the entire internal network – is divulged. Not necessarily something you want outsiders knowing.
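
Checking for that leak can be as simple as scanning the Received headers of a saved message for private address ranges. A minimal sketch – the filename is a placeholder for an email your application actually sent:

```python
# Minimal sketch: scan a saved message (e.g. a signup confirmation) for
# RFC 1918 addresses leaking in Received headers. The filename is a placeholder.
import re
from email import message_from_binary_file

PRIVATE_IP = re.compile(r"\b(10\.\d+|172\.(1[6-9]|2\d|3[01])|192\.168)\.\d+\.\d+\b")

with open("signup-confirmation.eml", "rb") as fh:  # hypothetical saved email
    msg = message_from_binary_file(fh)

for value in msg.get_all("Received", []):
    for match in PRIVATE_IP.finditer(value):
        print("Internal address leaked:", match.group())
```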

Also, look at general application flows, including creation, storage and transmission of information. What’s vulnerable that someone with malicious intent could exploit?

Authentication weaknesses
It’s easy to assume that basic form or built-in Web server authentication is going to protect the Web application, but that’s hardly the case. Depending on the authentication coding and specific Web server versions, the application may behave in different ways when it’s presented with login attacks – both manual and automated.

How does the application respond when invalid user IDs and passwords are entered? Is the user told specifically what’s incorrect? That response alone gives a malicious attacker a leg up, telling him whether to focus on attacking the user ID, the password, or both. What happens when nothing is entered? How does the authentication process work when nothing but junk is entered? How do the application, server and Internet connection stand up when a dictionary attack is run using a tool such as Brutus? Do log files fill up? Is performance degraded? Do user accounts get locked after so many failed attempts? These all affect the security and availability of your application and should be tested accordingly.
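
If you want to watch the lockout and timing behavior yourself before running a full tool like Brutus, a scaled-down dictionary test is easy to sketch. The URL, field names and wordlist are all placeholders:

```python
# Minimal sketch: run a small dictionary against the login form and watch how
# the application responds -- error wording, lockouts, slowdowns. URL, field
# names and wordlist are placeholders; tools like Brutus automate the real thing.
import time
import requests

URL = "https://www.example.com/login"  # hypothetical login endpoint
WORDS = ["password", "letmein", "123456", "admin", "qwerty"]

for pw in WORDS:
    start = time.time()
    resp = requests.post(URL, data={"user": "admin", "pass": pw}, timeout=30)
    print(f"{pw:10} HTTP {resp.status_code} in {time.time() - start:.2f}s")
    if "locked" in resp.text.lower():
        print("Account lockout triggered -- note the threshold")
        break
```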

Sensitive information transmitted in the clear
It seems simple enough to just install a digital certificate on the server and force everyone to use Secure Sockets Layer (SSL). But are all parts of your application using it? I’ve come across configurations where certain parts of applications used SSL but others did not. Lo and behold, the areas that weren’t using SSL ended up transmitting login credentials, form input and other sensitive information in the clear for anyone to see. It’s not a big deal until someone on your network loads up a network analyzer or a tool such as Cain, performs ARP poisoning and captures all HTTP traffic flowing across the network – passwords, session information and more. There’s also the inevitable scenario of employees working from home or a coffee shop on an unsecured wireless network. Anything transmitted via unsecured HTTP is fair game for abuse. Make sure everything in the application is protected via SSL – not just the seemingly important areas.
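
One quick check is to request each known page over plain HTTP and see whether the server forces you back to SSL. The page list below is a placeholder – feed it your real site map:

```python
# Minimal sketch: request pages over plain HTTP and flag any that answer
# without redirecting to HTTPS. The path list is a placeholder.
import requests

PAGES = ["/", "/login", "/account", "/search", "/help"]  # hypothetical paths

for path in PAGES:
    resp = requests.get("http://www.example.com" + path,
                        timeout=10, allow_redirects=False)
    redirected = resp.headers.get("Location", "").startswith("https://")
    if resp.status_code == 200 and not redirected:
        print(f"{path}: served in the clear -- anything entered here is sniffable")
```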

Possible SQL injections
When using automated Web application vulnerability scanners, you may come across scenarios where possible SQL injections are discovered while logged in to the application. You may be inclined to stop or not know how to proceed, but I encourage you to dig in deeper. The tool may have found something but wasn’t able to actually verify the problem due to authentication requirements, session timeouts or other limitations. A good SQL injection testing tool will provide the ability to authenticate users and then perform its tests. If the application is using form-based authentication, don’t fret. You can capture the original SQL injection query, paste the entire HTTP request into a Web proxy or HTTP editor, and submit it within a Web session you’re already authenticated to. It’s a little extra effort, but it works, and you may find your most serious vulnerabilities this way.
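
Here’s a rough sketch of that replay technique using a scripted session instead of a proxy. The session cookie, endpoint and payload are all placeholders you’d lift from your own scan:

```python
# Minimal sketch: re-send a scanner's SQL injection probe inside a session you
# authenticated by hand, mimicking the proxy/HTTP-editor replay described above.
# The cookie, endpoint and payload are placeholders from a hypothetical scan.
import requests

session = requests.Session()
session.cookies.set("PHPSESSID", "your-authenticated-session-id")  # captured manually

URL = "https://www.example.com/orders"  # hypothetical vulnerable endpoint
PAYLOAD = {"id": "1' AND '1'='1"}       # probe copied from the scanner's finding

resp = session.get(URL, params=PAYLOAD, timeout=10)
print(resp.status_code)
if any(s in resp.text.lower() for s in ("sql syntax", "odbc", "unclosed quotation")):
    print("Database error surfaced -- the finding is likely real; verify manually")
```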

False sense of firewall or IPS security
Many times firewalls or intrusion detection/prevention systems (IDS/IPS) will block Web application attacks. Validating that this works is good, but you also need to test what happens when such controls aren’t in place. Imagine the scenario where an administrator makes a quick firewall rule change, or the protective mechanisms are disabled or temporarily taken offline altogether. You’ve got to plan for the worst-case scenario. Disable your network application protection and/or set up trusting rules and see what happens. You may be surprised.
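
One way to make that comparison concrete is to record how a known-bad request fares with the controls in place, then re-run the identical probe once they’re disabled and diff the outcomes. Everything in this sketch is a placeholder, and it should only be run in an authorized test window:

```python
# Minimal sketch: send a request most IPSes will catch and record the outcome,
# so the same probe can be replayed after the protection is disabled.
# URL and payload are placeholders; run only with authorization.
import requests

URL = "https://www.example.com/search"       # hypothetical endpoint
ATTACK = {"q": "<script>alert(1)</script>"}  # signature an IPS should catch

resp = requests.get(URL, params=ATTACK, timeout=10)
print("With protection enabled:", resp.status_code)
# A 403 or reset here but a 200 with the payload reflected after the IPS is
# pulled means the application itself -- not the network control -- is the weak link.
```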

With all the complexities of our applications and networks, all it takes is one unintentional oversight for sensitive systems and information to be put in harm’s way. Once you’ve exhausted your vulnerability search using automated tools and manual poking and prodding, look a little deeper. Check your Web applications with a malicious eye – what would the bad guys do? Odds are there are some weaknesses you haven’t thought about.

6 thoughts on “Web Application vulnerabilities that we should not overlook”

  1. Thanks for the interesting article, and I agree with all the points you mention concerning the effect of application vulnerabilities on website security.

    We chose to use the service of a SaaS online website vulnerability scan from http://www.gamasec.com, who work deeply on the application vulnerability layers.

    The http://www.gamasec.com monthly reports provide a clear understanding of the vulnerabilities found and good practical recommendations.

    We also added the GamaSec security seal to our website in order to show our customers that we are taking care of our website security.

    Thanks, D

    • Didier,

      I understand that GamaSec could automatically navigate through the website/web application and find vulnerabilities, but I can’t really rely on such automated tools. I would prefer a human being doing it. I don’t have anything against automated tools such as GamaSec, but they cannot find the complex scenarios through which vulnerabilities can be exposed.

      • intekhabsadekin,

        You’re right that a manual pen test done by a human is always better, but the cost runs from $3k to $5k, while the http://www.gamasec.com scan solution provides us results with 12 monthly reports a year for only $600 – meaning $50 per report.

        We usually order a manual scan for our site every 18 months and http://www.gamasec.com provides the rest, especially the application vulnerability updates and the checks for our technical team.
        Didier

      • That means you do manual testing every 18 months and the rest of the time you depend on automation? Correct me if I am wrong, but if that is so, don’t you think that after a while the cost of automated testing would be much more than manual testing? Automation might save 20% – 30% of your time, but not all of it, and it would leave out a lot of things that surface if and only if manual testing is done. So, in my honest opinion, manual testing would be much more efficient and cost-effective in the long run as opposed to automated testing.
