Web Application vulnerabilities that we should not overlook

The following are Web application vulnerabilities that we’ve all likely overlooked, yet can’t afford to miss.

Files that shouldn’t be publicly accessible
Using a Web mirroring tool such as HTTrack, mirror your site(s) and manually peruse the files and folders downloaded to your local system. Check for FTP log files, Web statistics log files (such as those generated by Webalizer), and backup files containing source code and other comments that the world doesn’t need to see. You can also use Google hacking tools such as SiteDigger and Gooscan to look for sensitive information you may not have thought about. You’ll likely find more files and information through manual scans than through Google hacks, but do both to be sure.
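
If you want a quick scripted first pass alongside the mirroring step, something along these lines works. This is only a rough sketch, not a replacement for HTTrack or the Google hacking tools mentioned above; the base URL and the path list are made-up examples you’d swap for your own.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

public class SensitiveFileProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical site; only probe systems you are authorized to test.
        String base = "https://www.example.com";

        // Illustrative list of files and folders that often end up exposed.
        List<String> paths = Arrays.asList(
                "/webalizer/", "/logs/ftp.log", "/backup.zip",
                "/index.php.bak", "/.svn/entries", "/web.config.old");

        for (String path : paths) {
            URL url = new URL(base + path);
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("HEAD");   // we only care whether the file is there
            int code = conn.getResponseCode();
            // Anything other than a 404 deserves a closer manual look.
            System.out.println(code + "  " + url);
            conn.disconnect();
        }
    }
}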

Functionality that’s browser-specific
With all the standards that exist for HTTP, HTML and browser compatibility, you’ll undoubtedly witness different application behavior using different browsers. I see things like form input, user authentication and error generation handled one way in Firefox and yet another in Internet Explorer. I’ve even seen different behavior among varying versions of the same browser.

I’ve also come across security issues when using an unsupported browser. Even if you’re not supposed to use a certain browser, use it anyway and see what happens. So, when you’re digging in and manually testing the application, be sure to use different browsers – and different browser versions if you can – to uncover some “undocumented features”.

Flaws that are user-specific
It’s imperative to go beyond what the outside world sees and test your Web applications as an authenticated user. In fact, you should use automated tools and manual checks across every role or group level whenever possible. I’ve found SQL injection, cross-site scripting (XSS), and other serious issues while logged in as one type of user that didn’t appear at a lower privilege level and vice versa. You’ll never know until you test.
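
One way to make those per-role checks repeatable is to capture a session cookie for each role and replay the same privileged request with each of them. The sketch below assumes cookie-based sessions; the admin-only URL and the cookie values are placeholders.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.LinkedHashMap;
import java.util.Map;

public class RoleAccessCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical page that should only be visible to administrators.
        URL adminOnlyPage = new URL("https://www.example.com/admin/userlist");

        // Placeholder session cookies captured after logging in as each role.
        Map<String, String> sessionsByRole = new LinkedHashMap<String, String>();
        sessionsByRole.put("admin",   "JSESSIONID=replace-with-admin-session");
        sessionsByRole.put("manager", "JSESSIONID=replace-with-manager-session");
        sessionsByRole.put("basic",   "JSESSIONID=replace-with-basic-session");

        for (Map.Entry<String, String> entry : sessionsByRole.entrySet()) {
            HttpURLConnection conn = (HttpURLConnection) adminOnlyPage.openConnection();
            conn.setRequestProperty("Cookie", entry.getValue());
            conn.setInstanceFollowRedirects(false);   // a redirect to the login page is a "no"
            int code = conn.getResponseCode();
            // A 200 for a low-privilege role points to broken access control.
            System.out.println(entry.getKey() + " -> " + code);
            conn.disconnect();
        }
    }
}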

Operating system and Web server weaknesses
It’s one thing to have a solid Web application, but keeping the bad guys out of the underlying operating system, Web server and supporting software is quite another. It’s not enough to use automated Web vulnerability scanners and manual tests at the application layer. You’ve got to look at the foundation of the application and server as well. I often see missing patches, unhardened systems and general sloppiness flying under the radar of many security assessments. Use tools such as Nessus or QualysGuard to see what can be exploited in the OS, the Web server or something as seemingly benign as your backup software. The last thing you want is someone breaking into your otherwise bulletproof Web application at a lower level – obtaining a remote command prompt, for example – and taking over the system that way.

Form input handling
One area of Web applications where people rely too heavily on automated security scanning tools is forms. The assumption is that automated tools can throw anything and everything at forms, testing every possible scenario of field manipulation, XSS and SQL injection. That’s true, but what tools can’t do is apply expertise and context to how the forms actually work and how they can be manipulated by a typical user.

Determining exactly what type of input specific fields will accept, combined with the other options presented in radio buttons and drop-down lists, is something you’re only going to be able to analyze through manual assessment. The same goes for what happens once the form is submitted, such as the errors returned and delays in the application. This can prove to be very valuable in the context of typical Web application usage.
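
To keep that kind of manual form testing organized, I find it helps to replay a handful of hand-picked values against a single field and note how the status code and response time change. A rough sketch follows; the endpoint, the field name and the payloads are just illustrative starting points.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Arrays;
import java.util.List;

public class FormInputProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical form endpoint and field name; adjust both for your own application.
        URL formUrl = new URL("https://www.example.com/search");

        char[] filler = new char[5000];
        Arrays.fill(filler, 'A');
        List<String> testValues = Arrays.asList(
                "normal input",
                new String(filler),                  // an overly long value
                "<script>alert(1)</script>",         // a basic XSS probe
                "' OR '1'='1");                      // a basic SQL injection probe

        for (String value : testValues) {
            String body = "query=" + URLEncoder.encode(value, "UTF-8");

            long start = System.currentTimeMillis();
            HttpURLConnection conn = (HttpURLConnection) formUrl.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();
            int code = conn.getResponseCode();
            long elapsed = System.currentTimeMillis() - start;

            // Compare status codes, error messages and delays across the different inputs.
            System.out.println(code + " in " + elapsed + " ms for: "
                    + value.substring(0, Math.min(30, value.length())));
            conn.disconnect();
        }
    }
}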

Application logic
Similar to form manipulation, analyzing your Web application’s logic by some basic poking and prodding will uncover as many, if not more, vulnerabilities than any automated testing tool. The possibilities are unlimited, but some weak areas I’ve found revolve around the creation of user accounts and account maintenance. What happens when you add a new user? What happens when you add that same user again with something slightly changed in one of the sign-up fields? How does the application respond when an unacceptable password length is entered after the account is created?
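
If you want to make a check like the duplicate-account question repeatable, the same sort of replay works: submit the sign-up form twice, the second time with a slight variation, and compare the responses. The endpoint, field names and the trailing-space trick below are purely illustrative.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class DuplicateSignupCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical sign-up endpoint and field names; adjust for your own application.
        URL signupUrl = new URL("https://www.example.com/signup");

        // First the original user, then the "same" user with a trailing space in the name.
        String[] usernames = {"testuser", "testuser "};
        for (String username : usernames) {
            String body = "username=" + URLEncoder.encode(username, "UTF-8")
                        + "&email=" + URLEncoder.encode("testuser@example.com", "UTF-8")
                        + "&password=" + URLEncoder.encode("Str0ngPassw0rd!", "UTF-8");
            HttpURLConnection conn = (HttpURLConnection) signupUrl.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();

            // Does the second attempt get rejected, create a duplicate, or overwrite the first account?
            System.out.println("'" + username + "' -> " + conn.getResponseCode());
            conn.disconnect();
        }
    }
}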

You should also check the headers of email messages the application sends to users. What can you discover? It’s very likely the internal IP address, or even the addressing scheme of the entire internal network, is divulged – not necessarily something you want outsiders knowing.
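
A quick way to check those headers is to paste the raw headers of a message the application actually sent into a small script and scan them for private (RFC 1918) addresses. The sample headers below are made up for illustration.

import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class MailHeaderCheck {
    // Matches the RFC 1918 private ranges: 10.x.x.x, 172.16-31.x.x and 192.168.x.x.
    private static final Pattern PRIVATE_IP = Pattern.compile(
            "\\b(10\\.\\d{1,3}\\.\\d{1,3}\\.\\d{1,3}"
            + "|172\\.(1[6-9]|2\\d|3[01])\\.\\d{1,3}\\.\\d{1,3}"
            + "|192\\.168\\.\\d{1,3}\\.\\d{1,3})\\b");

    public static void main(String[] args) {
        // Paste the raw headers of a message your application sent; these lines are made up.
        String rawHeaders =
                "Received: from mailgw.example.com (mailgw.example.com [203.0.113.10])\r\n"
              + "Received: from appserver01 (unknown [10.1.2.34])\r\n"
              + "From: noreply@example.com\r\n"
              + "Subject: Welcome to the application\r\n";

        Matcher matcher = PRIVATE_IP.matcher(rawHeaders);
        while (matcher.find()) {
            // Each hit is an internal address leaking out with every email the application sends.
            System.out.println("Internal address disclosed: " + matcher.group());
        }
    }
}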

Also, look at general application flows, including creation, storage and transmission of information. What’s vulnerable that someone with malicious intent could exploit?

Authentication weaknesses
It’s easy to assume that basic form or built-in Web server authentication is going to protect the Web application, but that’s hardly the case. Depending on the authentication coding and specific Web server versions, the application may behave in different ways when it’s presented with login attacks – both manual and automated.

How does the application respond when invalid user IDs and passwords are entered? Is the user specifically told what’s incorrect? This response alone can give a malicious attacker a leg up in knowing whether he needs to focus on attacking the user ID, the password, or both. What happens when nothing is entered? How does the authentication process work when nothing but junk is entered? How do the application, server and Internet connection all stand up when a dictionary attack is run using a tool such as Brutus? Do log files fill up? Is performance degraded? Do user accounts get locked after so many failed attempts? Those are all things that affect the security and availability of your application and should be tested accordingly.
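
Brutus and similar tools will do this at scale, but even a small hand-rolled loop makes the behavior easy to observe: does the error message change, does the response slow down, does the account lock out? The login URL, field names and word list in this sketch are all placeholders.

import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;
import java.util.Arrays;
import java.util.List;

public class LoginBehaviorProbe {
    public static void main(String[] args) throws Exception {
        // Hypothetical login endpoint and field names; adjust for your own application.
        URL loginUrl = new URL("https://www.example.com/login");
        String user = "testuser";
        List<String> passwords = Arrays.asList("password", "letmein", "123456", "qwerty", "admin");

        for (String password : passwords) {
            String body = "username=" + URLEncoder.encode(user, "UTF-8")
                        + "&password=" + URLEncoder.encode(password, "UTF-8");
            HttpURLConnection conn = (HttpURLConnection) loginUrl.openConnection();
            conn.setRequestMethod("POST");
            conn.setDoOutput(true);
            conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
            OutputStream out = conn.getOutputStream();
            out.write(body.getBytes("UTF-8"));
            out.close();

            // Watch for changes in the status code, error message, response time or a lockout.
            System.out.println(password + " -> " + conn.getResponseCode()
                    + " " + conn.getResponseMessage());
            conn.disconnect();
        }
    }
}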

Sensitive information transmitted in the clear
It seems simple enough to just install a digital certificate on the server and force everyone to use Secure Sockets Layer (SSL). But are all parts of your application using it? I’ve come across configurations where certain parts of applications used SSL, but others did not. Lo and behold, the areas that weren’t using SSL ended up transmitting login credentials, form input and other sensitive information in the clear for anyone to see. It’s not a big deal until someone on your network loads up a network analyzer or a tool such as Cain, performs ARP poisoning and captures all HTTP traffic flowing across the network – passwords, session information and more. There’s also the inevitable scenario of employees working from home or a coffee shop on an unsecured wireless network. Anything transmitted via unsecured HTTP is fair game for abuse. Make sure everything in the application is protected via SSL – not just the seemingly important areas.
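
A simple way to spot the unprotected corners is to request the sensitive pages over plain http:// with redirects disabled and see whether the server serves them anyway or forces you over to HTTPS. The page list in this sketch is hypothetical.

import java.net.HttpURLConnection;
import java.net.URL;
import java.util.Arrays;
import java.util.List;

public class PlainHttpCheck {
    public static void main(String[] args) throws Exception {
        // Hypothetical pages that handle sensitive data; list the ones from your own application.
        List<String> sensitivePages = Arrays.asList(
                "http://www.example.com/login",
                "http://www.example.com/account/profile",
                "http://www.example.com/checkout");

        for (String page : sensitivePages) {
            HttpURLConnection conn = (HttpURLConnection) new URL(page).openConnection();
            conn.setInstanceFollowRedirects(false);   // we want to see the server's own answer
            int code = conn.getResponseCode();
            String location = conn.getHeaderField("Location");
            // A 200 here means the page is served in the clear over HTTP.
            // A 301/302 pointing at the https:// URL is what you want to see.
            System.out.println(page + " -> " + code + (location != null ? " -> " + location : ""));
            conn.disconnect();
        }
    }
}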

Possible SQL injections
When using automated Web application vulnerability scanners, you may come across scenarios where possible SQL injections are discovered while logged in to the application. You may be inclined to stop, or not know how to proceed, but I encourage you to dig in deeper. The tool may have found something but wasn’t able to actually verify the problem due to authentication or session timeouts or other limitations. A good SQL injection testing tool will provide the ability to authenticate users and then perform its tests. If the application uses form-based authentication, don’t fret. You can simply copy or capture the original SQL injection query, paste the entire HTTP request into a Web proxy or HTTP editor, and submit it to a Web session you’re already authenticated to. It’s a little extra effort, but it works, and you may find your most serious vulnerabilities this way.
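
If you’d rather script the replay than paste it into a proxy, the idea is the same: take the request the scanner flagged, attach the cookie from a session you’re already logged in to, and send it again. The endpoint, parameter name, payload and cookie below are all placeholders.

import java.io.BufferedReader;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.io.OutputStream;
import java.net.HttpURLConnection;
import java.net.URL;
import java.net.URLEncoder;

public class AuthenticatedSqliReplay {
    public static void main(String[] args) throws Exception {
        // Hypothetical endpoint and parameter flagged by the scanner; adjust for your application.
        URL targetUrl = new URL("https://www.example.com/orders/search");
        String flaggedPayload = "' OR '1'='1";   // the scanner's original injection string

        // Session cookie copied from a browser session you are already authenticated to.
        String sessionCookie = "JSESSIONID=replace-with-your-authenticated-session";

        String body = "orderId=" + URLEncoder.encode(flaggedPayload, "UTF-8");
        HttpURLConnection conn = (HttpURLConnection) targetUrl.openConnection();
        conn.setRequestMethod("POST");
        conn.setDoOutput(true);
        conn.setRequestProperty("Content-Type", "application/x-www-form-urlencoded");
        conn.setRequestProperty("Cookie", sessionCookie);
        OutputStream out = conn.getOutputStream();
        out.write(body.getBytes("UTF-8"));
        out.close();

        int code = conn.getResponseCode();
        System.out.println("Status: " + code);

        // Look for database errors, extra rows or other signs that the injection executed.
        InputStream in = code >= 400 ? conn.getErrorStream() : conn.getInputStream();
        if (in != null) {
            BufferedReader reader = new BufferedReader(new InputStreamReader(in, "UTF-8"));
            for (String line = reader.readLine(); line != null; line = reader.readLine()) {
                System.out.println(line);
            }
            reader.close();
        }
    }
}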

False sense of firewall or IPS security
Many times firewalls or intrusion detection/prevention systems (IDS/IPS) will block Web application attacks. Validating that this works is good, but you also need to test what happens when such controls aren’t in place. Imagine the scenario where an administrator makes a quick firewall rule change, or the protective mechanisms are disabled or temporarily taken offline altogether. You’ve got to plan for the worst-case scenario. Disable your network application protection and/or set up trusting rules and see what happens. You may be surprised.

With all the complexities of our applications and networks, all it takes is one unintentional oversight for sensitive systems and information to be put in harm’s way. Once you’ve exhausted your vulnerability search using automated tools and manual poking and prodding, look a little deeper. Check your Web applications with a malicious eye – what would the bad guys do? Odds are there are some weaknesses you may not have thought about.

Redmine, a free project management web application

Redmine is a free, open source project management/bug tracking web application similar to JIRA. The difference is that Redmine is built using Ruby on Rails and, of course, it is free. Redmine has not yet matured enough to be used for enterprise applications like Therap, but it works fine for small projects. Since it is built using RoR, a bit of configuration needs to be done before you can use it.

Some of the main features of Redmine are:
* Multiple projects support
* Flexible role-based access control
* Flexible issue tracking system
* Gantt chart and calendar
* News, documents & files management
* Feeds & email notifications
* Per project wiki
* Per project forums
* Simple time tracking functionality
* Custom fields for issues, projects and users
* SCM integration (SVN, CVS, Git, Mercurial, Bazaar and Darcs)
* Multiple LDAP authentication support
* User self-registration support
* Multilanguage support
* Multiple databases support

If you want to go through the features of Redmine, go to the following link:
http://www.redmine.org/wiki/redmine/Features

If you want to install Redmine, go to the following link:
http://www.redmine.org/wiki/1/RedmineInstall

OR

If you just want to try the online demo, go to the following link:
http://demo.redmine.org

Our Deepest Fear…

“Our deepest fear is not that we are inadequate. Our deepest fear is that we are powerful beyond measure. It is our light, not our darkness that most frightens us. We ask ourselves, Who am I to be brilliant, gorgeous, talented, fabulous? Actually, who are you not to be? You are a child of God. Your playing small does not serve the world. There is nothing enlightened about shrinking so that other people won’t feel insecure around you. We are all meant to shine, as children do. We were born to make manifest the glory of God that is within us. It’s not just in some of us; it’s in everyone. And as we let our own light shine, we unconsciously give other people permission to do the same. As we are liberated from our own fear, our presence automatically liberates others.”
— by Marianne Williamson

Object Query Language (OQL)

OQL is a superset of the part of standard SQL that deals with database queries. Thus, any SQL SELECT statement that runs on relational tables works with the same syntax and semantics on collections of ODMG objects. The extensions concern object-oriented notions such as complex objects, object identity, path expressions, polymorphism, operation invocation and late binding.

OQL provides high-level primitives to deal with sets of objects but is not restricted to this collection construct. It also provides primitives to deal with structures, lists, arrays, and treats such constructs with the same efficiency.

OQL is a functional language where operators can freely be composed, as long as the operands respect the type system. This is a consequence of the fact that the result of any query has a type which belongs to the ODMG type model, and thus can be queried again.

OQL is not computationally complete. It is a simple-to-use query language which provides easy access to an ODBMS.

Based on the same type system, OQL can be invoked from within programming languages for which an ODMG binding is defined. Conversely, OQL can invoke operations programmed in these languages.

OQL does not provide explicit update operators but rather invokes operations defined on objects for that purpose, and thus does not breach the semantics of an ODBMS which, by definition, is managed by the “methods” defined on the objects.

OQL provides declarative access to objects. Thus OQL queries can be easily optimized by virtue of this declarative nature.

The formal semantics of OQL can easily be defined.

Examples of OQL:
select distinct x.age from Persons x where x.name = "Intekhab"
>This selects the set of ages of all persons named Intekhab, returning a literal of type set<integer>.

select distinct struct(a: x.age, s: x.sex) from Persons x where x.name = "Intekhab";
>This does about the same, but for each person it builds a structure containing age and sex. It returns a literal of type set<struct>.

select c.address
from Persons p, p.children c
where p.address.street = "Banani" and count(p.children) >= 2 and c.address.city != p.address.city
>This one is more or less self-explanatory: it selects the addresses of those children who live in a different city than their parent, for persons living on Banani street who have at least two children.

select max(select c.age from p.children c)
from Persons p
where p.name = "Intekhab"
>An example of invoking an operation such as max: for the person named Intekhab, it returns the age of his oldest child.

The explanation and the examples given above are not even the tip of the iceberg when it comes to introducing OQL. They will only give you an essence of what sort of query language it is. I found it interesting, but that’s just me.

Mozilla CEO John Lilly’s thoughts on Google Chrome

Mozilla CEO John Lilly points out certain key factors that might affect Mozilla now that Google has come up with its very own browser, one that promises to be the browser for the next generation. He points out some interesting facts about the relationship between Mozilla and Google, mentions in quite a few places that he is taking this as positive competition, and states that Mozilla’s venture with Google will continue regardless of the competition that has newly emerged.

He says, “Interesting developments in the browser world lately. Between the new beta of IE8 and Google releasing the beta of their new browser (called ‘Chrome’), not to mention interesting work by the Mozilla team here as well, there’s as much happening as I can ever remember. Let’s start from there: more smart people thinking about ways to make the Web good for normal human beings is good, absolutely. Competition often results in innovation of one sort or another and in the browser you can see that this is true in spades this year, with huge Javascript performance increases, security process advances, and user interface breakthroughs. I’d expect that to continue now that Google has thrown their hat in the ring.

It should come as no real surprise that Google has done something here; their business is the web, and they’ve got clear opinions on how things should be, and smart people thinking about how to make things better. Chrome will be a browser optimized for the things that they see as important, and it’ll be interesting to see how it evolves.

Having said that, it’s worth addressing a couple of questions that folks will no doubt have.

1. How does this affect Mozilla? As much as anything else, it’ll mean there’s another interesting browser that users can choose. With IE, Firefox, Safari, Opera, etc there’s been competition for a while now, and this increases that. So it means that more than ever, we need to build software that people care about and love. Firefox is good now, and will keep on getting better.

2. What does this mean for Mozilla’s relationship with Google? Mozilla and Google have always been different organizations, with different missions, reasons for existing, and ways of doing things. I think both organizations have done much over the last few years to improve and open the Web, and we’ve had very good collaborations that include the technical, product, and financial. On the technical side of things, we’ve collaborated most recently on Breakpad, the system we use for crash reports; stuff like that will continue. On the product front, we’ve worked with them to implement best-in-class anti-phishing and anti-malware that we’ve built into Firefox, and looks like they’re building into Chrome. On the financial front, as has been reported lately, we’ve just renewed our economic arrangement with them through November 2011, which means a lot for our ability to continue to invest in Firefox and in new things like mobile and services.

So all those aligned efforts should continue. And similarly, the parts where we’re different, with different missions, will continue to be separate. Mozilla’s mission is to keep the Web open and participatory – so, uniquely in this market, we’re a public-benefit, non-profit group (Mozilla Corporation is wholly owned by the Mozilla Foundation) with no other agenda or profit motive at all. We’ll continue to be that way, we’ll continue to develop our products & technology in an open, community-based, collaborative way.

With that backdrop, it’ll be interesting to see what happens over the coming months and years. I personally think Firefox 3 is an incredibly great browser, the best anywhere, and we’re seeing millions of people start using it every month. It’s based on technology that shows incredible compatibility across the broad web – technology that’s been tweaked and improved over a period of years.

And we’ve got a truckload of great stuff queued up for Firefox 3.1 and beyond – things like open video and an amazing next-generation Javascript engine (TraceMonkey) for 3.1, to name a couple. And beyond that, lots of breakthroughs like Weave, Ubiquity, and Firefox Mobile. And even more that are unpredictable – the strength of Mozilla has always come from the community that’s built it, from core code to the thousands of extensions that are available for Firefox.

So even in a more competitive environment than ever, I’m very optimistic about the future of Mozilla and the future of the open Web.”

Money, Grease Monkey

We, the testers (toolsmiths, as we like to call ourselves), are not far behind in terms of programming. We, the toolsmiths, are the new breed; it’s the dawn of a new species.

Getting to the point, we use various tools in our testing endeavors that, I believe, make our lives a bit easier, so to speak. One of the most useful, and one worth mentioning, is a Firefox extension called Greasemonkey. Greasemonkey allows you to write user scripts (JavaScript) that alter the web pages you visit. A user script is just a chunk of JavaScript code, with some additional information that tells Greasemonkey where and when it should run. Each user script can target a specific page, a specific site, or a group of sites. A user script can do anything you can do in JavaScript. In fact, it can do even more than that. A small code snippet follows:

// ==UserScript==
// @name MyFirstGreaseMonkey Script
// @namespace http://www.somewebsite.net/
// @description This code will show the name of the page being visited.
// @include https://www.somewebsite.com/*
// @include https://www.someotherwebsite.com/*
// ==/UserScript==

alert(document.title);

The above code snippet will only take effect on pages that match the ‘@include’ rules listed between the ‘==UserScript==’ and ‘==/UserScript==’ markers. It will essentially start executing the JavaScript code when one of the pages mentioned is visited. There are also ways to make the script run on all pages, or to exclude certain pages, in which case the keyword is ‘@exclude’ followed by the URL.

How do you install a user script? Simple. Install Greasemonkey for Firefox and create a .js file, following the naming convention fileName.user.js. Right-click on the file and open it with Firefox. A window will then appear asking whether to install the script for the pages mentioned in it (these are also listed in the window), and voila! You have your own user script installed.

This is particularly useful for testing purposes because we are constantly faced with challenges such as entering values in innumerable text fields over and over again. So we wrote our own automation user script that identifies the text fields on a page and populates them with values of our own choice.

JavaServer Faces (JSF)

JavaServer Faces is a set of web-based GUI controls and their associated handlers. JSF provides many prebuilt HTML-oriented GUI controls, along with code to handle their events. JSF can also be used to generate output in formats other than HTML, using protocols other than HTTP. JSF is a well-established standard for web-development frameworks in Java. The standard is based on the MVC paradigm, much like its counterpart Struts. Some people also call it a better Struts, but that leaves much room for debate. It provides a set of APIs and associated custom tags to create HTML forms that have complex interfaces. Validation is rather easy in JSF: it has built-in capabilities for checking that form values are in the required format and for converting strings to various other data types. If values are missing or in an improper format, the form can be automatically redisplayed with error messages and with the previously entered values maintained, much like the binding capabilities of Spring 2.5.

Another thing JSF supports is central configuration: it has a configuration XML file where properties are set, along with dependency injections. If a change needs to be made across many files, a small change in that XML file can achieve it. Rather than hard-coding information into Java programs, many JSF values are represented in that XML file or in property files. This loose coupling means that many changes can be made without modifying or recompiling Java code, and that wholesale changes can be made by editing a single file. This approach also lets Java and Web developers focus on their specific tasks without needing to know about the overall system layout.
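
As a rough illustration of that central configuration, here is a minimal sketch of a JSF 1.x-style managed bean. The class, the injected property and the faces-config.xml entry shown in the comment are hypothetical, and the exact setup depends on the JSF version in use.

// A plain Java class that JSF manages for us; no framework base class is required.
// It would be registered centrally in WEB-INF/faces-config.xml with something like:
//
//   <managed-bean>
//     <managed-bean-name>loginBean</managed-bean-name>
//     <managed-bean-class>com.example.web.LoginBean</managed-bean-class>
//     <managed-bean-scope>request</managed-bean-scope>
//     <managed-property>
//       <property-name>defaultDomain</property-name>
//       <value>example.com</value>
//     </managed-property>
//   </managed-bean>
//
// Changing the injected value or swapping the class only requires editing that XML file.
package com.example.web;

public class LoginBean {
    private String username;
    private String defaultDomain;   // injected from faces-config.xml, not hard-coded

    public String getUsername() { return username; }
    public void setUsername(String username) { this.username = username; }

    public String getDefaultDomain() { return defaultDomain; }
    public void setDefaultDomain(String defaultDomain) { this.defaultDomain = defaultDomain; }

    // Action method referenced from a JSF page, e.g. #{loginBean.login}.
    public String login() {
        // Return a navigation outcome; the outcome-to-page mapping also lives in faces-config.xml.
        return (username != null && !username.isEmpty()) ? "success" : "failure";
    }
}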

This is just a small overview of JSF. It has many other components that are quite complex yet help solve problems that would normally be handled by Spring or Struts. It has several advantages in comparison with Struts and Spring, as well as some disadvantages.