The Most Famous Vulnerabilities - HTTP Parameter Pollution
Jozsef Konnyu

In the previous blog article, we learned about SQL injection and how it works. If you read it, you know that it belongs to the family of the most serious vulnerabilities. The next vulnerability is not as serious, but it is still worth taking seriously.

What is HTTP Parameter Pollution?

The easiest way to introduce this vulnerability is through a mechanism you have seen many times on websites and other Internet-connected applications: redirection.

Many websites use this technique to redirect visitors from one website to another, or even from one part of a page to another within the same site. In itself, this is not a problem. Problems begin when these redirections are not properly restricted: if a site allows an open redirect (where anyone can supply the target URL), an attacker can send visitors to a fake site created to phish data from users of the original website.

HTTP parameter pollution (HPP) is not limited to redirects; that is just one example. It can cause problems with any HTTP parameter that is not properly handled.

You can see the details of how one parameter pollution case was resolved on the HackerOne site.

How does it work?

Suppose there is a blog that lists its articles across multiple pages, with a link at the bottom of each page that leads to the next one:

mywebsite.com/blog?from=11&to=20

From this link we can conclude that the blog lists 10 articles per page and that both numbers increase from page to page. So what happens if we raise the second parameter to an absurd value?

mywebsite.com/blog?from=11&to=999999999

If the second parameter is not handled properly, the website will try to select almost a billion articles from the database on the server side. The more data a query selects, the longer it takes to run, which makes it easy to overload the database. And all it took was changing a single parameter.
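The vulnerable pattern can be sketched in a few lines. This is a hypothetical handler in Python (the blog's actual stack is not stated; the function and table names are illustrative) that trusts both range parameters as-is:

```python
# Hypothetical vulnerable handler: both range parameters are trusted as-is.
def build_query(params):
    frm = int(params.get("from", 1))
    to = int(params.get("to", 10))
    # Nothing caps (to - frm), so a client can ask the database for
    # hundreds of millions of rows in a single query and stall it.
    return f"SELECT * FROM articles LIMIT {to - frm + 1} OFFSET {frm - 1}"

print(build_query({"from": "11", "to": "20"}))
# A normal page requests 10 rows; from=11&to=999999999 requests ~10^9.
```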

There was another case on Twitter where a user could unsubscribe another user from email notifications through an unsubscribe link. In that report, the researcher unsubscribed another user from an email notification simply by rewriting an HTTP parameter.

How can you defend against it?

There are many ways to protect against this vulnerability. One of the most common is to constrain the parameters within sensible limits. Most PHP frameworks support this kind of parameter validation.
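As a sketch of that idea, a server-side clamp (hypothetical names, plain Python rather than any particular PHP framework) might look like this:

```python
MAX_PAGE_SIZE = 10  # the server, not the client, decides the page size

def clamp_range(frm_param, to_param):
    """Return a (from, to) range that never exceeds MAX_PAGE_SIZE rows."""
    try:
        frm = max(1, int(frm_param))
        to = int(to_param)
    except (TypeError, ValueError):
        frm, to = 1, MAX_PAGE_SIZE  # fall back to the first page on bad input
    # Silently shrink oversized ranges instead of trusting the client.
    to = min(to, frm + MAX_PAGE_SIZE - 1)
    return frm, max(frm, to)
```

With this in place, the polluted request from the example above is harmless: `clamp_range("11", "999999999")` still returns only the rows for one page, `(11, 20)`.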

Furthermore, we have to be careful not to identify users with a parameter that can be freely set from the client side. This bad practice can be seen in the Twitter example. Instead, we should identify the user with a value that is hidden from other Twitter users, such as a unique hash.
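One common way to implement such a hidden identifier is an HMAC-based token. This is a hedged sketch (the secret, names, and token layout are illustrative, not Twitter's actual scheme):

```python
import hashlib
import hmac

SECRET_KEY = b"server-side-secret"  # illustrative; load from config in practice

def unsubscribe_token(user_id: int, list_id: int) -> str:
    # The token is derived from server-side data plus a secret key, so a
    # visitor cannot forge a valid token for someone else by editing a URL.
    message = f"{user_id}:{list_id}".encode()
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify_token(user_id: int, list_id: int, token: str) -> bool:
    # compare_digest avoids leaking information through timing differences.
    return hmac.compare_digest(token, unsubscribe_token(user_id, list_id))
```

The unsubscribe link then carries the token instead of a guessable user ID, and a token minted for one user fails verification for any other.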

In the pagination example, we should enforce the page size (in this example, 10) on the server side, so the visitor controls pagination with a single parameter:

mywebsite.com/blog?page=2
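Under that design the client only picks a page number, and the server derives the row range itself. A minimal sketch (hypothetical names, assuming 10 articles per page as above):

```python
PAGE_SIZE = 10  # fixed on the server side, invisible to the client

def page_bounds(page_param):
    """Map a ?page= value to the article range that page should show."""
    try:
        page = max(1, int(page_param))
    except (TypeError, ValueError):
        page = 1  # fall back to the first page on garbage input
    first = (page - 1) * PAGE_SIZE + 1
    return first, first + PAGE_SIZE - 1
```

Whatever the client sends, the range is always exactly one page wide, so the overload trick from earlier no longer works.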

How can BitNinja protect against it?

Unfortunately, no defense system offers 100% protection against HPP; the developer of the web application has to watch out for these issues. However, there is one area where the WAF 2.0 module can provide effective protection.

One thing I have not yet explained: several HTTP back-end technologies combine duplicate parameters into a single value, ASP being a well-known example. This behavior can be abused in an HPP attack.

Let's say we have an ASP application with a validator for each parameter that is responsible for filtering out SQL injection. The validator denies the request if all of the following words appear in the parameter value: "select", "from", and "where". So the validator blocks this request:

mywebsite.com/index.aspx?page=select 1,2,3 from user where id=1

Because ASP merges duplicate occurrences of a parameter (joining the values with commas), this request gets through:

mywebsite.com/index.aspx?page=select 1&page=2,3 from user where id=1

Each individual value contains only some of the forbidden keywords, so if the back-end validator checks the values one by one, the request passes, yet the merged value is a complete injection that the application may run.
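The bypass can be simulated in Python (a sketch of the merging behavior, not real ASP code): `parse_qs` keeps each occurrence of the parameter separate, and joining them with commas approximates how ASP-style back ends merge duplicates.

```python
from urllib.parse import parse_qs

FORBIDDEN = ("select", "from", "where")

def naive_validator(value: str) -> bool:
    # Passes unless ALL forbidden words appear in one parameter value.
    return not all(word in value.lower() for word in FORBIDDEN)

query = "page=select 1&page=2,3 from user where id=1"
values = parse_qs(query)["page"]   # ['select 1', '2,3 from user where id=1']

# Checked one by one, every value looks harmless...
assert all(naive_validator(v) for v in values)

# ...but an ASP-style back end merges duplicates with commas:
merged = ",".join(values)          # 'select 1,2,3 from user where id=1'
assert not naive_validator(merged) # the merged value is a full injection
```

The fix is to validate the value the application will actually use (the merged one), or to reject requests that repeat a parameter at all, which is what the WAF rules below do.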

The 921170 and 921180 rules (in the Protocol Attack ruleset) of WAF 2.0 provide a solution for this. As I wrote in the previous article, WAF rules should be handled very carefully: used in the wrong place, they can easily cause false positives.
