Not guarding against giant-JSON DoS attacks? #212
Comments
Thanks for sharing this concern here, and sorry we didn't respond on the whitehat address (I will check what happened). I will think more about this:
This will be a great fit for this library. We can put a
I am going to check this, but I was under the impression that this was the case: we decode the header first, then verify the signature, then decode the payload.
Quick note: You have to be able to decode first, because in some applications the key may be a function of the user identified in the payload.
You are right, though generally it is keyed not on the "user identifier" but on the "issuer". This library doesn't take care of that, as it expects the payload and the secret in the form of a string, but there are plenty of examples doing what you suggest. An alternative for this could be changing this API to have
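For illustration, here is a rough sketch of that pattern at the application level: decode the payload without verification only to read the issuer, look up the matching key, then verify properly. The getSecretForIssuer lookup is a hypothetical application-side function, not part of this library.
const jwt = require('jsonwebtoken');

function verifyWithIssuerKey(token, getSecretForIssuer) {
  // decode WITHOUT verifying, only to read the issuer claim
  const unverified = jwt.decode(token);
  if (!unverified || typeof unverified.iss !== 'string') {
    throw new Error('missing or malformed iss claim');
  }
  // hypothetical application-side lookup of the secret for this issuer
  const secret = getSecretForIssuer(unverified.iss);
  // now verify the signature for real, pinning the expected issuer
  return jwt.verify(token, secret, { issuer: unverified.iss });
}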
Probably the most robust thing to do would be to validate the JSON, putting limits on the number of unquoted, unescaped colons and braces. It's really hard to come up with a single regex that is mindful of both quotes and character escapes, though. It can't be done for CSV, for example. So this sort of validation would come at a greater-than-ideal cost in clock cycles. I think the correct solution lies in the JSON library: we should be able to set limits on the parser, such as the maximum number of objects to generate.
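As a rough illustration of that kind of pre-parse guard, a simple character scan (rather than a single regex) can count structural braces and colons outside of strings before calling JSON.parse. The limits below are arbitrary placeholders.
// Reject JSON text whose structural complexity exceeds crude limits,
// before handing it to JSON.parse. The limits are arbitrary placeholders.
function looksReasonable(text, maxBraces = 100, maxColons = 200) {
  let inString = false;
  let escaped = false;
  let braces = 0;
  let colons = 0;
  for (const ch of text) {
    if (escaped) { escaped = false; continue; }
    if (ch === '\\') { escaped = true; continue; }
    if (ch === '"') { inString = !inString; continue; }
    if (inString) continue; // ignore quoted content
    if (ch === '{') braces += 1;
    if (ch === ':') colons += 1;
    if (braces > maxBraces || colons > maxColons) return false;
  }
  return true;
}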
If you use a streaming JSON parser, you could count objects or otherwise verify objects during the parse and abort an excessive parse.
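A hedged sketch of that idea, assuming the clarinet package's sax-style streaming events (the object limit is an arbitrary placeholder):
// Abort a streaming parse once too many objects have been opened.
// Assumes the "clarinet" streaming JSON parser; limit is a placeholder.
const clarinet = require('clarinet');

function countObjectsWithLimit(text, maxObjects = 100) {
  const parser = clarinet.parser();
  let objects = 0;
  parser.onopenobject = () => {
    objects += 1;
    if (objects > maxObjects) {
      throw new Error('aborting parse: too many objects');
    }
  };
  parser.onerror = (err) => { throw err; };
  parser.write(text);
  parser.close();
  return objects;
}
In a similar spirit, a JSON.parse reviver can whitelist and validate individual properties during parsing, as in the following example: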
const propertiesToParse = {
issuer: {
isValid: value => typeof value === 'string' && value.length < 2000
}
};
const reviver = (key, value) => {
  if (key === '') {
    // key === '' is the root value itself; keep it, otherwise JSON.parse returns undefined
    return value;
  }
  const validator = propertiesToParse[key];
  if (!validator) {
    // skip this property since it is not whitelisted
    return undefined;
  }
if (!validator.isValid(value)) {
    // skip this property since it is not valid according to the validator
return undefined;
}
  // this property is whitelisted and valid
return value;
};
//test payload
const json = JSON.stringify({issuer: 'foo', foo: 'bar'});
console.log(JSON.parse(json, reviver));
//output:
//> { issuer: 'foo' }
How about something like this? In verify.js, immediately after
add something like:
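A minimal sketch of the idea, assuming a hypothetical maxTokenLength option (not part of the current API) and the callback/error conventions already used in verify.js:
// Hypothetical sketch, not the library's actual code. The option name
// maxTokenLength is an assumption; leaving it unset means "no limit",
// which keeps existing code working unchanged. `done` and
// JsonWebTokenError are assumed to come from the surrounding verify.js.
if (typeof options.maxTokenLength === 'number' &&
    jwtString.length > options.maxTokenLength) {
  return done(new JsonWebTokenError('jwt string exceeds maxTokenLength'));
}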
This way it'll be up to the developer of any given application to determine what a sensibly long encoded JWT string is, with backward compatibility for existing code. Just a thought.
Huge request bodies in Node.js are usually dealt with either at the TLS-terminating proxy (such as Nginx, Apache, or Caddy) or at the request body parser level. Also, closing as stale.
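For instance, a typical Express setup caps the request body before any JWT handling runs; the 100kb figure below matches body-parser's documented default and is only illustrative.
// Usual application-level guard: cap the request body size before any
// JSON or JWT handling runs. 100kb matches body-parser's default.
const express = require('express');
const bodyParser = require('body-parser');

const app = express();
app.use(bodyParser.json({ limit: '100kb' }));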
It's not the absolute size of the request that's the problem. It's the fact that a JSON request consumes a lot of clock cycles to parse. A parser should be able to handle anything thrown at it, as the programmer intends, but a web server should be more discriminating about what it processes.
Especially when the upper bound on size is knowably small. The server stack could do a lot more to protect itself.
Going off the title here:
Something "giant" has to get through a web server and body parser first. Note that most body parsers in the Node ecosystem (for either Express or Koa) have defaults set to 100kb or 56kb. JavaScript's
I posted this to [email protected] 7 days ago but never got anything but an automated response. I figure that because DoS attacks are intractable anyway, there's no harm in posting here.
Scanning through the code and glancing through the dependent module "jws", it appears to me that clients could send monster-sized JSON objects inside JWTs, and the module will always decode the JSON, eating up both clock cycles and memory. It seems to me that there should be options to guard against this. Some possible options (a rough application-level sketch follows the list):
(1) Allow for enforcing a size limit on JWTs (or header/payload).
(2) Allow for verifying the signature before decoding JSON.
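A hedged sketch of option (1) as an application-level workaround, wrapping this library rather than changing it; the length cap is an arbitrary placeholder.
// Reject oversized tokens before the library ever decodes them.
// The cap is an arbitrary placeholder; tune it to your expected tokens.
const jwt = require('jsonwebtoken');

const MAX_JWT_LENGTH = 4096;

function verifyBounded(token, secret) {
  if (typeof token !== 'string' || token.length > MAX_JWT_LENGTH) {
    throw new Error('JWT rejected before decoding: token too large');
  }
  return jwt.verify(token, secret);
}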
There may be another solution having to do with with_indifferent_access, mentioned in this discussion for a Ruby library, but I didn't follow it: https://github.com/nov/json-jwt
Also, I've been making a general assessment of JWTs and am so far finding them too insecure to use for anything important. Any input you can provide on StackExchange would be helpful: http://security.stackexchange.com/questions/125135/what-scenarios-really-benefit-from-signed-jwts