Defending Against Query Selector Injection Attacks

In case you haven’t come across Petko Petkov’s post on injection attacks against MongoDB and NodeJS yet, it’s definitely worth a careful read. In this article, he explains a pretty simple exploit that I suspect affects a fair number of applications, including some that I’ve implemented.

The general idea behind Petko’s exploit is that, typically, when you want to get all documents where username is equal to the user-provided username, you may do something like this:

User.findOne({ username: req.body.username }, function(err, user) {
  // Handler code here
});

However, let’s say you’ve exposed a JSON-based API and I’m a malicious user that sends you the following body JSON:

{ "username": { "$gt": "" } }

The query that will get sent to MongoDB then looks like this:

{ username: { $gt: "" } }

Assuming your usernames are strings, that query matches every user whose username is a non-empty string, so findOne() will hand back an arbitrary user!

Even if you’re using URL encoding instead of JSON for your API, you may not be safe. ExpressJS’ body parser middleware, by default, uses the qs module to parse URL-encoded HTTP request bodies. The qs module is designed to parse URL-encoded strings in a way that makes decoding objects easier, so parsing the string username[$gt]= gives you a nested object { username: { $gt: '' } }, which is exactly the malicious payload from above. This is really bad news bears.

Thankfully, query selector injection attacks are pretty easy to defend against, so no need to throw your Express JSON API out the window. Here are two strategies to make sure you’re not vulnerable.

Remove keys that start with $ from user input

One of the cruxes of Petko’s exploit is that, in the above example, MongoDB determines the query selector by scanning the req.body.username object for a key that matches a query selector. There are two ways you can avoid this. The first, and probably most obvious, is to make sure req.body.username is a string rather than an object. JavaScript’s toString function should be sufficient:

User.findOne({ username: (req.body.username || "").toString() }, function(err, user) {
  // Handler code here
});

However, in some cases, you may want to query on user-provided objects, so casting to a string isn’t sufficient. Since all MongoDB query selectors start with $, you can check if req.body.username is an object, and, if so, remove any keys from the object that start with $. I put together a really simple npm module called mongo-sanitize (see it on Github) that does this for you, in case you don’t want to implement this yourself.

var sanitize = require('mongo-sanitize');

// The sanitize function will strip out any keys that start with '$' in the input,
// so you can pass it to MongoDB without worrying about malicious users overwriting
// query selectors.
var clean = sanitize(req.params.username);

Users.findOne({ name: clean }, function(err, doc) {
  // ...
});
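If you’re curious what such a sanitizer boils down to, here’s a minimal sketch of the idea (a simplified version for illustration, not necessarily identical to mongo-sanitize’s actual implementation):

```javascript
// Recursively strip any keys that start with '$' from a user-supplied value.
// Strings and other primitives pass through untouched.
function stripDollarKeys(value) {
  if (value instanceof Object) {
    for (var key in value) {
      if (/^\$/.test(key)) {
        delete value[key];
      } else {
        // Recurse so nested objects like { a: { $gt: '' } } get cleaned too
        stripDollarKeys(value[key]);
      }
    }
  }
  return value;
}

// The injection payload from earlier gets defanged:
console.log(stripDollarKeys({ username: { $gt: '' } })); // { username: {} }
console.log(stripDollarKeys('bob')); // bob
```

Because the function recurses, even deeply nested payloads like { a: { b: { $where: '...' } } } come out clean.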

If this approach doesn’t work for you for whatever reason, don’t worry, there’s another way.

Explicitly specify the query selector when querying with untrusted data

The other crux of Petko’s exploit is that, typically, you don’t specify a query selector when you want to find a document where username is exactly equal to the user input. As a matter of fact, MongoDB doesn’t have a fully supported $eq query selector just yet (although the core server team is working on it). In lieu of $eq, however, you can use the $in selector:

User.findOne({ username: { $in: [req.body.username] } }, function(err, user) {
  // Handler code here
});

This is slightly more verbose, but if a malicious user tried a query selector injection attack, the query passed would look like this:

{ username: { $in: [{ $gt: "" }] } }

Assuming that your usernames were all strings, this query would return no results, as expected.
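One way to make this pattern harder to forget is to funnel every untrusted equality check through a tiny helper. This is a hypothetical convenience function, not part of any library:

```javascript
// Wrap an untrusted value so MongoDB treats it as an exact match,
// even if the value is itself an object like { $gt: '' }.
function exactMatch(untrusted) {
  return { $in: [untrusted] };
}

// Usage:
//   User.findOne({ username: exactMatch(req.body.username) }, callback);
```

Now a query selector smuggled into req.body.username just becomes a harmless element of the $in array.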


Query selector injection attacks are pretty insidious and it’s easy to be vulnerable, especially if you’ve been happily implementing JSON REST APIs. Thankfully, by applying one of the above strategies, either sanitizing input with mongo-sanitize or explicitly specifying a query selector for untrusted data, you can avoid the query selector injection pitfall without having to give up the ease-of-use of JSON APIs. If you want more details on securing your MongoDB application, check out the security checklist and MongoDB’s blog post on security design and configuration.


The Future of MongooseJS

Two weeks ago marked a big milestone: mongoose 3.9.0 was released. Be warned, mongoose’s versioning practice is that even-numbered branches are stable and odd-numbered branches are unstable. While all our tests check out on 3.9.0, I would recommend sticking to 3.8.x releases in production for now. 3.9.0 was mongoose’s first unstable release since October 2013. While the changes in 3.9.0 were relatively minor, they open the door to getting some interesting features into 4.0. Here are some of the high-level features I think should make it into 4.0:

1) Update() with Validators

Mongoose right now doesn’t run validators on calls to Model.update(). I’ve often found that it’s more elegant and performant to call update() directly instead of loading the document, modifying it, and then saving it. Mongoose should have better support for this paradigm in the future.

2) Browser-friendly and browserify-friendly schema validation module

Currently, there’s no good way to send your schemas to the browser to do client-side validation. While introducing an API endpoint for validation is quite possible, hooking up mongoose schema validation directly to a tool like AngularJS in the browser can open up some incredibly cool opportunities.

3) Better integration with Koa.js and Harmony in general

Fair warning, I’m not well versed in the particulars of ES6 or Koa just yet, but I have noticed some people opening Github issues related to these subjects. As more people start moving to ES6, mongoose needs to have its A-game ready.


4) Per-document events

The general idea is that mongoose doesn’t scope document events to a particular document; that is, doc1.on('event') will get triggered by doc2.emit('event') if doc1 and doc2 are instances of the same model. This is expected behavior now, but it’s very counterintuitive. At the very least, in 4.0 doc1.on('event') will only get triggered by doc2.emit('event') if doc1 and doc2 are the same JS object. However, we may introduce behavior where doc1.on('event') gets triggered by doc2.emit('event') if doc1 and doc2 have the same _id.

5) Reworking Population

Populate is extremely useful, but it also has some very unfortunate dark corners and counterintuitive behavior that I’d like to rework. There are numerous features, such as caching integration, manual population, and populating on fields other than _id, that the current implementation makes very difficult. I’m hoping to get all these features into 4.0.

I’m still very much in the planning stages for mongoose 4.0, so comments, concerns, and feature suggestions are very much welcome. Feel free to open up issues on Github with features you’d like to see in 4.0.

What’s New in Mongoose 3.8.9

I have an important announcement to make: over the last couple of weeks I’ve been taking over maintaining mongoose, the popular MongoDB/NodeJS ODM. I have some very big shoes to fill; Aaron Heckmann has done an extraordinary job building mongoose into an indispensable part of the NodeJS ecosystem. As an avid user of mongoose over the last two years, I look forward to continuing mongoose’s storied tradition of making dealing with data elegant and fun. However, mongoose isn’t perfect, and I’m already looking forward to the next major stable release, 4.0.0. Suggestions are most welcome, but please be patient, I’m still trying to catch up on the backlog of issues and pull requests.

On to what’s new in 3.8.9

On that note, Mongoose 3.8.9 was (finally) released yesterday. This was primarily a maintenance release: the major priority was to clean up several test failures against the new stable version of the MongoDB server, 2.6.x, without any backward-breaking API changes. I’m proud to say that 3.8.9 should be compatible with MongoDB 2.2.x, 2.4.x, and 2.6.x. In addition, I added improved support for a couple of key MongoDB 2.6 features:

Support for Text Search in MongoDB 2.6.x

As I mentioned in my post on text search, mongoose 3.8.8 didn’t quite support text search yet: mongoose prevented you from sorting by text score. This commit, which went into mquery 0.6.0, allows you to use the new $meta operator in sort() calls. Here’s an example of how you would use text search with sorting in mongoose:

/* Blog post collection with two documents:
 * { title : 'text search in mongoose' }
 * { title : 'searching in mongoose' }
 * and a text index on the 'title' field */
BlogPost.
  find(
    { $text : { $search : 'text search' } },
    { score : { $meta : 'textScore' } }).
  sort({ score : { $meta : 'textScore' } }).
  exec(function(error, documents) {
    assert.equal(2, documents.length);
    assert.equal('text search in mongoose', documents[0].title);
    assert.equal('searching in mongoose', documents[1].title);
  });

The relevant test case can be found here (there’s also test coverage for text search without sorting). Please note that you’re responsible for making sure you’re running MongoDB >= 2.6.0; running text queries against older versions of MongoDB will not give you the expected behavior. MongoDB’s docs about text search can be found here.

Aggregation helper for $out:

As I mentioned in my post about the aggregation framework’s $out pipeline stage (which pipes the aggregation output to a collection), mongoose’s aggregate() function doesn’t prevent you from using $out. However, mongoose also supports syntactic sugar for chaining helper functions onto aggregate() for building an aggregation pipeline:

// Model name and pipeline stages are illustrative
Model.aggregate().
  group({ _id : '$author', count : { $sum : 1 } }).
  exec(function (err, res) {
    // ...
  });

This commit adds a .out() helper function that you can use to add a $out stage to your pipeline. Note that you’re responsible for making sure that the .out() function is the last stage of your pipeline, because the MongoDB server will return an error if it isn’t. The relevant test case can be found here. Here’s how the new helper function looks in action:

var outputCollection = 'my_output_collection';

// Model name and pipeline stages are illustrative
Model.aggregate().
  group({ _id : '$author', count : { $sum : 1 } }).
  out(outputCollection).
  exec(function(error, result) {
    // ...
  });

A Minor Caveat For 2.6.x Compatibility

There is still one unfortunate edge case remaining in 3.8.9 that only affects MongoDB 2.6.x: MongoDB 2.6.x no longer allows empty $set operators to be passed to update() and findAndModify(). This change only affects mongoose in the case where you set the upsert flag to true. This commit attempts to mitigate this API inconsistency, but there is still one case where you will get an error on MongoDB 2.6.x but not in 2.4.x: if the query passed to your findAndModify() only includes an _id field. For example,

Model.findOneAndUpdate(
  { _id: 'MY_ID' },
  {},
  { upsert: true },
  function(error, document) {
    // ...
  });

will return a server error on MongoDB 2.6.1 but not on 2.4.10. Right now, there is no good way to handle this case in both 2.4 and 2.6 without either doing an if-statement on the version or breaking the existing API. You can track the progress of this issue on Github.


Hope y’all are as excited about mongoose’s future as I am. There are lots of exciting ideas that I’m looking forward to getting into mongoose 4.0. You’re more than welcome to add suggestions for new features or behavior changes on Github issues. I can’t wait to see what y’all come up with for improving mongoose and what y’all will be able to do with future versions.


A NodeJS Perspective on What’s New in MongoDB 2.6, Part II: Aggregation $out

From a performance perspective as well as a developer productivity perspective, MongoDB really shines when you only need to load one document to display a particular page. A traditional hard drive only needs one sequential read to load a single MongoDB document, which limits your performance overhead. In addition, much like how Nas says life is simple because all he needs is one mic, grouping all the data for a single page into one document makes understanding and debugging the page much simpler.

A place where the one document per page heuristic is particularly relevant is on pages that display historical data. Loading a single user object is fast and simple, but running an aggregation to compute the average number of times per month a user performed a certain action over the last 6 months is a costly operation that you don’t necessarily want to do on-demand. NodeJS devs are spoiled in this regard, because scheduling in NodeJS is extremely simple. You can easily schedule these aggregations to run once per day and avoid the performance overhead of running the aggregation every time a user hits the particular page.

However, before MongoDB 2.6, shipping the results of an aggregation into a separate collection required pulling the aggregation results in through the NodeJS driver and inserting them back into MongoDB. Furthermore, aggregation results were limited to 16MB in size, which made doing aggregations that would output one document per user impossible. MongoDB 2.6, however, introduced a $out aggregation pipeline stage, which writes the output of the aggregation to a separate collection, and removed the 16MB aggregation limit.

Getting transformed data $out of aggregation

Let’s take a look at how this can be used in practice in NodeJS. Recall the food journal app from the first part of this series: let’s add a route that will display the user’s average calories per day broken down on a per-week basis. This involves a slow and complex aggregation, so we’ll schedule this aggregation to run once per day and dump its data to a new collection using $out. The data for this route will get recomputed for all users using one aggregation, and each time the user hits the API endpoint all the server will do is read one document. Here’s what the aggregation looks like in NodeJS (you can also copy/paste this aggregation pipeline into a mongo shell and get the same result). You can also find this code on Github.

// FoodJournalEntry stands in for the food journal model; the name is illustrative
FoodJournalEntry.aggregate([
  // Pull out week of the year and day of the week from the date
  {
    $project : {
      week : { $week : "$date" },
      dayOfWeek : { $dayOfWeek : "$date" },
      year : { $year : "$date" },
      user : "$user",
      foods : "$foods"
    }
  },
  // Generate a document for each food item
  { $unwind : "$foods" },
  // And for each nutrient
  { $unwind : "$foods.nutrients" },
  // Only care about calories
  {
    $match : {
      'foods.nutrients.tagname' : 'ENERC_KCAL'
    }
  },
  // Add up calories for each week, keeping track of how many days in that
  // week the user recorded eating something. Output one document per
  // user and week.
  {
    $group : {
      _id : {
        week : "$week",
        user : "$user",
        year : "$year"
      },
      days : { $addToSet : '$dayOfWeek' },
      calories : {
        $sum : {
          $multiply : [
            { $divide : ['$foods.selectedWeight.grams', 100] },
            '$foods.nutrients.amountPer100G'
          ]
        }
      }
    }
  },
  // Aggregate all the documents on a per-user basis.
  {
    $group : {
      _id : "$_id.user",
      weeks : { $push : "$_id.week" },
      yearForWeek : { $push : "$_id.year" },
      daysPerWeek : { $push : "$days" },
      caloriesPerWeek : { $push : "$calories" }
    }
  },
  // Output to the 'weekly_calories' collection
  {
    // Hardcode string here so can copy/paste this aggregation into shell
    // for instructional purposes.
    $out : 'weekly_calories'
  }
], callback);

The particular details of the aggregation aren’t that important; what really matters is the $out stage at the end. The $out stage does something very cool: not only will the resulting documents get inserted into a new collection called weekly_calories, $out will overwrite the existing collection once the aggregation completes. In other words, if this aggregation runs for an hour, the weekly_calories collection will remain unchanged until the aggregation is done. After the aggregation finishes, the weekly_calories collection will be atomically replaced by the result of the aggregation. Note that, right now, $out doesn’t have any way of appending to the output collection; it can only overwrite the output collection. Design your aggregations accordingly.

Taking a look at the results

Using a bit of NodeJS magic, we can wrap this aggregation in a service that uses node-cron to schedule itself to run once per day at 0030 (12:30 am) server time:
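The cron wiring itself didn’t survive into this post, but the idea is simple enough to sketch without any dependencies: compute the delay until the next 00:30 and use setTimeout (node-cron just makes this declarative). The function names here are illustrative:

```javascript
// Milliseconds from `now` until the next occurrence of hour:minute.
function msUntilNext(hour, minute, now) {
  now = now || new Date();
  var next = new Date(now.getFullYear(), now.getMonth(), now.getDate(),
                      hour, minute, 0, 0);
  if (next <= now) {
    // That time already passed today, so schedule for tomorrow
    next.setDate(next.getDate() + 1);
  }
  return next.getTime() - now.getTime();
}

// Run the aggregation once per day at 00:30 server time.
function scheduleDaily(runAggregation) {
  setTimeout(function() {
    runAggregation(function() {
      scheduleDaily(runAggregation); // re-arm for tomorrow
    });
  }, msUntilNext(0, 30));
}
```

node-cron wraps this same pattern behind a crontab-style expression like '30 0 * * *'.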


We can then inject this service into an ExpressJS route and expose the route as a GET /api/weekly JSON API endpoint:

// app.js
app.get('/api/weekly', checkLogin, api.byWeek.inject(di));

// api.js
exports.byWeek = function(weeklyCalorieAggregator) {
  return function(req, res) {
    weeklyCalorieAggregator.get(req.user.username, function(error, doc) {
      if (error) {
        return res.json(500, { error : error });
      }
      res.json(doc);
    });
  };
};

A little extra work (git diff) to put together a UI that displays the data from GET /api/weekly gives a very satisfying result:


NodeJS Project Version Compatibility

Good news, this time around, the latest versions of node-mongodb-native (1.4.2), mquery (0.6.0), and mongoose (3.8.8) support $out in aggregation. I’ve run the above aggregation with versions 1.3 and 1.2 of node-mongodb-native and version 3.6 of mongoose and those handle $out correctly too.


MongoDB 2.6’s improvements to the aggregation framework are a quantum leap forward, and enable you to do some amazing things. While scheduled analytics calculations certainly aren’t the only use case of $out, I hope this post showed you at least one way in which $out allows you to play to MongoDB’s strengths in a new way.

This is Part II of a 3-part series on using new MongoDB 2.6 features in NodeJS. Part III of this series is coming up in 2 weeks, in which I’ll take a look at some of MongoDB 2.6’s query framework improvements, primarily index filters.

A NodeJS Perspective on What’s New in MongoDB 2.6, Part I: Text Search

MongoDB shipped the newest stable version of its server, 2.6.0, this week. This new release is massive: there were about 4000 commits between 2.4 and 2.6. Unsurprisingly, the release notes are a pretty dense read and don’t quite convey how cool some of these new features are. To remedy that, I’ll dedicate a couple posts to putting on my NodeJS web developer hat and exploring interesting use cases for new features in 2.6. The first feature I’ll dig in to is text search, or, in layman’s terms, Google for your MongoDB documents.

Text search was technically in 2.4, but it was an experimental feature and not part of the query framework. Now, in 2.6, $text is a full-fledged query operator, enabling you to search for documents by text in 15 different languages.

Getting Started With Text Search

Let’s dive right in and use text search on the USDA SR-25 data set described in this post. You can download a mongorestore-friendly version of the data set here. The data set contains 8194 food items with associated nutrition data, and each food item has a human-readable description, e.g. “Kale, raw” or “Bison, ground, grass-fed, cooked”. Ideally, as a client of this data set, we shouldn’t have to remember whether we need to enter “Bison, grass-fed, ground, cooked” or “Bison, ground, grass-fed, cooked” to get the data we’re looking for. We should just be able to put in “grass-fed bison” and get reasonable results.

Thankfully, text search makes this simple. In order to do text search, we first need to create a text index on our copy of the USDA nutrition collection. Let’s create one on the food item’s description:

db.nutrition.ensureIndex({ description : "text" });

Now, we can search the data set for our “raw kale” and “grass-fed bison”, and see what we get:

db.nutrition.find(
  { $text : { $search : "grass-fed bison" } },
  { description : 1 });

db.nutrition.find(
  { $text : { $search : "raw kale" } },
  { description : 1 });


Unfortunately, the results we got aren’t that useful, because they’re not in order of relevance. Unless we explicitly tell MongoDB to sort by the text score, we probably won’t get the most relevant documents first. Thankfully, with the help of the new $meta keyword (which is currently only useful for getting the text score), we can tell MongoDB to sort by text score as described here:

db.nutrition.find(
  { $text : { $search : "raw kale" } },
  { description : 1, textScore : { $meta : "textScore" } }).
  sort({ textScore : { $meta : "textScore" } });

Using Text Search in NodeJS

First, an important note on the compatibility of text search with NodeJS community projects: the MongoDB NodeJS driver is compatible with text search going back to at least 1.3.0. However, only the latest version of mquery, 0.6.0, is compatible with text search. By extension, the popular ODM Mongoose, which relies on mquery, unfortunately doesn’t have a text search compatible release at the time of this blog post. I pushed a commit to fix this and the next version of Mongoose, 3.8.9, should allow you to sort by text score. In summary, to use MongoDB text search, here are the version restrictions:

MongoDB NodeJS driver: >= 1.4.0 is recommended, but it seems to work going back to at least 1.2.0 in my personal experiments.

mquery: >= 0.6.0.

Mongoose: >= 3.8.9 (unfortunately not released yet as of 4/9/14)

Now that you know which versions are supported, let’s demonstrate how to actually do text search with the NodeJS driver. I created a simple food journal app (i.e. an app that counts calories for you when you enter how much of a certain food you’ve eaten) that is meant to tie in to the SR-25 data set. This app is available on GitHub here, so feel free to play with it.

The LeanMEAN app exposes an API endpoint, GET /api/food/search/:search, that runs text search on a local copy of the SR-25 data set. The implementation of this endpoint is here. For convenience, here is the actual implementation, where the foodItem variable is a wrapper around the Node driver’s connection to the SR-25 collection.

/* Because MongooseJS doesn't quite support sorting by text search score
 * just yet, just use the NodeJS driver directly */
exports.searchFood = function(foodItem) {
  return function(req, res) {
    var search = req.params.search;
    foodItem.find(
        { $text : { $search : search } },
        { score : { $meta : "textScore" } }).
      sort({ score : { $meta : "textScore" } }).
      toArray(function(error, foodItems) {
        if (error) {
          res.json(500, { error : error });
        } else {
          res.json(foodItems);
        }
      });
  };
};
Unsurprisingly, this code looks pretty similar to the shell version, so it should look familiar to you NodeJS pros :)

Looking Forward

And that’s all on text search for now. In the next post (scheduled for 4/25), we’ll tackle some of the awesome new features in the aggregation framework, including text search in aggregation.


Plugging USDA Nutrition Data into MongoDB

As much as I love geeking out about basketball stats, I want to put a MongoDB data set out there that’s a bit more app-friendly: the USDA SR25 nutrient database. You can download this data set from my S3 bucket here, and plug it into your MongoDB instance using mongorestore. I’m very meticulous about nutrition and have, at times, kept a food journal, but sites like FitDay and DailyBurn have far too much spam and are far too poorly designed to be a viable option. With this data set, I plan on putting together an open source web-based food journal in the near future. However, I encourage you to use this data set to build your own apps.

Data Set Structure

The data set contains one collection, ‘nutrition’. The documents in this collection contain merged data from the SR25 database’s very relational FOOD_DES, NUTR_DEF, NUT_DATA, and WEIGHT files. In more comprehensible terms, the documents contain a description of a food item, a list of nutrients with measurements per 100g, and a list of common serving sizes for that food. Here’s what the top level document for grass-fed ground bison looks like in RoboMongo, a simple MongoDB GUI:

The top level document is fairly simple: the description is a human-readable description of the food, the manufacturer is the company that manufactures the product, and survey indicates whether or not the data set has values for the 65 nutrients used for some government survey. However, the real magic happens in the nutrients and weights subdocuments. Let’s see what happens when we open up nutrients:

You’ll see that there are an incredible number of nutrients. The nutrients data is in an array, where each subdocument in the array has a tagname, which is a common scientific abbreviation for the nutrient, a human-readable description, and an amountPer100G with corresponding units. In the above example, you’ll see that 100 grams of cooked grass-fed ground bison contains about 25.45 g of protein.

(Note: the original data set includes some more detailed data, including standard deviations and sample sizes for the nutrient measurements, but that’s outside the scope of what I want to do with this data set. If you want that data, feel free to read through the government data set’s documentation and fork my converter on github.)

Finally, the weights field is another array containing subdocuments that describe common serving sizes for the food item and their mass in grams. In the grass-fed ground bison example, the weights list contains a single serving size, 3 oz, which is approximately 85 grams:
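Since the RoboMongo screenshots don’t reproduce well here, the overall shape of a document looks roughly like this (field names and values are illustrative, based on the bison example above; the layout of the weights entries in particular is an approximation):

```javascript
var foodItem = {
  description : 'Bison, ground, grass-fed, cooked',
  manufacturer : '',
  survey : 'Y',
  nutrients : [
    {
      tagname : 'PROCNT', // scientific abbreviation, here for protein
      description : 'Protein',
      amountPer100G : 25.45,
      units : 'g'
    }
    // ... one entry per recorded nutrient
  ],
  weights : [
    // common serving sizes and their mass in grams
    { description : '3 oz', grams : 85 }
  ]
};
```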

Exploring the Data Set

First things first: since the nutrients for each food are in an array, it’s not immediately obvious which nutrients this data set has. Thankfully, MongoDB’s distinct command makes this very easy:
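The shell output didn’t make it into this post, but the command itself is just db.nutrition.distinct('nutrients.tagname'). As a plain-JavaScript illustration of what distinct computes (with made-up mini-documents):

```javascript
var docs = [
  { nutrients : [{ tagname : 'PROCNT' }, { tagname : 'ENERC_KCAL' }] },
  { nutrients : [{ tagname : 'ENERC_KCAL' }, { tagname : 'FAT' }] }
];

// distinct('nutrients.tagname') walks every document (and every array
// element) and returns each value exactly once.
var seen = {};
var distinctTagnames = [];
docs.forEach(function(doc) {
  doc.nutrients.forEach(function(nutrient) {
    if (!seen[nutrient.tagname]) {
      seen[nutrient.tagname] = true;
      distinctTagnames.push(nutrient.tagname);
    }
  });
});

console.log(distinctTagnames); // [ 'PROCNT', 'ENERC_KCAL', 'FAT' ]
```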

There are a lot of different nutrients in this data set. In fact, there are 145.

So how are we going to find nutrient data for a food that we’re interested in? Suppose we’re looking to find how many carbs are in raw kale. This is pretty easy because MongoDB’s shell supports JavaScript regular expressions, so let’s just find documents where the description includes ‘kale’:
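The original query only survives as a screenshot; it would look something like db.nutrition.find({ description : /kale/i }, { description : 1 }). The matching itself is just a JavaScript regex test against each description:

```javascript
// Sample descriptions standing in for documents in the nutrition collection
var descriptions = [
  'Kale, raw',
  'Kale, cooked, boiled, drained, without salt',
  'Bison, ground, grass-fed, cooked'
];

// The same case-insensitive regex MongoDB would apply to the description field
var kaleItems = descriptions.filter(function(d) {
  return /kale/i.test(d);
});

console.log(kaleItems.length); // 2
```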

Of course, this doesn’t include the carbohydrate content, so let’s add an $elemMatch to the projection to limit output to the carbohydrates in raw kale:

Running Aggregations to Test Nutritional Claims

My favorite burger joint in Chelsea, brgr, claims that grass-fed beef has as much omega-3 as salmon. Let’s see if this advertising claim holds up to scrutiny.

Right now, this is a bit tricky. Since I imported the data from the USDA as-is, total omega-3 fatty acids are not tracked as a single nutrient. The amounts for individual omega-3 fatty acids, such as EPA and DHA, are recorded separately. However, the different types of omega-3 fatty acids all have n-3 in the description, so it should be pretty easy to identify which nutrients we need to sum up to get total omega-3 fatty acids. Of course, when you need to significantly transform your data, it’s time to bust out the MongoDB aggregation framework.

The first aggregation we’re going to do is find the salmon item that has the least amount of total omega-3 fatty acids per 100 grams. To do that, we first need to transform the documents to include the total amount of omega-3s, rather than the individual omega-3 fats like EPA and DHA. With the $group pipeline stage and the $sum operator, this is pretty simple. Keep in mind that the nutrient measurements for omega-3 fatty acids are always in grams in this data set, so we don’t have to worry about unit conversions.
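Since the aggregation itself only survives as a screenshot, here’s a sketch of what such a pipeline could look like. This is a reconstruction using the field names described earlier, not necessarily the exact original:

```javascript
// Find the salmon item with the least total omega-3 per 100 grams.
var pipeline = [
  // Only look at salmon items
  { $match : { description : /salmon/i } },
  // One document per nutrient
  { $unwind : '$nutrients' },
  // Keep only omega-3 nutrients, which all have 'n-3' in the description
  { $match : { 'nutrients.description' : /n-3/ } },
  // Sum the individual omega-3s (EPA, DHA, etc.) per food item
  { $group : {
      _id : '$description',
      totalOmega3 : { $sum : '$nutrients.amountPer100G' }
  } },
  // Ascending sort puts the least omega-3-rich salmon first
  { $sort : { totalOmega3 : 1 } },
  { $limit : 1 }
];

// In the shell: db.nutrition.aggregate(pipeline);
```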

You can get a text version of the above aggregation on Github. To verify brgr’s claim, let’s run the same aggregation for grass-fed ground beef, but reversing the sort order.

Looks like brgr’s claim doesn’t quite hold up to a cursory glance. I’d be curious to see what the basis for their claim is, specifically if they assume a smaller serving size for salmon than for grass-fed beef.


Phew, that was a lot of information to cram into one post. The data set, as provided by the USDA, is a bit complex and could really benefit from some simplification. Thankfully, MongoDB 2.6 is coming out soon, and, with it, the $out aggregation operator. The $out operator will enable you to pipe output from the aggregation framework to a separate collection, so I’ll hopefully be able to add total omega-3 fatty acids as a nutrient, among other things. Once again, feel free to download the data set here (or check out the converter repo on Github) and use it to build some awesome nutritional apps.