Hybrid NoSQL: MarkLogic as a K-V store…

Following on from my blog post about using a Document store as a K-V store, I decided to do some simple tests to see how Redis and MarkLogic compare on my machine…

Spoiler conclusions. Read on for the detail:

  • With journaling and strict durability enabled, Redis is only 5x faster than MarkLogic in storing simple single key data
  • For slightly more complicated data sets (hashes), Redis is only 3.5x faster
  • If you need any query or search capability, MarkLogic provides this whereas Redis doesn’t
  • You may be able to tune your document store (MarkLogic) to be fast enough to replace Redis, but you can’t add functionality to your key-value store (Redis) to solve all the query use cases MarkLogic can handle
  • Consider using just MarkLogic. ;o) (Flamebait!)

UPDATE: I’ve added a new blog entry with Hash testing in Redis vs. MarkLogic, showing Redis is only 4.6x quicker than MarkLogic for simple aggregates.

Introduction

I should say at this point that I’m in no way a Redis expert. I have, though, researched it for my book, NoSQL for Dummies, and compared its functionality against MarkLogic from a data-type perspective. I like Redis. I wondered, though, how close its speed is to MarkLogic’s.

Why compare them when they are so different? I sometimes come across companies using MongoDB + Redis. They store data in MongoDB, but their app caches data using Redis in order to bypass speed limitations. A very good use case.

The BBC use MarkLogic to store programme metadata, and the iPlayer service caches this in Redis. Again a valid use case – caching.

I wondered, though, whether this is always needed. Can you rejig your doc store to support the same lower-level functions as Redis, and thus tune your document database?

In my last blog post I compared functionality and concluded that MarkLogic could provide the same functions as Redis. I guesstimated that on my laptop Redis would probably get to 100,000+ SET operations per second, whereas MarkLogic with its default indexing would be nearer 3000/sec.

So I decided to see what happens by default…

The set up

I’m using the highly scientific AdamsCurrentGenerationLaptop[™] set up. This is a MacBook Pro (Retina, 15-inch, Mid 2015) with 2.2 GHz Intel Core i7 and 16GB RAM.

My test system is actually running in VMware Fusion 7, on CentOS 6.6 with Kernel 2.6.32.

Redis

I downloaded Redis, untar’ed the distro, did a make and sudo make install, then ran it in standalone mode. No changes were made to the default configuration.
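For reference, those steps were roughly the following (a sketch; the version matches the 3.0.3 tarball visible in the prompts later on):

wget http://download.redis.io/releases/redis-3.0.3.tar.gz
tar xzf redis-3.0.3.tar.gz
cd redis-3.0.3
make
sudo make install
src/redis-server    # standalone mode, default configuration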

Running redis-benchmark gave this:-

[demouser@localhost src]$ redis-benchmark -q -n 100000 -t set
SET: 114285.71 requests per second

Turns out I made a pretty good guess in my last blog entry! Redis can do over 114K SET transactions per second on my VMware setup on my Mac.

But what is actually happening under the hood? Are these being saved in an ACID manner?

Well, the creators of redis-benchmark make me happy! They haven’t produced a benchmark tool with lots of tweaks that make Redis look great.

The tool uses 50 clients by default, a not unreasonable number. Pipelining is disabled – meaning changes are acknowledged before the next command is issued, so you know the data is consistent with the next command. (Enabling pipelining should increase SET performance to above 400K – but it’s not really representative of most real-world applications, IMHO.)
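For reference, pipelining in redis-benchmark is controlled by the -P flag – a sketch, with an arbitrary pipeline depth of 16:

redis-benchmark -q -n 100000 -t set -P 16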

One slightly naughty thing is that the tool accesses the same key every time. Tsk tsk. Simulating a mostly-fresh write load with roughly 10% updates (-r 90000 draws keys at random from a 90,000-key space, so at least 10% of the 100,000 writes must hit an existing key) gives you this:-

[demouser@localhost src]$ redis-benchmark -q -n 100000 -r 90000 -t set
SET: 115740.73 requests per second

This is actually faster! Running it a few more times gave me between 109K and 115K SET operations per second, so it’s roughly equivalent. The redis-benchmark team are right – it’s a pretty reasonable tool for estimating load.

But hang on, are AOF and fsync being used to ensure durability? No, not by default. I’d best add these in order to create a fair comparison to an ACID-compliant database like MarkLogic…

Configuring AOF and fsync

MarkLogic journals changes, with all journal changes being written to disc, and those changes being applied to the stored data later on (the updates occur in RAM, much like Redis, with the journal being used in case of failure). Journaling is how most modern databases ensure good write throughput whilst ensuring data consistency and durability. We want to configure Redis to do the same, so that we are comparing apples and apples…

Q: Is this a fair test?

A: Maybe. If you’re considering Redis for all your data storage needs then it’s definitely fair. As this post is about using a document store for your KV needs, we need to be comparing both databases in a ‘golden store’ use case, where Redis may be the primary source of this data, not just a cache. So for our needs (not losing data) this is a fair test.

Modifying Redis’ configuration with the following options does what we need:-

appendfsync always
appendonly yes

The first setting ensures that every write is fsync’ed to the AOF log on disc. Enabling AOF (the second setting) means we’re only writing to the AOF (journal) file on each operation, rather than rewriting all the data on disc.
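These can also be flipped at runtime with redis-cli rather than editing redis.conf (a sketch; note that CONFIG SET changes don’t survive a restart unless also written back to the config file):

redis-cli CONFIG SET appendonly yes
redis-cli CONFIG SET appendfsync always
redis-cli CONFIG GET 'append*'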

Saving this config and performing our operations again gives us:-

[demouser@localhost redis-3.0.3]$ redis-benchmark -q -n 100000 -t set -r 90000
SET: 44642.86 requests per second

This is 38.57% of the unjournaled rate. I think this is probably accurate – there’s no way around writing data to disc, and disc is slower than RAM. If you absolutely need write durability then you have to pay the price.

Still, this is still a pretty damned impressive number! (Well done Redis engineers, by the way, +1 beers to you.)

Using AOF is a nice way of ensuring fast writes whilst using journaling. AOF stands for append-only file – you’re adding changes to a file which will later on be merged down. Discussion of this is out of scope of this blog post though…
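For reference, that merge-down is the AOF rewrite, which you can trigger by hand:-

redis-cli BGREWRITEAOF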

Other Redis notes

It should be noted that a single instance of Redis runs on a single core. Thus I’m actually testing 1 core of my lovely Mac, not the full machine. Later on I’ll be setting MarkLogic up with a single Forest, which uses 1 core to manage all writes to a stand. This provides a nice low-level comparison, so I’ve kept Redis with just 1 core for now.

It should be noted that read queries on MarkLogic may use another core, rather than the 1 Redis is limited to. As I’m mainly talking about write speed I’m not too concerned. It’s worth pointing out, though, in case you see any read performance numbers elsewhere.

MarkLogic Server

I’m using MarkLogic Server 7.0-5.2. Why not Version 8? Well, because I stupidly started doing these tests on the wrong VMware image. Oopsie. I can’t afford the time to upgrade and retest all my day-job work, so V7 will have to do.

In reality, this latest V7 incorporates all changes of the V8 branch that are important for pure database performance, so we’re not losing anything in our tests.

Because this machine is used for a lot of day job work, I’ve disabled all the other Forests (parts of databases, akin to shards) in order to provide a real comparison. MarkLogic is capable of holding multiple databases per machine.

This leaves me with the Documents database – the one used for testing – with one Forest, and thus one core in use. There are other internal databases, like Security and Modules, but these are required, so no disabling there. For now, I’ve left the default indexes in place so you get a realistic number. MarkLogic does a lot of indexing on data added to it, so there is a performance hit; you benefit from this during advanced queries, though, which are performed much, much faster.

I’m using MLCP (MarkLogic Content Pump – daft name, we know) to load 100,000 very small XML documents into the server. These have one element: <data>something</data>.
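To reproduce the input corpus, a quick shell sketch (the path matches the mlcp command below; the file names are illustrative):

mkdir -p /home/demouser/mldata/simple
for i in $(seq 1 100000); do
  echo "<data>something-$i</data>" > /home/demouser/mldata/simple/doc-$i.xml
done

Then the load itself:-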

mlcp.sh import -host localhost -port 7777 -username admin -password admin -input_file_path /home/demouser/mldata/simple/ -mode local -input_file_pattern '.*\.xml'
...
15/09/07 06:50:37 INFO contentpump.LocalJobRunner: completed 100%
15/09/07 06:50:37 INFO contentpump.LocalJobRunner: com.marklogic.contentpump.ContentPumpStats: 
15/09/07 06:50:37 INFO contentpump.LocalJobRunner: ATTEMPTED_INPUT_RECORD_COUNT: 100000
15/09/07 06:50:37 INFO contentpump.LocalJobRunner: SKIPPED_INPUT_RECORD_COUNT: 0
15/09/07 06:50:37 INFO contentpump.LocalJobRunner: Total execution time: 245 sec

This gives a result of 408 docs/second added. Which naturally sucks.

HOWEVER, mlcp reports timings misleadingly. It reports total execution time, which includes the two minutes or so it takes to sort its life out scanning the file system. The actual ingest time was 109 seconds, giving 917.43 docs/second. I’ll use that figure, as in a real application you wouldn’t have the overhead of MLCP’s preprocessing.

This is using the default options of batch_size 100 and thread_count 4, though. So I cleared the DB and changed the thread count to 50; the invocation is the same as before with one extra option (a sketch):-
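mlcp.sh import -host localhost -port 7777 -username admin -password admin -input_file_path /home/demouser/mldata/simple/ -mode local -input_file_pattern '.*\.xml' -thread_count 50

Trying again results in:-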

15/09/07 07:20:01 INFO contentpump.LocalJobRunner: completed 100%
15/09/07 07:20:03 INFO contentpump.LocalJobRunner: com.marklogic.contentpump.ContentPumpStats: 
15/09/07 07:20:03 INFO contentpump.LocalJobRunner: ATTEMPTED_INPUT_RECORD_COUNT: 100000
15/09/07 07:20:03 INFO contentpump.LocalJobRunner: SKIPPED_INPUT_RECORD_COUNT: 0
15/09/07 07:20:03 INFO contentpump.LocalJobRunner: Total execution time: 215 sec

Again, the reported time misleads: it actually took 94 seconds, giving 1063.83 docs/second.

One thing I’ve not altered yet: although my number of clients is set to 50, the number of threads on the MarkLogic XCC app server (MLCP uses XDBC to store data) is limited to 32. Upping this to 64 resulted in exactly the same performance, though. I guess my requests are so small that altering this setting didn’t change much.

This speed is a factor of around 42 slower than Redis (44,642 vs. 1,063 writes per second). This is to be expected, though: MarkLogic is doing a lot of indexing under the hood. So let’s disable as many indexes as possible…
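For reference, one hedged way to toggle those index settings is the Management REST API on port 8002 (a sketch; you can equally use the Admin UI, and the property names are as per the ML7 docs):

curl --anyauth -u admin:admin -X PUT -H "Content-Type: application/json" \
  -d '{"stemmed-searches":"off","fast-phrase-searches":false,"fast-case-sensitive-searches":false,"fast-diacritic-sensitive-searches":false}' \
  http://localhost:8002/manage/v2/databases/Documents/properties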

Under this scenario with no indexes, the total time taken was 95 seconds – essentially no change (in fact, fractionally slower), at 1052.63 docs/second.

D’oh alert: I think I’ve hit a CPU limit… I’d stupidly set my VMware image to have 1 core. I’m restarting it with two cores, and loading data from the host (OS X) rather than from inside the VMware image. (I’ll do the same again for Redis.) I also surmise that a VMware ‘core’ is actually a hardware thread – so I’ll give it 4 threads (2 physical cores). The results of my tests are now:-

Redis (with AOF and fsync, 100% new keys):

adamfowbookwork:src adamfowler$ ./redis-benchmark -h 192.168.123.4 -n 100000 -r 100000 -t set -c 50
====== SET ======
 100000 requests completed in 4.69 seconds
 50 parallel clients
 3 bytes payload
 keep alive: 1

0.00% <= 1 milliseconds
8.46% <= 2 milliseconds
99.26% <= 3 milliseconds
99.85% <= 4 milliseconds
99.98% <= 5 milliseconds
100.00% <= 5 milliseconds
21312.87 requests per second

MarkLogic (no indexes, no key re-use): 24 seconds! A rate of 4166.66 docs added per second.

As you can see, both suffer from not being on localhost and having to go through VMware networking, but that’s better than a CPU bottleneck. There is a massive change in MarkLogic performance, mainly because the CPU bottleneck is gone. MarkLogic uses multiple threads, including the overhead of maintaining the Security and Modules databases, and MLCP also sucks the life out of a CPU thanks to Java’s thread handling. Once I added more cores, this contention was removed. Also, for some reason disc read I/O seemed to suffer in the CentOS guest OS, so running MLCP from my host vastly improved performance.

You now see that with transactional integrity MarkLogic is only a factor of 5 slower than Redis for simple key storage! Quite an achievement.

Now re-enabling indexes to get a better idea of the performance hit on ingest, the MLCP test took 29 seconds, resulting in a throughput of 3448.28 added/sec. This is the speed I was expecting from my previous blog post!

(I could enable ‘fast load’ mode in MLCP which bypasses the transaction manager, but that would be cheating for this test! I thought I’d mention it in case you ever needed it though.)
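For reference, that’s a single extra flag on the same import command (a sketch):-

mlcp.sh import -host localhost -port 7777 -username admin -password admin -input_file_path /home/demouser/mldata/simple/ -mode local -input_file_pattern '.*\.xml' -fastload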

Aggregates

One thing I want to mention is that in a document store it is normal to aggregate related information together. So rather than alter individual keys you update data structures. In a K-V store these are hashes, whereas in a document store these are documents. So you may add 10 of the above keys to a single document, and update them together.

Similarly, in a K-V store you may update a hash. This may not always be a fair comparison, as use cases for updating in a K-V store may sometimes still involve updating an entire key with a binary value, like a set of Java web-tier session data.
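To make the hash/document parallel concrete, here’s the same aggregate in both worlds – a sketch trimmed to three fields for brevity (the tests below use ten), with illustrative key and field names:

redis-cli HMSET session:123 user adam theme dark ttl 3600

versus one MarkLogic document at, say, /sessions/123.xml:

<session><user>adam</user><theme>dark</theme><ttl>3600</ttl></session>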

For some use cases though, you may want to know how hashes vs. documents scale when durability and consistency matter, so I thought I’d run through the tests below.

Ah… Looks like the Redis Benchmark tool doesn’t have support for HSET or HMSET commands… Darn… I’ll have to look elsewhere…
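(Aside: newer redis-benchmark builds accept an arbitrary command as trailing arguments, with a __rand_int__ placeholder for random keys – a hedged sketch below – but the -t option on the 3.0.x tool only covers its built-in tests.)

redis-benchmark -q -n 100000 -r 100000 hmset myhash:__rand_int__ f1 v1 f2 v2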

Interestingly, someone else has compared the performance of Redis strings with hashes via an ORM layer, written up in this blog article.

This shows, for writes of ten elements, that HMSET (setting multiple fields in one hit, kinda like saving a doc) was 3.59 times faster per-item than saving 1 key (which may be a document/aggregate) with SET. I’ll use this as a comparison.

Per-item means per-value rather than per-aggregate, so some division of speeds is needed…

This means Redis’ actual speed for a 10 element document persisted using hashes would be throughput times speed increase, divided by number of items per save request (the per-item from above), giving: 21312.87 * 3.59 / 10 = 7651.32 compound writes/second. This is the number MarkLogic should strive to achieve.

So, I’ve altered my data to have ten elements per document, so I can compare documents/second with the above number. Again this is without indexes enabled, and with a slight tweak to ingest – using 20 threads instead of 50 so as not to keep unused connections open, and only 10 documents per batch rather than 100 (as the documents are now ten times the size, with ten elements each).
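A hedged sketch of that invocation (the input path is illustrative):-

mlcp.sh import -host localhost -port 7777 -username admin -password admin -input_file_path /home/demouser/mldata/tenelement/ -mode local -input_file_pattern '.*\.xml' -thread_count 20 -batch_size 10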

Total time taken for 100,000 ten-element documents: 31 seconds. Throughput of aggregates on MarkLogic: 3225.80/second. This is about 23% slower than for simple one-element documents. Given it’s ten times the information, this is a fair swap!

This means that MarkLogic likely lags behind Redis by only 2.37 times. Even with indexing enabled this would likely be only around 3.5x slower. Given all the query power MarkLogic’s indexes give you versus a key-value store, you have to ask yourself: why use a key-value store to store complex aggregates? I guess this is why the document NoSQL database market is still alive and kicking!

Note: I’d really, really like someone with redis-benchmark experience to write an HMSET and HMGET test for the suite, so I can get some real numbers rather than the guesstimates above! Contact me if you know how to do this, please!

Conclusion

What have we proved? Our main question was this:

I have both a document store and a K-V store. Is it possible to replace both with just one, given I have data and query loads which can be handled by both?

The conclusion is: if you need ACID compliance whilst storing and querying aggregate data structures, then you can do this with either. The key takeaways, though, are:-

  1. Key-value store performance is significantly degraded with strict journaling to ensure data durability
  2. A key-value store can only store flat structures like hashes, whereas a document store can store very complex hierarchical structures
  3. A key-value store doesn’t provide search or other advanced query features, whereas MarkLogic does

In short:

  • MarkLogic can act as a very fast aggregate store, and do this whilst providing very fast and sophisticated query capability.
  • If you need blazingly fast read speeds, like a read only cache, then using Redis on top of a document store may be a valid approach.
  • Don’t replace a document store with a key-value store, even if it supports structures like hashes: the performance hit of enabling durability, combined with very basic (and thus slow to work around in your application) query capabilities, removes the speed advantage of a pure in-memory key-value store.

There are some caveats to this research:-

  1. Most people use Redis as an in-memory cache rather than a ‘golden copy’ primary data store. This is a valid choice, and it works for many people.
  2. Document stores provide a world more functionality than a simple key-value store. You may find you need to tune your document store rather than put Redis in front of it, or replace it with Redis, given the functions Redis does not support.

So for most people the above won’t change anything. If you are thinking though: “For applications without insane read/write speed – can I just use my document store alone?” – I think we’ve proved the answer is ‘yes’, and that the same can’t be said of replacing your document store with a key-value store.

Key-Value stores are a great piece of technology, but they’re aimed at a very specific market. This market needs insane read speeds, and sometimes very fast write speeds. They tend not to hold the master copy of data, so you can trade durability and consistency for outright performance. These are valid choices in these limited use cases.

As ever, which tool you choose depends on your use case. Hopefully this article has given you food for thought.

3 comments

  1. Nice post, Adam. Note that fastload does not bypass the transaction controller. It just runs the sharding placement algorithm on the client instead of the server and does direct parallel loading to d-nodes. In your case with only one forest on one host, I wouldn’t expect it to make a difference, but just wanted to be clear that you can use fastload and not sacrifice ACID properties.

    Also, wondering if you tried using triples rather than documents? If your documents are just single values, it may be faster to load them as triples.

    1. Hey David,

      Thanks for that clarification. Don’t know why I said ACID instead of sharding as I did know that! Guess I had a funny five minutes! (Incidentally I’m thinking about doing a separate blog post about the various sharding policies in use).

      I’m steering away from triples at the moment as I’m trying to use ever more complex aggregate structures, but you’re absolutely right that would be a good way to go (and we’d likely get 10x the speed of MarkLogic using that method, as we store 100 triples per document rather than just 10 element values).

      For my next trick, hashes within hashes within hashes… if Redis supports it!… versus MarkLogic 100K documents.

      Thanks for the comment!
