Project Voldemort: Scaling Simple Storage at LinkedIn

Code Alert! This is part of our continuing series on Engineering at LinkedIn. If this isn't your cup of Java, check back tomorrow for regular LinkedIn programming. In the meantime, check out some of our recent announcements, tips and tricks, or success stories.

About a month ago LinkedIn released the code for an open source distributed storage system called Project Voldemort. I wanted to give a little more information about what it is good for, how it came to be, and what our plans are for the future.

Some Background

Like a lot of websites, LinkedIn started with a single big database and a cluster of front-end servers (unlike a lot of websites, it also started out with a big social graph held in memory on remote machines, but that is a different story). As we grew, this database got split into a variety of remote services for serving up profiles, performing searches, interacting with groups, maintaining network updates, fetching companies, etc. These databases may have read-only replicas, but we didn't have a system for scaling writes.

Unfortunately for engineers and DBAs, many of the rich features that people expect from a modern internet site require massive data sets, high write loads, or both. This became a problem as we looked at how to scale write-intensive features like Who's Viewed My Profile, which require as many updates as reads. We faced a similar scale problem for offline computed data, such as finding similar profiles: the set of all user profiles is very large, and even a modest subset of all user profile pairs is huge.

To handle this problem we looked at the systems other internet companies had built. We really like Google's Bigtable, but we didn't think it made sense to try to build it without access to a low-latency GFS implementation. Our primary goal was to get low-latency, high-availability access to our data. For complex analysis we had Hadoop and databases, for complex queries we had a distributed search system, and the goal wasn't to duplicate any of these. We were inspired by Amazon's Dynamo paper, which seemed to meet our needs and be feasible to implement with low-latency queries; much of the design for Project Voldemort comes from that.

Our experience with the system so far has been quite good. For applications that handle hundreds of millions of reads and writes per day, we were able to cut request latency from over 400ms to under 10ms while simultaneously increasing the amount of data we store. (A small sketch of the get/put access pattern the system is built around appears at the end of this post.)

Open Source

LinkedIn is a big open source user, and we have contributed back a number of the improvements to Lucene we have made, such as Zoie, Kamikaze, and Bobo. Most of the things we build are pretty LinkedIn-specific, but things like search and storage are largely stand-alone, and we are happy to get other users (and contributors!). I myself have been a long-time open source lurker: I am the first to check out the source, but I rarely have the time to make any improvements. Fortunately, even if most people are as lazy as me, not all are. In the last few months we have received close to 50 contributions from people around the world. Some are small, just doing a little cleanup; others have been quite substantial, introducing new features or major code improvements. The long-term success of an open source project depends on its not being controlled by a single company, person, or group, but on forming a real, self-sustaining group of interested developers. That is our goal in working on the open source project.
LinkedIn is not a storage systems company, and neither are the other web companies facing some of the same problems, so we think we can all benefit by sharing our work in this area.

The Future

So what is next for the project? The most important work on a storage system is always improving performance and reliability, but there are a couple of other things in the works. We are working on making it easier to incrementally add servers to a cluster, improving our support for batch-computed data from Hadoop, and implementing clients in other programming languages.

For more information on the project, check out the main site. We are always looking for contributors, so if you are interested, check out the projects page and mailing list. Ideas, bug reports, patches, etc. are all gladly accepted. If you are interested in working on similar projects full time, check out the job openings at LinkedIn. Stay tuned for upcoming blog posts that will reveal more details of the system internals and some of the lessons we have learned.
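As promised above, here is a minimal sketch of the get/put access pattern Voldemort is built around, written against the Java client API described in the project's documentation. The bootstrap URL, the "test" store name, and the string keys and values follow the project's single-node quickstart configuration; exact class and method names may differ between releases, so treat this as an illustration rather than a reference.

    import voldemort.client.ClientConfig;
    import voldemort.client.SocketStoreClientFactory;
    import voldemort.client.StoreClient;
    import voldemort.client.StoreClientFactory;
    import voldemort.versioning.Versioned;

    public class VoldemortExample {
        public static void main(String[] args) {
            // Bootstrap cluster metadata from any node in the cluster;
            // the URL and store name here come from the quickstart config.
            StoreClientFactory factory = new SocketStoreClientFactory(
                    new ClientConfig().setBootstrapUrls("tcp://localhost:6666"));
            StoreClient<String, String> client = factory.getStoreClient("test");

            // Simple write and read; keys are partitioned across the cluster,
            // so no single node has to absorb all of the write traffic.
            client.put("some_key", "some_value");
            Versioned<String> value = client.get("some_key");
            System.out.println(value.getValue());

            // Read-modify-write: putting back the fetched Versioned value lets
            // the Dynamo-style versioning detect concurrent updates.
            value.setObject("new_value");
            client.put("some_key", value);
        }
    }

The point of the sketch is simply that the interface is a key-value get/put rather than SQL; anything needing richer queries stays in the databases and search systems mentioned above.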