Blog

Jairam Chandar
Updated on Friday, 5 April, 2013 - 15:03

A year ago, DataSift released Historics, a product that enables users to access content from the past. Demand for it has grown massively over the past year, and we have had to make many optimizations to keep up with not just the demand but the scale of our ever-growing archive.

Our Historics archive is now very close to one petabyte in size, and we are adding about two terabytes to it each day. We run over 2,000 Hadoop jobs every month that cumulatively scan over nine trillion records. Hence, we must ensure that every single component involved in extracting information from this archive is as efficient as possible.

Now, here’s the thing about DataSift: we are never satisfied with our improvements. We always strive to make our platform better and faster. A while back, our Chief Technical Architect, Lorenzo Alberton, wrote a blog post on how we optimized our Hadoop jobs. Those optimizations vastly improved the speed at which we scan and filter our archive to give users what they want quickly. We concentrated mainly on improving our I/O times, which we achieved by improving our job scheduling to run multiple user queries in one Hadoop job, thereby reading the data once and filtering it for multiple users.

Still not satisfied, we have since made two major changes:

  • Moved our archive from HBase to raw Hadoop Distributed File System (HDFS).
  • Changed our scheduling algorithm in order to give each user a fairer share of the cluster. 

Moving our archive from HBase to raw HDFS

HBase is a brilliant solution for high write-throughput use cases. However, the extra overhead it incurred when querying data through Hadoop was giving us a lot of grief. We hit the ceiling on the I/O throughput we could achieve, mainly because Hadoop over HBase has to go through the HBase RegionServers to ensure that any data still in memory (yet to be flushed to disk) is read as well. The idea behind using HBase was to provide random access to the data, but we realized that almost all our applications working on the archive were doing streaming reads. The ideal solution for us, then, was to move our archive to raw HDFS.

When we ran a Historics query on some of the migrated data in our archive, we saw roughly a threefold improvement in job completion times. Previously, it could take a query up to 15 hours to read and filter a month's worth of archive; now we can do the same in close to five hours. This was, of course, for a simple query, but we have seen similar improvements for complex queries too.

The migration to raw HDFS was more difficult than you might imagine. We couldn't afford to simply shut down Historics while we migrated the data, so we came up with a solution that can connect to both the old archive on HBase and the migrated archive on HDFS. This solution is still in use while we migrate the rest of the archive to raw HDFS. The migration also meant we had to provision new hardware to accommodate what would be a second archive the same size as the first, while we continued to write every new interaction coming our way.

New and improved job scheduling 

One of the main concerns for us while designing the Historics system was to ensure all users receive a fair share of our computing cluster. In his blog, Lorenzo explained the original queuing algorithm we used for all Historics queries. We have now updated this algorithm to improve the user experience. 

We have introduced user-based queues where each user gets their own queue of queries. We break these queries into chunks that represent a day’s worth of data from the archive. For example, a Historics query for a week’s worth of data will consist of seven chunks. These chunks are then added to a user queue based on how old the chunk is (oldest first). It is important to note the difference between a job and a chunk. A job refers to the Hadoop job on the cluster, whereas a chunk is simply a day’s worth of work. Multiple chunks can run in the same job. 

With the queues prepared, we pick the 'n' users with the oldest queries in their queues, where 'n' is the pre-determined size of the user pool we process at any given time. We then round-robin among these user queues, picking one chunk at a time to run.

For example, two users with two chunks each would have their chunks executed in alternating order: user A's first chunk, user B's first chunk, user A's second chunk, and finally user B's second chunk.

The reason we pick only 'n' users is that we don't want to penalize older queries if we suddenly receive a lot of queries from users who are not yet in the user pool.

Fig 1: Queue evaluation

New user queries enter the user pool in one of two ways. If the maximum user pool size 'n' has not been reached, they are simply added to the pool. If it has been reached, they have to wait. When the wait time is long, we say that the query is 'starved'. To address the problem of starvation, we introduced a starvation interval: if a user's wait time exceeds the starvation interval, we increase the user pool size to accommodate that user. However, we always revert to the configured pool size once the starvation load drops.
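To make this concrete, here is a minimal Python sketch of how such a pool-selection policy could work. It is an illustration only, not our actual scheduler code, and names such as pool_size, starvation_interval, and the submitted_at field are placeholders.

    import time

    def select_user_pool(user_queues, pool_size, starvation_interval, now=None):
        """Pick the users whose queues will be serviced in this scheduling pass.

        user_queues maps a user id to that user's queue of chunks, oldest query
        first; each chunk carries a 'submitted_at' timestamp (an assumption).
        """
        now = now or time.time()

        # Rank users by the age of the oldest query waiting in their queue.
        ranked = sorted(
            (uid for uid, queue in user_queues.items() if queue),
            key=lambda uid: user_queues[uid][0]['submitted_at'],
        )

        # Take the 'n' users with the oldest queries...
        pool = ranked[:pool_size]

        # ...but grow the pool for anyone whose wait has exceeded the
        # starvation interval; the next pass naturally shrinks back to the
        # configured size once the starvation load drops.
        for uid in ranked[pool_size:]:
            if now - user_queues[uid][0]['submitted_at'] > starvation_interval:
                pool.append(uid)

        return pool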

Then, the user queues are evaluated and tickets are allocated for all the chunks in the user queues. 

Fig 2: Ticket evaluation

Once all the tickets are allocated, it's a simple case of picking the chunks with the lowest ticket numbers to run first. At this point, we check to see if there is a chunk without a ticket (a new one) that would query the same time period. If so, it is piggybacked with the chunk we just picked. This ensures we minimize I/O and reduce user wait times.
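To illustrate the ticketing and piggybacking steps, here is a similarly hedged Python sketch; again, field names such as ticket and day are placeholders rather than our real data model.

    def allocate_tickets(user_queues, pool):
        """Round-robin over the pooled user queues, handing out ascending tickets."""
        ticket = 0
        pending = {uid: list(user_queues[uid]) for uid in pool}
        while any(pending.values()):
            for uid in pool:
                if pending[uid]:
                    chunk = pending[uid].pop(0)
                    chunk['ticket'] = ticket   # annotate the shared chunk dict
                    ticket += 1

    def pick_next(ticketed, unticketed):
        """Run the lowest-ticket chunk next, piggybacking any new (unticketed)
        chunk that covers the same day so that day's archive is read only once."""
        chunk = min(ticketed, key=lambda c: c['ticket'])
        piggybacked = [c for c in unticketed if c['day'] == chunk['day']]
        return [chunk] + piggybacked

Run against the two-user example above, allocate_tickets hands out tickets 0 to 3 in the same alternating order.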

Time estimation

We are now able to estimate how long it will take for a Hadoop job to complete. A user's Historics query is broken into chunks that are then run as part of Hadoop jobs, and the time it takes to complete the entire query is the time it takes to complete the last chunk in its queue. The previous section detailed how tickets are assigned to each chunk; estimating time is then a matter of iterating over all the ticketed chunks, calculating the time for each one, and accumulating the totals.

The estimated time for a job to complete depends on four main factors:

  1. Rule complexity: the more complex the rule(s), the longer the filtering engine will take to process the interactions.
  2. Job cardinality (number of chunks running together): if there are multiple chunks running in a job, it means there are multiple rules that the filtering engine has to load and apply.
  3. Sources being queried: higher-volume sources take longer to process.
  4. Sample rate: a smaller sample rate means less I/O and fewer interactions to filter and, therefore, a shorter execution time.
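Putting the pieces together, a deliberately simplified Python sketch of the accumulation might look like the following. It assumes chunks run one after another in ticket order and that a per_chunk_estimate function already weighs the four factors above; both are simplifications for illustration.

    def estimate_finish_times(ticketed_chunks, per_chunk_estimate):
        """Accumulate per-chunk time estimates in ticket order; a Historics query
        is complete when its last (highest-ticket) chunk is complete."""
        elapsed = 0.0
        finish_times = {}
        for chunk in sorted(ticketed_chunks, key=lambda c: c['ticket']):
            elapsed += per_chunk_estimate(chunk)       # seconds for this chunk
            finish_times[chunk['query_id']] = elapsed  # later chunks overwrite earlier ones
        return finish_times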

Currently, we lack enough information to take all these factors into account, but we have already introduced additional monitoring so that we can factor in all of the above when estimating how long a job will take. You can expect job time estimates to become more accurate going forward.

Summary

While we are happy with the improvements we have made so far, we are also certain there is room for some more! We continue to analyze our Historics jobs, which will help us respond to any abnormalities more quickly and will help us improve our job estimation algorithms. We are also in the process of improving our message queues so that we can move the data through the rest of the pipeline faster. We are tweaking our filtering engine to further reduce the time it takes to get you the data you want. We look forward to making our processes faster and more robust.

Jacek Artymiak
Updated on Monday, 11 March, 2013 - 16:17

Like any developer-friendly company, DataSift, too, has fans of the good old Vim editor working for us and with us. And since we spend so much time inside Vim, it is no wonder that we use it to write CSDL too. Which is why today I'm especially happy to announce that CSDL syntax highlighting has been added to the Vim source code repository and should soon be shipping with all major operating systems worth using. (An OS worth using is one that ships with Vim, of course.)

In the meantime, if you absolutely cannot wait to try it but don't want to build Vim from sources, grab a copy of the datasift-vim repository, unpack it, and follow the instructions. (You must have Vim installed first, of course.)

Vim will highlight CSDL automatically when you edit files with the .csdl filename extension. If you want to force CSDL syntax highlighting while you are editing, do the following:

  • Press Esc
  • Type :syn on
  • Press Enter
  • Press Esc
  • Type :set syntax=csdl
  • Press Enter

To achieve the same effect in gVim or MacVim, select Syntax -> Show filetypes in menu and then select Syntax -> C -> CSDL.

So, there you have it. If you like Vim and you like CSDL, the two are best pals now. Enjoy our syntax file, and if you spot any problems with it, let us know. The datasift-vim project is Open Source and we do welcome patches, comments, and suggestions.

And if you like Vim, don't forget the good cause that Vim has been promoting for years. 

PS. I'd like to thank Bram Moolenaar for adding my patches to Vim. It means a lot.

Jacek Artymiak
Updated on Monday, 25 February, 2013 - 18:29

Social media gives us a way to sample trends and sentiment in real time. Consequently, it is very important that the analysis of that data also happens in real time. And we want to help, because here at DataSift we want our platform to be the Swiss Army knife of social media analysis tools. We try to be flexible and do as much of the hard work as possible so that you can focus on analyzing the data instead of having to think about how to feed it into your processing pipeline.

We strive to achieve that goal with our advanced Push data delivery system and its ever-growing set of connectors, which can deliver the data you filter for to a variety of destinations. These could be third-party cloud storage services, such as Amazon AWS DynamoDB, or an instance of CouchDB running on your own server. If there is a way to connect to a machine via the internet, we want to be able to deliver data to it.

When time is of the essence and you absolutely must be able to start analyzing data as soon as you receive it, keeping data in RAM will help you shorten the time needed to access and process it. One popular tool for managing data in memory is Redis, an Open Source key-value store. And today we are very happy to announce the immediate availability of our new Redis connector, which will deliver the data you filter for straight to your Redis instance.

Getting started with Redis

It is your responsibility to set up your own instance of Redis and make sure it can be reached via the internet. If you have never used Redis before, we have help to get you started. Then it is just a matter of setting up a subscription via our Push API. The data will then be delivered straight into your Redis server ready for processing.

At your end, you will need a way to connect to your Redis server, and you can do that with one of the many available Redis clients. You should be able to find one that fits your needs quite easily.

The client alone is just part of the equation. You will also need software that can unpack the interactions you get from DataSift from JSON into another format and look for the answers to your questions. Just like Redis, JSON is very well supported, and many programming languages include appropriate libraries by default. As for data analysis tools, you will be the best judge of their usefulness, and it is always a good idea to ask your community for suggestions when you are not sure.
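For example, a minimal consumer written in Python with the redis-py client could look like the sketch below. The key name and the fields it reads are placeholders; the exact key and payload layout depend on how you configure your Push subscription.

    import json
    import redis

    # Connect to the Redis instance that your Push subscription delivers to.
    r = redis.Redis(host='redis.example.com', port=6379, db=0)

    while True:
        # 'datasift:interactions' is a placeholder key; use whatever key your
        # Push subscription is configured to write to.
        item = r.blpop('datasift:interactions', timeout=30)
        if item is None:
            continue  # nothing arrived within the timeout
        _key, raw = item
        interaction = json.loads(raw)
        # From here, feed the interaction into your own analysis pipeline.
        print(interaction.get('interaction', {}).get('content'))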

Please remember that you will be more likely to get reliable results if you start your analysis with a well-defined data set. That is where a well-written set of CSDL filters can help you pick out the most relevant interactions for further processing.

Those pesky limits (and how to cheat around them)

Keeping data in RAM lets you avoid delays caused by slow disk read and write operations, but that convenience comes at a cost: RAM is volatile and usually not available in large quantities even on high-end servers. It is also expensive to buy. Fortunately, you can architect a solution that reads data from a Redis store and saves it to disk. You can also rent servers with 110 GB of RAM or more on an hourly basis, which can be a very cost-effective alternative to buying them or leasing them on a long-term contract. Amazon AWS EC2 High-Memory instances are one such solution.

The issue of volatility is important when you do not want to lose data. You can avoid problems by storing multiple copies of data on two or more servers either by replicating it yourself or by creating two or more Push subscriptions based on the same stream hash. You can also make backups of the data held in memory to disk.

And if you do lose data, you can retrieve it again using Historics. There will be a delay in receiving the data, which may render it no longer relevant, but please keep in mind that there is a way to "replay" your analysis, albeit at the additional cost of running a Historics query.

RAM size constraints are also fairly easy to overcome. If the data you want to analyze does not fit inside the physical memory installed in your machine, you will need to add RAM, get a machine with more RAM, or use a piece of software that can manage a farm of Redis servers, such as the Redis LightCloud manager.

Go mad!

If your social media analysis business needs to work in real time, our Redis connector is the tool that will help you get further ahead of your competition. Go mad, build something amazing, and let us know how else we could help you achieve your goals!

This post was written by Jacek Artymiak with valuable input from Ollie Parsley, the developer of the Redis Push Connector.

Jacek Artymiak
Updated on Tuesday, 21 May, 2013 - 11:57

At DataSift we love open source. We use it and we create it. As part of our commitment, we're proud to announce that a major new component of the DataSift platform, the Query Builder, is now available. It's open source and you can download it from GitHub today. Take a look at our demo page to try out the Query Builder.

What is the Query Builder?

Everyone talks about Big Data, but not many people know how to handle it. We live it. We created the Query Builder to bring the advanced functionality of DataSift to business users.

We consume over a billion items per day, processing them, augmenting them with analytical data, and making them available in JSON format. The Query Builder includes a built-in dictionary that shows all 450 of the different targets that users can include in their DataSift filters, so even novices can get started right away.

[Screenshot: the Query Builder's Advanced Logic Editor]

The Query Builder is a code generator that produces SQL-like commands that users can share. It does everything via a point-and-click interface where users create queries visually. They can use the features of the Advanced Logic Editor, shown above, to build complex filters by combining simpler ones.

Responsive design and standards compliance for the post-PC era

You worked hard on your site and the last thing you want to put on it is an ugly widget that clashes with the rest of the page. Rest assured that we've put a lot of time and effort into making sure the Query Builder is standards-compliant, responsive, and ready for post-PC touch screen devices. We strive to follow the latest standards for good design, responsiveness, and programming, be they official or commonly agreed upon. In a browser, the Query Builder supports IE7+, Firefox 5+, Safari 4.1+, Opera 12+, and Chrome 12+.

The Query Builder is built using standard tools and technologies (JavaScript, HTML5, jQuery, and CSS). The responsive design fits a broad range of screen sizes; it's fully compatible with the iPhone, Android, and iPad, as well as laptops and desktops. It even includes graphical assets for Retina-resolution displays. And since it works equally well with a mouse or a touch screen, filtering for answers in an ocean of a billion interactions is as easy as sending a Tweet from your iPhone.

Getting Started

The Query Builder project is hosted on GitHub. When you want to embed it on a web page, log into your server, change the working directory to the document root directory, and then clone the repository with a single command:

    git clone https://github.com/datasift/editor.git

Alternatively, download the project archive and unpack it to the document root directory on your web server.

In both cases, you should end up with a directory that contains a number of subdirectories. Most of the time you will only need datasift-editor/minified, unless you want to do some deep modifications of the code base and the resources. But make sure you read our configuration guides before you do that; in most cases, you only need to make small modifications to the Query Builder object initialization code. This is done by overriding the exposed configuration options.

The Query Builder produces code just like a programmer would using a code editor, but in a user-friendly way. The code is based on DataSift's Curated Stream Definition Language, CSDL, with added machine-readable comments that allow it to work with the Query Builder. This enhanced version of CSDL is known as JavaScript CSDL, or JCSDL.

This process enables users to generate and share CSDL code without knowing how to program. All that power is available without having to learn how to write a single line of code. Simply clone the Query Builder repository or upload the files that the Query Builder needs to run to your server and add eight lines of HTML code to the page where you want to embed it.

Modular and highly-customizable by nature, the Query Builder is easy to embed on a web page, blog, or inside a web view in a desktop or mobile application. You can customize it to match a variety of requirements for integration and branding.

Customizing the Query Builder

The Query Builder code you can download today from GitHub is exactly the same code we use on our website. We give you full freedom of choice when it comes to the use of our code and the approach to implementation.

The simplest form of customization you would perform might be to make the Query Builder follow the look and feel of your site. This is easily done by overriding the CSS style definitions with your own modifications. If you want to go one stage farther, you can replace the Query Builder's graphical assets with your own. The design of the CSS stylesheet is optimized to facilitate quick changes with minimal effort. When you want to add your own CSS, simply import it after the original stylesheet and all will be well.

Next, you might decide to customize the functionality and behavior of the Query Builder. You can modify the responsiveness of the interface or narrow down the choice of data sources available to users. Changes like these do not require extensive knowledge of programming and can be implemented quickly by someone with a little knowledge of JavaScript. You can find an example of reducing the functionality of the Query Builder, along with a working demo, on our developer documentation site.

We have added built-in help in the form of tool tips so that end users of the Query Builder can learn more about DataSift's targets and operators. These are downloaded directly from our servers, so any changes will appear on your users' screens as soon as they are published, without you having to do anything unless you want to jump in and create your own tool tips.

 

We also support you, the developer. We have a whole site dedicated to the subject of embedding, styling, and configuring the Query Builder. 

Connecting to DataSift

Once your implementation of the Query Builder is fully operational, it's time to connect it to our platform. You need to capture the JCSDL generated by your users, pass it on to DataSift, capture the results, and present them back to the user. You have full freedom to implement your own solution here, as well as full freedom of user management.
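As a rough sketch of the "pass it on to DataSift" step, a Python back end could relay captured CSDL to our compile endpoint using the requests library, along the lines below. Treat the endpoint path, version prefix, and authorization header format as assumptions to verify against our API documentation.

    import requests

    DATASIFT_USER = 'your-username'   # placeholder credentials
    DATASIFT_API_KEY = 'your-api-key'

    def compile_csdl(csdl):
        """Send CSDL captured from the Query Builder to DataSift for compilation
        and return the resulting stream hash."""
        response = requests.post(
            'https://api.datasift.com/v1/compile',          # assumed endpoint
            data={'csdl': csdl},
            headers={'Authorization': DATASIFT_USER + ':' + DATASIFT_API_KEY},
        )
        response.raise_for_status()
        return response.json().get('hash')

The returned hash is what you would then use to start consuming data, for example by creating a Push subscription.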

This is where you can add a lot of your own creativity and value. Processing and presentation of the results is one important area where you can create your own tools and make your users happy. We have prepared a sample implementation to get you started. Read through the code, try it, see what it does, and create your own magic. And you do not have to worry about backward compatibility. If you follow our configuration procedures, upgrading your installation of the Query Builder will be as simple as unpacking an archive.

You are also free to manage your own users in any way you like. You can choose to require your users to provide their own DataSift credentials, or you can use a single set of DataSift credentials for company-wide access without having to manage multiple accounts. Or you could manage your users' accounts for them based on their internal credentials.

So, there you have it. Now go make something amazing and let the world know about it.

Ed Stenson
Updated on Thursday, 14 February, 2013 - 13:06

DataSift is built on open source software. Here are some of the comments our developers have made on the subject:

 

    "It's like having a bigger team"

   "We learn from the best by reading and using their code."

   "Without open source, we wouldn't have PHP, we woudn't have Python, we wouldn't have Perl."

   "At DataSift, we're building a world-class platform, and we need to use the very best tools for the job."

 

From PHP to Hadoop, everything we do to filter over one billion items every day is built with components that the international community of developers has shared. Even our favorite data delivery format, JSON, is an open standard. It's obvious that the future lies in open source.

DataSift engineers contribute to and release a great deal of open source software. Some of the most important projects we use and contribute to include:

  • Apache Hadoop - distributed computing framework, including HDFS and MapReduce
  • D3.js - a JavaScript library for building dynamic, interactive data visualizations
  • Chef - configuration management tool
  • Redis - advanced key-value store
  • ZeroMQ - advanced socket library

Take a look at some of the open source projects we love and see more of the projects that DataSift's engineers are building.

 

Development

Today, we're releasing a new data tool, the visual Query Builder. It's the latest in a series of open source projects, all of which are available from DataSift's GitHub account. Here's a summary of our recent work:

Query Builder

The Query Builder is a browser-based graphical tool that allows users to create and edit filters without needing to learn the DataSift Curated Stream Definition Language (CSDL). It started life as an internal project at DataSift where our staff quickly recognized its potential. The Query Builder is a serious tool that can be used to build complex CSDL filters without using DataSift's Code Editor.

 

 

HubFlow

HubFlow is an adaptation of GitFlow and the GitFlow tools (a Git extension) for working with GitHub.

If you look at Vincent Driessen’s original blog post, he’s listed all of the individual Git commands that you need to use to create all of the different branches in the GitFlow model. They’re all standard Git commands … and if you’re still getting your head around Git (and still learning why it is different from centralized source control systems like Subversion, or distributed source control systems like Mercurial), it adds to what is already quite a steep learning curve.

Vincent created an extension for Git, called GitFlow, which turns most of the steps you need to do into one-line commands. At DataSift, we used it for six months, and we liked it - but we wanted it to do even more. We also wanted it to work better with GitHub, so to reduce confusion with the original GitFlow tools, we’ve decided to maintain our own fork of the GitFlow tools called HubFlow.

Arrow

The Arrow dashboard is a visualization tool designed to show the full capabilities of DataSift. It's a framework that helps us to visualize and analyze DataSift's output streams. The goal was to find a way to show the huge amount of information that we filter. Arrow is open source too; in other words, we built this awesome project and we want you to play with it!

The visualizations are written using the D3 library for rendering. We currently support three types of visualizations: pie charts, line charts, and maps.

We designed Arrow to be as flexible as possible, so you can pull out the visualizations and use them in your projects, or even create visualizations of your own.

Here's a glimpse of one small part of Arrow, but there's much, much more:

Dropwizard Extra

A suite of additional abstractions and utilities that extend Dropwizard, organized into several modules.

Sound of Twitter

A little application that uses DataSift to visualize sentiment from Twitter with lights and sounds. You can see a demo over on YouTube or read more on DataSift Labs.
 


 

Sublime Text CSDL plug-in

Sublime Text plugin to validate and compile DataSift CSDL, consume a sample set of interactions, and enjoy correct syntax highlighting. Do it all without leaving Sublime Text!

 

Code and documentation licensing

The majority of open source software exclusively developed by DataSift is licensed under the liberal terms of the MIT License. The documentation is generally available under the Creative Commons Attribution 3.0 Unported License. In short, you are free to use, modify, and distribute any documentation, source code, or examples within our open source projects, as long as you adhere to the licensing conditions present within those projects.

Note that our engineers like to hack on their own open source projects in their free time. For code provided by our engineers outside of our official repositories on GitHub, DataSift does not grant any type of license, whether express or implied, to such code.

 

Contact us

We support a variety of open source organizations and we're grateful to the open source community for their contributions. Our goal is to maintain our healthy, reciprocal relationship. If you have questions or encounter problems, please Tweet us at @DataSiftOS.
