Amazon AWS S3

Amazon AWS S3 is scalable storage as a service. Typically, you would use it when you do not know in advance how much storage you are going to need, but you may need lots of it. Just like Amazon AWS DynamoDB, it saves you time and money, because it is hosted in the Amazon AWS cloud and you only pay for the resources you actually use. Unlike the Amazon AWS DynamoDB connector, the Amazon AWS S3 connector saves multiple interactions in JSON format in plain-text files.

Configuring Amazon AWS S3 for Push delivery

To use Amazon AWS S3 with Push delivery, follow the instructions below, skipping any steps you have already completed. It does not matter which operating system you use, as long as you can connect to the internet:

  1. Create a new S3 bucket. (You need to have an Amazon AWS account.)
    There are two ways of doing this: programmatically or via a web browser. In the examples on this page we will use an S3 bucket called datasift-s3.

  2. Create a new folder inside the datasift-s3 bucket.
    There are two ways of doing this, too: programmatically or via a web browser. In the examples on this page we will use a folder called interactions.

  3. You are now ready to set up the Amazon AWS S3 connector.
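If you prefer to script the first two steps, here is a minimal sketch in Python using boto3 (an assumption on our part; any S3 SDK or the AWS CLI works equally well). It requires boto3 to be installed and AWS credentials to be configured:

```python
def create_bucket_and_folder(bucket="datasift-s3", folder="interactions/"):
    """Create the bucket and an empty folder-marker object.

    Requires boto3 and configured AWS credentials. Bucket names are
    global across all of S3, so the example name may already be taken.
    """
    import boto3  # imported here so the sketch can be read without boto3 installed

    s3 = boto3.client("s3")
    # Outside us-east-1, create_bucket also needs
    # CreateBucketConfiguration={"LocationConstraint": region}.
    s3.create_bucket(Bucket=bucket)
    # S3 has no real directories; an empty object whose key ends in "/"
    # is what the web console displays as a folder.
    s3.put_object(Bucket=bucket, Key=folder)
```

Calling `create_bucket_and_folder()` with no arguments reproduces the `datasift-s3` bucket and `interactions` folder used in the examples on this page.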

Configuring Push for Amazon AWS S3 delivery

  1. To enable delivery, you will need to define a stream or a Historics query. Both return important details required for a Push subscription: a successful stream definition returns a hash, and a Historics query returns an id. You will need one (but not both) to set the value of the hash or historic_id parameter in a call to /push/create. You can obtain that information with a call to /push/get or /historics/get, or you can use the DataSift dashboard.

  2. Once you have the stream hash or the Historics id, you can give that information to /push/create. In the example below we are making that call using curl, but you are free to use any programming language or tool.

    curl -X POST 'https://api.datasift.com/v1/push/create' \
    -d 'name=connectors3' \
    -d 'hash=SourceStreamHash' \
    -d 'output_type=s3' \
    -d 'output_params.bucket=datasift-s3' \
    -d 'output_params.directory=interactions' \
    -d 'output_params.acl=private' \
    -d 'output_params.auth.access_key=YourAmazonAWSAccessKey' \
    -d 'output_params.auth.secret_key=YourAmazonAWSSecretKey' \
    -d 'output_params.delivery_frequency=0' \
    -d 'output_params.max_size=10485760' \
    -d 'output_params.file_prefix=Datasift' \
    -H 'Authorization: datasift-user:your-datasift-api-key'
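The same call can be sketched in Python. This example only builds the request rather than sending it; the endpoint URL is an assumption based on the /push/create endpoint named above, and the placeholder credentials must be replaced with your own:

```python
# Sketch of the /push/create call from the curl example, in Python.
# The base URL is assumed; substitute real keys before sending.
from urllib.parse import urlencode
from urllib.request import Request, urlopen


def build_push_create(stream_hash, access_key, secret_key):
    """Return (url, body, headers) for a /push/create request."""
    params = {
        "name": "connectors3",
        "hash": stream_hash,
        "output_type": "s3",
        "output_params.bucket": "datasift-s3",
        "output_params.directory": "interactions",
        "output_params.acl": "private",
        "output_params.auth.access_key": access_key,
        "output_params.auth.secret_key": secret_key,
        "output_params.delivery_frequency": "0",
        "output_params.max_size": "10485760",
        "output_params.file_prefix": "DataSift",
    }
    url = "https://api.datasift.com/v1/push/create"  # assumed endpoint URL
    headers = {"Authorization": "datasift-user:your-datasift-api-key"}
    return url, urlencode(params).encode(), headers


url, body, headers = build_push_create(
    "SourceStreamHash", "YourAmazonAWSAccessKey", "YourAmazonAWSSecretKey")
# To actually send it:
# response = urlopen(Request(url, data=body, headers=headers))
```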

  3. For more information, read the step-by-step guide to the API to learn how to use Push with DataSift's APIs.

  4. When a call to /push/create is successful, you will receive a response that contains a Push subscription id. You will need that information to make successful calls to all other Push API endpoints (/push/delete, /push/stop, and others). You can retrieve the list of your subscription ids with a call to /push/get.

  5. You should now check that the data is being delivered to your Amazon AWS S3 bucket and folder. Log in to your AWS account and examine the contents of the datasift-s3/interactions folder. When DataSift is able to connect and deliver interactions to this directory, it uses filenames that follow the patterns described in the output_params.file_prefix output parameter definition later on this page.

    Please remember that the earliest time you can expect the first data delivery is one second after the period of time specified in the output_params.delivery_frequency parameter. If there is a longer delay, either the stream has no data in it or there is a problem with your server's configuration. In the first case, preview your stream using the DataSift web console; in the second, make a call to /push/log to see whether there are any clues in there.

    Please make sure that you watch your usage and add funds to your account when it runs low. Also, stop any subscriptions that are no longer needed; otherwise you will be charged for their usage. There is no need to delete them: you can have as many stopped subscriptions as you like without paying for them. Remember that any subscriptions that were paused automatically due to insufficient funds will resume when you add funds to your account.

  6. To stop delivery, call /push/stop. To remove your subscription completely, call /push/delete.

  7. Familiarize yourself with the output parameters (for example, the bucket name) you'll need to know when you send data to an Amazon AWS S3 bucket.

IAM permissions

IAM permissions provide a way for an Amazon Web Services master user to delegate file and directory permissions to other users. For instance, you might want to allow a user access only to the directory where they push their DataSift data. Here's an example that you can modify and submit to AWS:

  "Version": "2012-10-17",
  "Statement": [
      "Effect": "Allow",
      "Action": "s3:list*",
      "Resource": "arn:aws:s3:::bucket_name_here",
      "Condition": {"StringEquals":{"s3:prefix":["my_directory/"]}}
      "Effect": "Allow",
      "Action": "s3:list*",
      "Resource": "arn:aws:s3:::bucket_name_here",
      "Condition": {"StringLike":{"s3:prefix":["my_directory/*"]}}
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket_name_here/my_directory/*"
      "Effect": "Allow",
      "Action": "s3:*",
      "Resource": "arn:aws:s3:::bucket_name_here/my_directory/file_prefix_here$folder$"

The first two statements allow the user to list files in the "my_directory" directory inside the "bucket_name_here" bucket. The last two statements allow the user to put/move/delete files inside the "my_directory" directory in the same bucket.
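Because you will usually want to substitute your own bucket and directory names, here is a stdlib-only sketch that builds the same policy programmatically (the file_prefix_here token stays a placeholder, exactly as above):

```python
# Build the restricted IAM policy shown above for any bucket/directory.
import json


def restricted_policy(bucket, directory):
    """Return the IAM policy document as a Python dict."""
    arn = "arn:aws:s3:::" + bucket
    return {
        "Version": "2012-10-17",
        "Statement": [
            {"Effect": "Allow", "Action": "s3:list*", "Resource": arn,
             "Condition": {"StringEquals": {"s3:prefix": [directory + "/"]}}},
            {"Effect": "Allow", "Action": "s3:list*", "Resource": arn,
             "Condition": {"StringLike": {"s3:prefix": [directory + "/*"]}}},
            {"Effect": "Allow", "Action": "s3:*",
             "Resource": arn + "/" + directory + "/*"},
            {"Effect": "Allow", "Action": "s3:*",
             "Resource": arn + "/" + directory + "/file_prefix_here$folder$"},
        ],
    }


# Print the JSON you would paste into the Policy Document box.
print(json.dumps(restricted_policy("datasift-s3", "interactions"), indent=2))
```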

In the user tab of the security credentials page in the Amazon Web Services web console you can create a new user:

  1. Click Create New Users.
  2. Enter a username such as "restricted_credentials".
  3. Click Download credentials (this gives you a file containing the access_key and secret_key that you put into your DataSift destination).
  4. Back on the list of users check the box next to "restricted_credentials".
  5. Choose the Permissions tab at the bottom of the screen.
  6. Click Attach User Policy.
  7. Choose Custom Policy then Select.
  8. Type a Policy Name such as "restricted_policy".
  9. Paste the JSON shown above into the Policy Document box.

You're done! Now you can use the access_key and secret_key from the downloaded file and DataSift will push to S3 with the restricted and more secure credentials.

HTTP headers

Requests made by the DataSift S3 connector include additional headers. They contain useful information about the delivered data, which you can use to create unique filenames, database rows/tables, or content handlers.

Element: Content:

X-Datasift-Hash - For a Historics query, this contains the Historics id. For a stream, this contains the stream hash.
X-Datasift-Hash-Type - Either "historic" or "stream".
X-Datasift-Id - The subscription id for this query.
X-Datasift-Remaining-Bytes - The number of bytes remaining in the buffer.
Content-Encoding - Set to "gzip" if the content is in GZIP format. If the content is not compressed, this header does not need to be present.
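As an illustration, a handler on your side might use these headers to name files and uncompress payloads. The headers are shown here as a plain dict and the filename scheme is invented for the example; adapt both to your own framework:

```python
# Sketch: route one delivered batch using the X-Datasift-* headers.
import gzip


def handle_delivery(headers, body):
    """Return (filename, payload_bytes) for one delivered batch."""
    sub_id = headers["X-Datasift-Id"]
    kind = headers["X-Datasift-Hash-Type"]  # "historic" or "stream"
    if headers.get("Content-Encoding") == "gzip":
        body = gzip.decompress(body)
    # Invented naming scheme for the example.
    filename = "%s-%s.json" % (kind, sub_id)
    return filename, body


headers = {"X-Datasift-Id": "abc123", "X-Datasift-Hash-Type": "stream"}
name, data = handle_delivery(headers, b'{"interactions": []}')
print(name)  # -> stream-abc123.json
```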

GZIP compression

The default data delivery format used by DataSift is uncompressed plain-text JSON. When your server cannot process large amounts of data, or when you do not have enough bandwidth, you should consider using compression. DataSift is happy to deliver compressed data; all it takes is adding one more parameter to your /push/create call. Remember to store and uncompress the data you receive on your side.
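Uncompressing a gzipped delivery takes one call in Python. The payload below is fabricated so the sketch is self-contained:

```python
# Sketch: uncompress a gzipped delivery, then parse it as JSON.
import gzip
import json

# Stand-in for a gzip-compressed file delivered to your bucket.
delivered = gzip.compress(b'{"interactions": []}')

doc = json.loads(gzip.decompress(delivered))
print(sorted(doc))  # -> ['interactions']
```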


Twitter sends delete messages which identify Tweets that have been deleted. Under your licensing terms, you must process these delete messages and delete the corresponding Tweets from your storage.
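A hedged sketch of honoring those delete messages, assuming a json_new_line payload and assuming deletes are marked with a top-level "deleted" flag; verify both assumptions against the payloads you actually receive before relying on this:

```python
# Sketch: separate delete messages from ordinary interactions.
# ASSUMPTION: deletes carry a top-level "deleted" flag.
import json


def split_deletes(payload):
    """Split a json_new_line payload into (interactions, deletes)."""
    keep, deletes = [], []
    for line in payload.splitlines():
        if not line.strip():
            continue
        obj = json.loads(line)
        (deletes if obj.get("deleted") else keep).append(obj)
    return keep, deletes


payload = '{"interaction": {"id": "1"}}\n{"deleted": true, "interaction": {"id": "2"}}'
keep, deletes = split_deletes(payload)
print(len(keep), len(deletes))  # -> 1 1
```

Each object in `deletes` identifies a Tweet you must remove from your own storage.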

Output parameters

Parameter: Description:

output_params.format
default = json_meta

The output format for your data:
  • json_meta - The current default format, where each payload contains a full JSON document. It contains metadata and an "interactions" property that has an array of interactions.
  • json_array - The payload is a full JSON document, but just has an array of interactions.
  • json_new_line - The payload is NOT a full JSON document. Each interaction is flattened and separated by a line break.

If you omit this parameter or set it to json_meta, your output consists of JSON metadata followed by a JSON array of interactions (wrapped in square brackets and separated by commas).

Take a look at our Sample Output for File-Based Connectors page.

If you select json_array, DataSift omits the metadata and sends just the array of interactions.

If you select json_new_line, DataSift omits the metadata and sends each interaction as a single JSON object.
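A sketch of parsing each of the three formats; the sample payloads are minimal fabrications, not verbatim DataSift output:

```python
# Sketch: extract the list of interactions from each output format.
import json


def parse_payload(payload, fmt):
    """Return the list of interactions from a delivered payload."""
    if fmt == "json_meta":
        return json.loads(payload)["interactions"]
    if fmt == "json_array":
        return json.loads(payload)
    if fmt == "json_new_line":
        return [json.loads(line) for line in payload.splitlines() if line.strip()]
    raise ValueError("unknown format: " + fmt)


assert parse_payload('{"count": 1, "interactions": [{"id": 1}]}', "json_meta") == [{"id": 1}]
assert parse_payload('[{"id": 1}]', "json_array") == [{"id": 1}]
assert parse_payload('{"id": 1}\n{"id": 2}', "json_new_line") == [{"id": 1}, {"id": 2}]
```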

output_params.auth.access_key

The access key for the S3 account that DataSift will send to.

Make sure that this value is properly encoded, otherwise your /push/create request will fail.

Please create custom credentials to ensure that access to your Amazon S3 account is restricted.
output_params.auth.secret_key

The secret key for the S3 account that DataSift will send to.

Make sure that this value is properly encoded, otherwise your /push/create request will fail.

Please create custom credentials to ensure that access to your Amazon S3 account is restricted.

output_params.delivery_frequency

The minimum number of seconds you want DataSift to wait before sending data again.


output_params.max_size

The maximum amount of data that DataSift will send in a single batch:

  • 102400 (100KB)
  • 256000 (250KB)
  • 512000 (500KB)
  • 1048576 (1MB)
  • 2097152 (2MB)
  • 5242880 (5MB)
  • 10485760 (10MB)
  • 20971520 (20MB)
output_params.bucket

The bucket within that account into which DataSift will deposit the file.

output_params.directory

An optional directory name in the bucket.

The authenticating user (defined by output_params.auth.access_key and output_params.auth.secret_key) must have permissions to create a new directory.


output_params.acl

The access level of the file after it is uploaded to S3:

  • private (Owner-only read/write)
  • public-read (Owner read/write, public read)
  • public-read-write (Public read/write)
  • authenticated-read (Owner read/write, authenticated read)
  • bucket-owner-read (Bucket owner read)
  • bucket-owner-full-control (Bucket owner full control)
output_params.file_prefix
default = DataSift

An optional prefix to the filename. Each time DataSift delivers a file, it constructs a name in this format:

file_prefix + subscription id + timestamp.json


The encryption type used by Amazon. It can be:

  • none (default)
  • AES256

The Amazon AWS region in which the bucket specified in output_params.bucket is located. It can be:

  • blank (default)
  • us-east-1
  • us-west-1
  • us-west-2
  • eu-west-1
  • eu-central-1
  • ap-northeast-1
  • ap-southeast-1
  • ap-southeast-2
  • sa-east-1
  • cn-north-1
  • us-gov-west-1

If you hit the Test button in the UI or hit the /push/validate endpoint, DataSift will prompt you for the region if it needs a region and you have not supplied one.

Note that if your bucket name is of the form a.b.c (that is, it contains at least two periods), you must specify a region.

output_params.compression

The compression setting that you want DataSift to use:

  • none
  • gzip

Data format delivered: JSON document.

Storage type: For each delivery, DataSift sends one file containing all the available interactions.


Take care when you set the max_size and delivery_frequency output parameters. If your stream generates data faster than you allow DataSift to deliver it, the buffer will fill up until it reaches the point where data may be discarded.
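As a back-of-envelope check, the buffer can only drain if your delivery settings keep up with the stream's data rate. This sketch is a deliberate simplification (it ignores buffer capacity and bursty traffic):

```python
# Sketch: will a given (delivery_frequency, max_size) pair keep up with
# a stream producing `rate_bytes_per_sec` bytes per second?
def buffer_is_safe(rate_bytes_per_sec, delivery_frequency, max_size):
    if delivery_frequency == 0:  # continuous delivery
        return True
    # Data accumulated between deliveries must fit in one batch.
    return rate_bytes_per_sec * delivery_frequency <= max_size


print(buffer_is_safe(2048, 60, 10485760))     # -> True
print(buffer_is_safe(1048576, 60, 10485760))  # -> False
```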


There are a large number of S3 SDKs available. Please refer to the Amazon AWS SDK list.