Golang Stream Data to S3

  • Status: Closed
  • Prize: $1000
  • Entries Received: 6
  • Winner: jorissuppers

Contest Brief

We are looking to contract several Golang developers to help build out our backend microservices. To do so, we are offering a contest to find the best Golang developers in the world. We welcome both independent freelancers and outsourcing companies to apply. This is your chance to show off your skills and work on a very meaningful project.


Public Clarification Board

  • TwoHat
    Contest Holder
    • 4 years ago

    It should have only one writer per file. If you generate a GUID and append it to the filename, you'll know it is yours.

    1. yadavgajender087
      • 4 years ago

      Usage of ./s3_uploader:
      -acl="bucket-owner-full-control": ACL for new object
      -bucket="": S3 bucket name (required)
      -chunk_size=50MB: multipart upload chunk size (bytes, understands standard suffixes like "KB", "MB", "MiB", etc.)
      -expected_size=0: expected input size (fail if out of bounds)
      -key="": S3 key name (required; use / notation for folders)
      -mime_type="binary/octet-stream": Content-type (MIME type)
      -region="us-west-2": AWS S3 region
      -retries=4: number of retry attempts per chunk upload
      -sse=false: use server side encryption

  • TwoHat
    Contest Holder
    • 4 years ago

    At least 10,000 QPS using at most an 8-core machine.

    1. yadavgajender087
      • 4 years ago

      Is this one right? S3 has a maximum multipart part count of 10,000, so the number of parts is total_input_size / chunk_size, which must stay at or below 10,000.

  • yadavgajender087
    • 4 years ago

    Stream to S3 from stdin using concurrent, multipart uploading.
    Intended for use with sources that stream data fairly slowly (like RDS dumps), such that getting the initial data is the dominant bottleneck. It is also useful for uploading large files as quickly as possible via concurrent multipart uploading.

  • TwoHat
    Contest Holder
    • 4 years ago

    Contest closes tomorrow. Looking forward to all the great submissions!

  • ankurs13
    • 4 years ago

    Is there any QPS expectation for this service (under what constraints)? Also, what should happen if the file corresponding to the message already exists in S3 (when the program starts)? Do we overwrite the file or append to it? Will there be multiple writers to the same log file? Do we need to handle that situation?


