Consultancy required for a Big Data / Spark / Kafka implementation. The project involves handling 200-300 TB of data. Expectations are as follows:
1. Architecture that scales up to this volume of data.
2. Implementation-level HLD and LLD.
3. Performance tuning.
4. Dashboards and reports built on this data.
5. Selection of the right technologies from the Hadoop ecosystem.
Candidates with prior consultancy experience handling data at this scale will be preferred.
24 freelancers are bidding on average $38/hour for this job
Hey, I have already deployed clusters for many organisations and startups. I can do this task for you while keeping all the performance parameters in mind.
Hello, I am working as a Big Data Consultant. I have experience with big data technologies such as Hadoop, Spark, Pig, and Hive. Kindly contact me for more details. Thanks.
Very good experience in cluster setup for processing huge volumes of data using Kafka, Spark, HBase, and Scala/Python. I have written Spark applications in Scala to parse data.
Ready to help provide a solution to your big data problem. 9 years of experience in designing and implementing big data and cloud-oriented solutions using AWS, Azure, Cloudera, Spark, Hive, Pig, HBase, etc.