We require consultancy on a Big Data / Spark / Kafka implementation. The project involves handling 200-300 TB of data. The expectations are:
1. An architecture that scales to this volume of data.
2. Implementation-level HLD and LLD (high-level and low-level design).
3. Performance tuning
4. Dashboards and reports built on this data.
5. Selection of the right technologies from the Hadoop ecosystem.
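For point 1, a back-of-the-envelope cluster-sizing calculation is a common starting point. The sketch below is illustrative only: the replication factor, overhead headroom, and per-node disk capacity are assumptions, not figures from this posting.

```python
import math

def hdfs_nodes_needed(data_tb, replication=3, overhead=1.2, disk_per_node_tb=48):
    """Estimate data nodes needed to store `data_tb` of logical data.

    Assumptions (hypothetical, for illustration):
    - HDFS-style replication factor of 3
    - 20% headroom for temp/shuffle/intermediate data
    - 48 TB of usable disk per node
    """
    raw_tb = data_tb * replication * overhead
    return math.ceil(raw_tb / disk_per_node_tb)

# 300 TB logical -> 300 * 3 * 1.2 = 1080 TB raw -> 1080 / 48 = 22.5 -> 23 nodes
print(hdfs_nodes_needed(300))  # 23
```

Numbers like these only bound the storage side; compute and memory sizing for Spark workloads would be estimated separately from job profiles.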
Candidates with prior consultancy experience handling data at this scale will be preferred.
28 freelancers are bidding on average $38/hour for this job
Hey, I have already deployed clusters for many organisations and startups. I can do this task for you while keeping all the performance parameters in mind.
Ready to help provide solutions to Big Data problems. 9 years of experience designing and implementing big data and cloud-oriented solutions, including AWS, Azure, Cloudera, Spark, Hive, Pig, and HBase.