Effective Mass Crawling using Apache Nutch storing data on HDFS - 27/04/2018 12:54 EDT

$30-250 USD

Closed
Posted almost 6 years ago

Paid on delivery
Crawl web data and store it in HDFS using Apache Nutch integrated with Hadoop.
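For context, the classic Nutch 1.x crawl cycle (inject, generate, fetch, parse, updatedb) maps directly onto Hadoop: when Nutch is built in deploy mode (runtime/deploy), each step is submitted as a MapReduce job and the crawldb and segments live on HDFS. Below is a minimal driver sketch of that cycle; the install path, seed directory, crawl directory and crawl depth (NUTCH_BIN, SEED_DIR, CRAWL_DIR, the range of 3) are placeholder assumptions for illustration, not values from this posting.

```python
#!/usr/bin/env python3
"""Minimal sketch of one Nutch 1.x crawl cycle on Hadoop.

Assumes Nutch was built in deploy mode, so each bin/nutch step runs
as a MapReduce job and every path below refers to HDFS. All paths
are placeholders -- adjust for the actual cluster layout.
"""
import subprocess

NUTCH_BIN = "/opt/nutch/runtime/deploy/bin/nutch"  # assumed install path
CRAWL_DIR = "crawl"        # HDFS dir that will hold crawldb + segments
SEED_DIR  = "urls"         # HDFS dir containing a seed-list text file
CRAWLDB   = f"{CRAWL_DIR}/crawldb"
SEGMENTS  = f"{CRAWL_DIR}/segments"


def run(*args: str) -> str:
    """Run a command, echo it, and return its stdout."""
    print("+", " ".join(args))
    return subprocess.run(args, check=True, capture_output=True,
                          text=True).stdout


def latest_segment() -> str:
    """Return the newest segment path; generate names segments by timestamp."""
    listing = run("hdfs", "dfs", "-ls", SEGMENTS)
    # Skip the "Found N items" header; keep the path column of each entry.
    paths = [line.split()[-1] for line in listing.splitlines()
             if line.startswith(("d", "-"))]
    return sorted(paths)[-1]


# 1. Seed the crawldb with the start URLs.
run(NUTCH_BIN, "inject", CRAWLDB, SEED_DIR)

for depth in range(3):  # crawl 3 link-hops deep; tune as needed
    # 2. Select a batch of URLs to fetch into a new segment.
    run(NUTCH_BIN, "generate", CRAWLDB, SEGMENTS)
    seg = latest_segment()
    # 3. Fetch and parse the segment (stored on HDFS as sequence files).
    run(NUTCH_BIN, "fetch", seg)
    run(NUTCH_BIN, "parse", seg)
    # 4. Fold newly discovered links back into the crawldb.
    run(NUTCH_BIN, "updatedb", CRAWLDB, seg)

print("Crawl data now lives under HDFS path:", CRAWL_DIR)
```

Once the loop finishes, "hdfs dfs -ls crawl/segments" should show one timestamped segment per iteration, each holding the fetched and parsed content as Hadoop sequence files on HDFS.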
Project ID: 16803276

About the project

6 proposals
Remote project
Active 6 yrs ago

6 freelancers are bidding an average of $244 USD for this job
Hi, I have experience setting up Nutch to store data in Hadoop/HBase. Please let me know if you are interested; I am available to start right away.
$555 USD in 10 days
4.0 (17 reviews)
5.7
Hello sir, I have experience in HDFS and Hadoop and have done several projects in this field. Ping me for more info.
$155 USD in 3 days
4.4 (5 reviews)
4.5
Hi, we have expertise in crawling data using HDFS and Hadoop. In regard to your job post, I would like to inform you that we have a team of skilled professionals with strong skills for this work. Please send over your queries so we can go through them and provide you with the right solution. Looking forward to hearing from you. Thanks, Ricky
$222 USD in 3 days
0.0 (1 review)
0.0
Dear Client,

With more than 10 years of experience in Big Data and Hadoop, I have delivered projects including:

- Implementation and ongoing administration of Hadoop infrastructure, including cluster maintenance and the creation and removal of nodes
- HDFS support and maintenance; cluster monitoring and troubleshooting
- Designing, implementing and maintaining security; working with application teams to install operating system and Hadoop updates, patches and version upgrades
- Deploying a secure platform based on Kerberos authentication and Apache Ranger authorization, covering Hadoop, HBase, Kafka, Spark, Ambari, Sqoop, Hortonworks, Cloudera, VMware, Elasticsearch and Cassandra
- Automated deployment of Spark and Flink clusters; Spark, Spark Streaming, Flink and Storm kernels
- Integrating Jupyter notebooks containing Python, R and Scala for deployment in Spark, Storm and Flink environments
- Introducing quota management across HDFS, HBase and Kafka
- Machine learning with TensorFlow: built a solution for a photo storage company that recognizes images from search words and runs on distributed computing such as Hadoop/Spark
- Setting up Eclipse projects and Maven dependencies for the required MapReduce libraries; coding, packaging and deploying the project on a Hadoop cluster to run MapReduce jobs
- Twitter sentiment analytics: collecting real-time data (JSON format) and performing sentiment analysis on continuously flowing streaming data

Regards, Suzan
$155 USD in 3 days
0.0 (0 reviews)
0.0

About the client

Azerbaijan
1.4 (1 review)
Payment method verified
Member since Mar 1, 2018
