Hadoop (Big Data) Developer

Keywords / Skills : Java, J2EE, Big Data, Hadoop, Hive, YARN, Spark, Impala, Kafka, Solr, Big Data Analytics, Big Data Developer, ETL, AWS, Azure, MongoDB, SQL

3 - 12 years
Posted: 2018-12-14

Industry: Any
Function: IT
Role: Software Engineer/Programmer, Team Leader/Technical Leader, Datawarehousing Consultants
Job Description
  • Ability to design and implement end-to-end solutions
  • Build utilities, user-defined functions, and frameworks to better enable data flow patterns
  • Research, evaluate, and utilize new technologies/tools/frameworks centered around Hadoop and other elements in the Big Data space
  • Define and build data acquisition and consumption strategies
  • Build and incorporate automated unit tests; participate in integration testing efforts
  • Will work as a Developer/Engineer on the Big Data Platform Engineering Team.
  • Will work closely with the development and application support teams to deliver solutions and resolve issues as part of Big Data infrastructure engineering and operations.
  • Will install, automate, and support Big Data components, e.g. Hive, YARN, Spark, Impala, Kafka, Solr, Oozie, Sentry, encryption, HBase, etc.
  • Design and implement distributed data processing pipelines using Spark, Hive, Sqoop, Python, and other tools and languages prevalent in the Hadoop ecosystem (see the batch sketch after this list)
  • 5+ years of development experience in Java/J2EE.
  • 1+ year of development experience in Scala or Python is preferred.
  • 2+ years of experience building and integrating secure REST APIs is a must.
  • 2+ years of experience with MVC architecture or the Spring Framework is an added advantage.
  • 2+ years of hands-on experience with XML and JSON is good to have.
  • Knowledge of Big Data environments, i.e. MapReduce, Kafka, and Spark, is a big plus.
  • Experience in Unix/Linux shell scripting is a must.
  • Experience with Git, Jira, and Jenkins is preferred.
  • Knowledge of the AngularJS framework is good to have.
  • 5+ years designing and implementing large-scale data loading, manipulation, and processing solutions, including 2+ years of team management.
  • High proficiency with data integration packages
  • High proficiency in ETL development
  • Experience in streaming integration development (see the streaming sketch after this list)
  • Cloud development experience (e.g. AWS, Azure)
  • Experience implementing solutions using Hadoop/NoSQL technologies (e.g. HDFS, HBase, Hive, Sqoop, Flume, Spark, MapReduce, Cassandra, MongoDB, etc.)
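To make the pipeline requirement concrete, here is a minimal PySpark sketch of the kind of batch job the role describes: read a Hive table, aggregate it, and write a curated table back. This is an illustration only; the database, table, and column names (raw_db.orders, order_ts, amount, curated_db.orders_daily) are hypothetical and not taken from the posting.

```python
# Hypothetical Spark/Hive batch pipeline sketch; all table and column
# names below are illustrative assumptions, not part of this posting.
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = (
    SparkSession.builder
    .appName("orders-daily-aggregate")   # assumed job name
    .enableHiveSupport()                 # lets Spark read/write Hive tables
    .getOrCreate()
)

# Read a raw Hive table, aggregate per day, write back to a curated table.
orders = spark.table("raw_db.orders")    # assumed source table
daily = (
    orders
    .groupBy(F.to_date("order_ts").alias("order_date"))
    .agg(
        F.count("*").alias("order_count"),
        F.sum("amount").alias("total_amount"),
    )
)
daily.write.mode("overwrite").saveAsTable("curated_db.orders_daily")  # assumed target
```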
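For the streaming integration bullet, a similarly hedged Spark Structured Streaming sketch that reads from Kafka and lands Parquet files; the broker address, topic name, and output/checkpoint paths are assumptions, and the job presumes the spark-sql-kafka connector is on the classpath (e.g. via --packages).

```python
# Hypothetical Kafka -> Spark Structured Streaming sketch; broker, topic,
# and paths are assumptions, not requirements stated in the posting.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("events-stream").getOrCreate()

events = (
    spark.readStream
    .format("kafka")
    .option("kafka.bootstrap.servers", "broker:9092")  # assumed broker
    .option("subscribe", "events")                     # assumed topic
    .load()
)

# Kafka delivers key/value as binary; cast the payload to a string for parsing.
parsed = events.selectExpr("CAST(value AS STRING) AS json_payload")

query = (
    parsed.writeStream
    .format("parquet")
    .option("path", "/data/events")                 # assumed output path
    .option("checkpointLocation", "/chk/events")    # required for recovery
    .start()
)
query.awaitTermination()
```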


Disclaimer: This job is posted on behalf of our client, and as such HackerTrail.com will recommend other job opportunities that align with your profile. We take your personal information and privacy seriously. Take a look at our Privacy Policy for details: https://www.hackertrail.com/public/privacy


About Company

"Using technology to find technologists"

HackerTrail is a curated marketplace exclusively for IT talent ranging from developers to infrastructure specialists to data scientists. Using clever technology and gamification, HackerTrail connects the right candidate to the right job opportunities with top companies across Southeast Asia.