*Note: This position is W2 only. C2C candidates, please do not apply.*
*Required Skills*
* 3-6 years of experience with the Hadoop stack and storage technologies: HDFS, MapReduce, YARN, Hive, Sqoop, Impala, Spark, Flume, Kafka, and Oozie
* Extensive knowledge of Big Data enterprise architecture (Cloudera preferred)
* Excellent analytical capabilities and a strong interest in algorithms
* Experienced in HBase, RDBMS, SQL, ETL and data analysis
* Experience with NoSQL technologies (e.g., Cassandra, MongoDB)
* Experienced in scripting (Unix/Linux) and job scheduling (Autosys)
* Experience with team delivery/release processes and cadence pertaining to code deployment and release
* Research-oriented, motivated, proactive self-starter with strong technical, analytical, and interpersonal skills
* A team player with strong verbal and written communication skills, capable of working with Architects, Developers, Business/Data Analysts, QA, and client stakeholders
* Versatile resource with balanced development skills and the business acumen to deliver quickly and accurately
* Proficient understanding of distributed computing principles.
Continuously evaluate new technologies, innovate, and deliver solutions for business-critical applications.
Job Type: Contract
Pay: $65.00 per hour
Experience level:
* 8 years
Experience:
* Hadoop, Apache Hive, Big Data: 5 years (Required)
* HBase, RDBMS, SQL, ETL and data analysis: 5 years (Required)
* HDFS, MapReduce, YARN, Sqoop, Impala, Spark: 5 years (Required)
* Flume, Kafka and Oozie: 5 years (Required)
Ability to Commute:
* Charlotte, NC 28213 (Preferred)
Ability to Relocate:
* Charlotte, NC 28213: Relocate before starting work (Preferred)
Work Location: Hybrid remote in Charlotte, NC 28213