We have an opening for Big Data Engineer-AWS.
Location - Plano, TX, or Herndon, VA
3-4 non-negotiable requirements
-Must have Big Data Tools: Hadoop/Spark/Kafka
-Python experience, will consider someone with Java
-Would prefer someone who has experience leading a squad.
•Highly experienced in the use of continuous integration tools (e.g., Jenkins, Hudson) and infrastructure automation tools (VMware, Puppet, Chef, Vagrant, Docker, etc.).
•Develops and maintains scalable data pipelines and builds out new API integrations to support continuing increases in data volume and complexity.
•Strong analytic skills related to working with unstructured datasets
•A successful history of manipulating, processing, and extracting value from large, disconnected datasets
•Build the infrastructure required to process data from a variety of data sources using SQL.
•Create data tools that help analytics and data science teams optimize data
•Experience working with either a MapReduce or an MPP system at any size/scale
•Experience with big data tools: Hadoop, Spark, Kafka, etc.
•5+ years of experience with object-oriented/functional scripting languages: Python, Java, C++, Scala, etc.
•AWS cloud services: EC2, EMR, RDS, Redshift
•SQL experience; NoSQL experience is a plus
- provided by Dice