
Big Data Engineer

Plano, TX 75023

Posted: 08/17/2020 | Industry: Data Development / Management | Job Number: 10653

Job Description


Big Data Engineer - Hadoop, SQL, Python, AWS

 

**W2 Contract - minimum 12 months - Plano, TX (Remote to start)**

 

The main function of the Big Data Engineer is to develop, evaluate, test, and maintain architectures and data solutions within our organization. The typical Big Data Engineer executes plans, policies, and practices that control, protect, deliver, and enhance the value of the organization's data assets.

 

Responsible for completing our client's transition to fully automated operational reports across different functions within Care (including repair operations, contact center, digital support, product quality, and finance), and for bringing the Care Big Data capabilities to the next level by designing and implementing a new analytics governance model, with an emphasis on architecting consistent root cause analysis procedures that lead to enhanced operational and customer engagement results.

 

Job Responsibilities:
  • Design, construct, install, test and maintain highly scalable data management systems.
  • Ensure systems meet business requirements and align with industry practices.
  • Design, implement, automate, and maintain large-scale enterprise data ETL processes.
  • Build high-performance algorithms, prototypes, predictive models and proof of concepts.

 

Day to Day duties:
  • Data collection – gather information and required data fields.
  • Data manipulation – join data from multiple sources and build ETLs that feed Tableau for reporting purposes (see the sketch after this list).
  • Measure & Improve - Implement success indicators to continuously measure and improve, while providing relevant insight and reporting to leadership and teams.
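
For illustration only, here is a minimal sketch of the kind of ETL join described above, written in PySpark. The table names, file paths, columns, and join key are assumptions, not details from this posting; the actual sources and the hand-off to Tableau would depend on the client environment.

```python
# Illustrative sketch only -- the table, paths, and columns below are
# placeholders, not details from this posting.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("care-ops-etl-sketch")
    .enableHiveSupport()  # assumes a configured Hive metastore
    .getOrCreate()
)

# Two hypothetical sources: a Hive table and CSV files landed on HDFS.
repairs = spark.table("care.repair_operations")
contacts = spark.read.option("header", "true").csv("hdfs:///data/care/contact_center/")

# Join on a shared key and keep only the fields the report needs.
joined = (
    repairs.join(contacts, on="case_id", how="left")  # "case_id" is an assumed key
           .select("case_id", "repair_status", "agent_id", "resolution_time")
)

# Write an extract for Tableau to pick up (CSV chosen only for simplicity).
(
    joined.coalesce(1)
          .write.mode("overwrite")
          .option("header", "true")
          .csv("hdfs:///reports/care/ops_daily/")
)
```

In practice the output format would match however Tableau connects in this environment (a Hive table, a published extract, a CSV drop, etc.).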

 

MUST HAVE QUALIFICATIONS
  • Analytical and problem-solving skills applied to the Big Data domain
  • Proven understanding and hands-on experience with Hadoop, Hive, Pig, Impala, and Spark
  • 5-8 years of Python or Java/J2EE development experience
  • 3+ years of demonstrated technical proficiency with Hadoop and big data projects
  • Top technical skills:
      • Hadoop
      • SQL
      • Python/Shell scripting – exchanging data from UNIX and other sources into Hadoop; all Hive tables we create are pointed to the files in Hadoop (see the sketch after this list)
      • AWS – ideal, but not a must-have; some data comes from AWS S3
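
As a rough illustration of the Python/shell scripting bullet above, here is a minimal sketch, assuming the standard hdfs and beeline command-line tools are available, that lands a UNIX file in HDFS and points an external Hive table at it. Every path, table name, column, and connection string is a placeholder, not a detail from this posting.

```python
# Illustrative sketch only -- every path, table, column, and URL below is a
# placeholder, not a detail from this posting.
import subprocess

LOCAL_FILE = "/data/exports/repair_ops_20200817.csv"  # hypothetical UNIX source file
HDFS_DIR = "/warehouse/care/repair_ops/"              # hypothetical HDFS landing dir

# 1. Exchange the file from the UNIX host into Hadoop (HDFS).
subprocess.run(["hdfs", "dfs", "-mkdir", "-p", HDFS_DIR], check=True)
subprocess.run(["hdfs", "dfs", "-put", "-f", LOCAL_FILE, HDFS_DIR], check=True)

# 2. Point an external Hive table at the files now sitting in HDFS, matching
#    the note above that Hive tables are pointed to files in Hadoop.
ddl = f"""
CREATE EXTERNAL TABLE IF NOT EXISTS care.repair_ops (
  case_id STRING,
  repair_status STRING,
  resolution_time INT
)
ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
STORED AS TEXTFILE
LOCATION '{HDFS_DIR}'
"""

# Run the DDL through beeline; the JDBC URL is a placeholder.
subprocess.run(
    ["beeline", "-u", "jdbc:hive2://hiveserver:10000/default", "-e", ddl],
    check=True,
)
```

Data arriving from AWS S3 could be staged into the same HDFS location first (for example with distcp or the AWS CLI) before the table is queried, though the exact mechanism would depend on the client's setup.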

 

Education/Experience:
  • Bachelor's degree in a technical field such as computer science, computer engineering, or a related discipline required.
  • 5-7 years of experience required.
  • Process certification, such as Six Sigma, CBPP, BPM, ISO 20000, ITIL, or CMMI.

 

Additional Skills:
  • Ability to work as part of a team, as well as work independently or with minimal direction.
  • Excellent written, presentation, and verbal communication skills.
  • Ability to collaborate with data architects, modelers, and IT team members on project goals.
  • Strong PC skills including knowledge of Microsoft SharePoint.

Meet Your Recruiter

Korynn Stark
Technical Recruiter

To me, you’re more than just a resume – I work to show clients the whole YOU! I also offer guidance on your job search by providing market intelligence and honest feedback to help you work towards your goals.
