Big Data Engineer
6625 Excellence Way Plano, TX 75023
Big Data Engineer - Hadoop, SQL, Python, AWS
**W2 Contract - minimum 12 months - Plano, TX (Remote to start)**
The main function of the Big Data Engineer is to develop, evaluate, test, and maintain architectures and data solutions within our organization. The typical Big Data Engineer executes plans, policies, and practices that control, protect, deliver, and enhance the value of the organization's data assets.
Responsible for completing our client's transition to fully automated operational reports across different functions within Care (including repair operations, contact center, digital support, product quality, and finance), and for bringing the Care Big Data capabilities to the next level by designing and implementing a new analytics governance model, with emphasis on architecting consistent root-cause-analysis procedures that enhance operational and customer-engagement results.
- Design, construct, install, test and maintain highly scalable data management systems.
- Ensure systems meet business requirements and industry practices.
- Design, implement, automate and maintain large scale enterprise data ETL processes.
- Build high-performance algorithms, prototypes, predictive models and proof of concepts.
Day to Day duties:
- Data collection – gather information and required data fields.
- Data manipulation – join data from multiple data sources and build ETLs that feed Tableau for reporting purposes.
- Measure & Improve - Implement success indicators to continuously measure and improve, while providing relevant insight and reporting to leadership and teams.
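The data-manipulation duty above, joining multiple sources into a single extract for Tableau, can be sketched in plain Python. This is a minimal illustration with hypothetical field names (`repair_id`, `site`, `handle_time_min`), not the client's actual pipeline; at scale the same join would typically run in Hive or Spark rather than in-process.

```python
import csv
import io

# Two hypothetical source extracts (in practice these would be HDFS files
# or the results of Hive/Impala queries).
repairs_csv = "repair_id,site\nR1,Plano\nR2,Austin\n"
times_csv = "repair_id,handle_time_min\nR1,42\nR2,35\n"

def join_sources(left_csv: str, right_csv: str, key: str) -> list[dict]:
    """Inner-join two CSV sources on a shared key column."""
    left = list(csv.DictReader(io.StringIO(left_csv)))
    right = {row[key]: row for row in csv.DictReader(io.StringIO(right_csv))}
    return [{**row, **right[row[key]]} for row in left if row[key] in right]

joined = join_sources(repairs_csv, times_csv, "repair_id")

# Write the joined extract in a flat shape Tableau can consume.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["repair_id", "site", "handle_time_min"])
writer.writeheader()
writer.writerows(joined)
print(out.getvalue())
```

The same pattern (key the smaller source into a dict, stream the larger one through it) generalizes to any pair of sources that share a join key.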
MUST HAVE QUALIFICATIONS
- Analytical and problem-solving skills applied to the Big Data domain
- Proven understanding of and hands-on experience with Hadoop, Hive, Pig, Impala, and Spark
- 5-8 years of Python or Java/J2EE development experience
- 3+ years of demonstrated technical proficiency with Hadoop and big data projects
- Top technical skills –
- Python/shell scripting (exchanging data between UNIX and other sources and loading it into Hadoop; all Hive tables we create point to the files in Hadoop)
- AWS – ideal but not a must-have; some data comes from AWS S3
- Bachelor's degree in computer science, computer engineering, or a related technical field required.
- 5-7 years of experience required.
- Process certification such as Six Sigma, CBPP, BPM, ISO 20000, ITIL, or CMMI.
- Ability to work as part of a team as well as independently with minimal direction.
- Excellent written, presentation, and verbal communication skills.
- Ability to collaborate with data architects, modelers, and IT team members on project goals.
- Strong PC skills including knowledge of Microsoft SharePoint.
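The Python/shell scripting skill listed above, landing UNIX-sourced files in Hadoop and pointing Hive tables at them, can be illustrated with a short sketch. The paths, table name, and columns here are hypothetical, and the commands are only constructed as strings rather than executed, since running them requires a live cluster.

```python
# Sketch: build the shell command that lands a local file in HDFS, plus the
# Hive DDL for an external table over that directory.
# All paths and names below are hypothetical examples.

def hdfs_put_cmd(local_path: str, hdfs_dir: str) -> str:
    """Shell command to copy a local UNIX file into an HDFS directory."""
    return f"hdfs dfs -put -f {local_path} {hdfs_dir}/"

def hive_external_table_ddl(table: str, columns: dict, hdfs_dir: str) -> str:
    """Hive DDL for an external table pointed at files already in HDFS."""
    cols = ", ".join(f"{name} {ctype}" for name, ctype in columns.items())
    return (
        f"CREATE EXTERNAL TABLE IF NOT EXISTS {table} ({cols}) "
        f"ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' "
        f"LOCATION '{hdfs_dir}'"
    )

put = hdfs_put_cmd("/data/exports/repairs.csv", "/warehouse/care/repairs")
ddl = hive_external_table_ddl(
    "care.repairs",
    {"repair_id": "STRING", "site": "STRING", "handle_time_min": "INT"},
    "/warehouse/care/repairs",
)
print(put)
print(ddl)
```

Using `EXTERNAL` tables keeps the data files under HDFS control, so dropping the table removes only the metadata, which matches the posting's note that Hive tables point at files already in Hadoop.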