Job Description
PySpark Developer: Interact with business users to understand requirements and troubleshoot issues. Implement projects based on functional specifications.
Responsibilities
- Design, develop, test, deploy, support, and enhance data integration solutions that seamlessly connect and integrate enterprise systems within our Enterprise Data Platform.
- Innovate data integration on our Apache Spark-based platform to ensure technology solutions leverage cutting-edge integration capabilities.
- Facilitate requirements-gathering and process-mapping workshops; review business/functional requirement documents; author technical design documents, test plans, and scripts.
- Assist with implementing standard operating procedures, facilitate review sessions with functional owners and end-user representatives, and leverage technical knowledge and expertise to drive improvements.
Requirements
- 2+ years of experience with data integration on AWS Cloud using Apache Spark, EMR, Glue, Kafka, Kinesis, and Lambda within S3, Redshift, RDS, and MongoDB/DynamoDB ecosystems
- Strong hands-on experience in Python development, especially with PySpark in an AWS Cloud environment
- Ability to design, develop, test, deploy, maintain, and improve data integration pipelines
- Experience with Python and common Python libraries
- Strong analytical database experience: writing complex queries, query optimization, debugging, user-defined functions, views, indexes, etc.
- Strong experience with source control systems such as Git and Bitbucket, and with Jenkins and other build/continuous-integration tools
- Databricks or Apache Spark experience is a plus
Benefits
- Career Development
- Competitive Salary
- Flexible Hours
- Health Insurance
- Performance-based Rewards