Are you passionate about transforming and shaping leading organisations? Do you enjoy solving complex business problems that require in-depth knowledge of organisational objectives? Would you like to be part of the digital team and a major contributor on large deals that will have a real impact on the company's success?
We offer you an environment where everyone's opinion matters, where ideas are openly shared, where change is the norm, and where you will work with some of the fastest-moving, most widely recognised brands in the world.
What will you do?
We are looking for enthusiastic Data Engineers with 2-10 years of experience in the Data Engineering space and an eagerness to learn and deliver large, complex Digital Transformation programmes across the UK & Ireland. As a Data Engineer, you will have worked in an Agile environment on data integration into cloud data warehouses or data lakes, programming, APIs, and related areas.
Your key responsibilities will be:
- Identify and analyse user requirements
- Prioritise, assign and run tasks throughout the software development life cycle
- Develop new applications
- Write well-designed, efficient code
- Review, test and debug team members' code
- Ensure our applications are secure and up-to-date
What experience are we looking for?
- Highly proficient with Talend Data Integration, Talend Big Data and Talend Data Quality.
- Experience in setting up the ETL architecture for the data platform on cloud provider infrastructure, preferably AWS or the Cloudera platform.
- Experience in setting up the Talend TAC server, managing the infrastructure tools, and identifying and troubleshooting Talend infrastructure issues.
- Experience with the on-cloud version of Talend would be an advantage.
- Extensive experience building ETL pipelines from sources such as RDBMSs, NoSQL stores, Kafka, flat files, and other batch or streaming sources.
- Experience with AWS DMS and AWS S3
- Strong experience working with data formats such as Parquet, Avro, CSV, JSON and XML, as well as unstructured formats, in batch and real-time environments.
- Strong knowledge of the Spark big data processing framework, with the ability to tune ETL pipelines built on it.
- Strong understanding of distributed data processing and MPP databases.
- Strong understanding of relational database concepts and technology: data modelling (dimensional/Data Vault), SQL, and query optimisation.
- Knowledge of NoSQL databases such as HBase or MongoDB would be an advantage.
- Ability to understand an existing ETL tool, its business logic and rules, and its data sources, and to convert existing ETL jobs to Talend pipelines.
- Familiarity with CI/CD processes and tools, including Git, Jenkins and Jira; the candidate will be required to set up the complete CI/CD and development process within the project.
- Experience with setup and configuration, establishing design best practices, coding best practices and configuration management processes for the data engineering team, as well as experience with BI tools.
- Experience working on development, support, maintenance and enhancement projects.