Roles & Responsibilities:
▪ Understanding the data requirements of all stakeholders, consolidating them, and rationalizing redundancies.
▪ Leading a technical data integration team with a solution-development mindset.
▪ Translating complex business requirements into technical solutions.
▪ Designing and developing the right modes of access for different needs – from APIs to direct database access – depending on the access level of the users and the purpose of use.
▪ Turning around solutions quickly but thoughtfully, balancing speed to market with quality, longevity, and scalability.
▪ Working with the Business Intelligence team to develop and optimize a centralized data warehouse supporting various analytical data cubes.
▪ Creating and maintaining data pipelines (ETL/ELT) that blend disparate data sources; understanding existing pipelines and transformations in order to maintain and enhance them.
▪ Diagnosing performance issues and providing recommendations to improve platform scale.
▪ Maintaining technical standards and documents (data dictionary, development style guide, ERDs, etc.).
▪ Designing appropriate data structures so that data is stored securely and accessed easily – covering both the technology stack and the data structures themselves.
Ideal Candidate for the Position:
▪ At least 2 years of experience with Azure Data Factory (ADF), creating multiple pipelines and activities for full and incremental (delta) data loads into Azure Data Lake Storage and Azure Synapse SQL pools (formerly Azure SQL DW).
▪ Bachelor’s/Master’s degree in a relevant field (e.g. Computer Science, Software Engineering, Data Science, AI/ML).
▪ 5+ years of experience developing, deploying, and monitoring end-to-end data integration pipelines, with extensive knowledge of evaluation metrics and best practices.
▪ 4+ years of experience in data warehouse/data lake architecture and development.
▪ 3+ years of experience in data modeling and architecture, with strong programming skills.
▪ 3+ years’ experience with cloud-native solutions (AWS, Azure, or GCP).
▪ Experience with ETL/ELT, data pipelines, data quality, and blueprint development.
▪ Cloud-native data integration and analytics services (AWS Glue, S3, Lambda, EMR, Azure Synapse, Azure Data Factory, Azure Data Lake, Databricks, etc.).
▪ Experience with data APIs, embedded analytics, and event-driven microservices architecture.
▪ Experience in SQL, Apache Spark, Python, and Java/Scala.
▪ A strong DataOps, DevOps, MLOps, and data & analytics background.
▪ Understanding of data structures, data modeling, and data architecture.
▪ Must be able to implement integration solutions in Microsoft Azure.
▪ Working experience with Databricks and the Synapse Spark framework, as well as experience optimizing Spark queries.
▪ Experience with Azure Data Lake Storage and working with Parquet files and partitions.
▪ Must have experience building data pipelines in Azure Data Factory.
▪ Must have experience ingesting data from different sources using Azure Data Factory.
▪ Must have experience transforming data using Azure Databricks.
▪ Must have experience with Azure DevOps CI/CD and with story, bug, and issue management.
▪ Must be able to analyse and troubleshoot data-related issues and assist in their resolution.
▪ Azure experience in designing and implementing redundant systems, policies, and procedures for disaster recovery and data archiving to ensure effective protection and integrity of data assets.
▪ Manage and resolve database access and performance issues.
▪ Sound programming skills in languages such as Python and Java; solid expertise in at least one of them is required.
▪ Must be able to engage with customers, translate business requirements into technical specifications, and work in Agile ways.
▪ Strong experience in data management, integration, governance, and visualization.
▪ Should have a strong perspective on how to create Data-as-a-Service internally within the organization.
▪ Familiarity and experience with reporting technologies (e.g. Tableau, Power BI).
▪ Should have handled a delivery team of 6–8 resources and have worked in a similar capacity for 2 years.
▪ Analytical Problem-Solving: Approaching high-level data challenges with a clear eye on what is important; employing the right approaches and methods to make the best use of time and people.
▪ Effective Communication: Carefully listening to management, data analysts, and relevant staff to arrive at the best data design; explaining complex concepts to non-technical colleagues.
▪ Professional working proficiency in English.
▪ Industry Knowledge: Understanding how your chosen industry functions and how data are collected, analysed, and utilised; maintaining flexibility in the face of big data developments.
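Several of the requirements above refer to full versus incremental (delta) loads in Azure Data Factory. Purely as an illustration of the underlying pattern – not part of the role description, and with all names hypothetical – a watermark-based delta load can be sketched in plain Python:

```python
from datetime import datetime

# Hypothetical source rows: (id, value, modified_at) tuples standing in
# for a source table that ADF or Spark would query.
SOURCE = [
    (1, "a", datetime(2024, 1, 1)),
    (2, "b", datetime(2024, 1, 5)),
    (3, "c", datetime(2024, 1, 9)),
]

def incremental_load(source, target, watermark):
    """Copy only rows modified after `watermark` (a delta load),
    returning the new high-watermark for the next run."""
    delta = [row for row in source if row[2] > watermark]
    target.extend(delta)
    # Advance the watermark to the newest row seen; if nothing
    # changed, keep the previous watermark.
    return max((row[2] for row in delta), default=watermark)

target = []
wm = datetime(2024, 1, 2)          # watermark persisted from the previous run
wm = incremental_load(SOURCE, target, wm)
# Only the rows modified after the stored watermark are copied;
# a full load would copy every row on every run.
```

A real pipeline would persist the watermark (e.g. in a control table) and parameterize the source query with it, but the copy-only-what-changed logic is the same.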
Interested candidates, please share your updated CV in MS Word format at firstname.lastname@example.org…
Thanks in Advance…
To apply for this job please visit akscellenceinfo.com.