Digihelic Solutions Private Limited
DigiHelic Solutions - Azure Databricks Engineer
Job Location
Noida, India
Job Description
Job Title: Data Engineer
Experience: 5 Years
Location: Noida, India
Work Mode: Hybrid

Job Description:
We are seeking a skilled and experienced Data Engineer to join our team in Noida. The ideal candidate will have a strong background in building and maintaining data pipelines, with expertise in cloud platforms (AWS or Azure), ETL processes, SQL, Databricks, Python/Scala, PySpark/Spark, and experience with Apache NiFi or Hive. This role requires a problem-solver who can design and implement robust data solutions to support our business needs.

Responsibilities, Deliverables, and Expectations:

Data Pipeline Development:
- Design, develop, and maintain scalable and efficient data pipelines using ETL/ELT methodologies.
- Implement data ingestion, transformation, and loading processes from various data sources.
- Ensure data quality, consistency, and reliability throughout the data lifecycle.

Cloud Platform Expertise (AWS/Azure):
- Utilize cloud-based data services on either AWS or Azure to build and manage data infrastructure.
- Deploy and manage data engineering solutions in a cloud environment.
- Optimize cloud resources for performance and cost efficiency.

Database Management and SQL:
- Design and implement database schemas and data models.
- Write complex SQL queries for data extraction, transformation, and analysis.
- Optimize database performance and ensure data integrity.

Databricks and Spark Development:
- Develop data processing applications using Databricks and Spark.
- Use PySpark or Scala to implement data transformations and analytics.
- Optimize Spark jobs for performance and scalability.

Programming and Scripting:
- Write and maintain Python or Scala code for data processing and automation.
- Develop scripts for data validation, monitoring, and error handling.

Data Integration and Orchestration:
- Implement data integration solutions using Apache NiFi or Hive.
- Design and implement data orchestration workflows to automate data pipelines.
- Monitor and troubleshoot data integration processes.

Data Quality and Governance:
- Implement data quality checks and validation processes.
- Ensure compliance with data governance policies and standards.
- Document data pipelines and data lineage.

Collaboration and Communication:
- Collaborate with data scientists, analysts, and other stakeholders to understand data requirements.

Required Expertise:
- Hands-on experience with Databricks and Spark.
- Programming experience in Python or Scala.
- Experience with PySpark or Spark.
- Experience with Apache NiFi or Hive.
- Ability to design and implement efficient data pipelines.
- Strong understanding of database concepts and data warehousing.
- Experience in optimizing data processing and performance.
- Ability to work in a fast-paced and dynamic environment.

(ref:hirist.tech)
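To give a flavor of the "scripts for data validation, monitoring, and error handling" mentioned above, the following is a minimal, illustrative sketch in plain Python. All names here (the Record fields and the validate function) are hypothetical and not part of the role description:

```python
# Hypothetical example of a simple data-validation pass over ingested rows:
# reject records with a missing key or an invalid amount, and collect the
# errors for downstream monitoring. Field names are illustrative only.
from dataclasses import dataclass
from typing import List, Optional, Tuple

@dataclass
class Record:
    order_id: Optional[str]
    amount: Optional[float]

def validate(records: List[Record]) -> Tuple[List[Record], List[Tuple[int, str]]]:
    """Split records into valid rows and (index, reason) error entries."""
    valid, errors = [], []
    for i, rec in enumerate(records):
        if rec.order_id is None:
            errors.append((i, "missing order_id"))
        elif rec.amount is None or rec.amount < 0:
            errors.append((i, "invalid amount"))
        else:
            valid.append(rec)
    return valid, errors

rows = [Record("A1", 10.0), Record(None, 5.0), Record("A3", -2.0)]
good, bad = validate(rows)
```

In a real pipeline the same check would typically run inside a PySpark job or a Databricks notebook rather than over an in-memory list.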
Location: Noida, India
Posted Date: 5/7/2025
Contact Information
Contact: Human Resources, Digihelic Solutions Private Limited