Job Description:

Sr. Data Engineer
Location: Warren, NJ (Onsite)
Duration: 12 months.
Position type: W2 contract
Visa: Any visa status; independent candidates only.

Mandatory skills: Data Engineer, Databricks, SQL, Python, Spark, Azure Data Factory, Snowflake, Amazon Redshift, Google BigQuery, Azure Synapse, AWS, GCP, CI/CD pipelines, P&C (Property & Casualty) domain experience

Summary:

We are seeking a highly skilled and experienced Data Engineer to join our team. The ideal candidate will have a strong background in data processing and transformation using Databricks, proficiency in SQL and Python, experience with Azure Data Factory, and a solid understanding of data warehouse migration strategies.

Responsibilities:

  • Design, develop, and maintain scalable data pipelines using Databricks and integrate them with Azure Data Factory (ADF) workflows.
  • Develop and optimize SQL queries and data models for performance and scalability.
  • Lead data warehouse migration projects, including planning, execution, and validation of data transfer from legacy systems to modern platforms.
  • Work with cloud data warehouses such as Snowflake, Amazon Redshift, Google BigQuery, or Azure Synapse.
  • Collaborate with cross-functional teams to gather requirements, define data architecture, and implement solutions that meet business needs and adhere to data governance standards.
  • Write complex data transformation logic using Python and Spark.
  • P & C domain experience preffered
  • Monitor and ensure the stability of data processing jobs, troubleshoot issues, and perform root cause analysis.
  • Implement data quality checks, data cleansing procedures, and ensure data integrity.
  • Document all data engineering processes and create standard operating procedures.
  • Stay current with industry trends and best practices in data engineering and contribute to continuous improvement of the data platform.

Qualifications:

  • Bachelor's degree in Computer Science, Engineering, or a related field, or equivalent work experience.
  • Minimum of 3 years of experience in a data engineering role with a focus on Databricks, data pipelines, and data warehouse technologies.
  • Proven experience with SQL, Python, and Spark.
  • Hands-on experience with Azure Data Factory and familiarity with other Azure services.
  • Experience with data warehouse migration, including schema conversion, data replication, and ETL processes.
  • Strong understanding of data modeling, data warehousing concepts, and big data architectures.
  • Excellent problem-solving skills and the ability to work independently or as part of a team.
  • Strong communication skills and the ability to collaborate effectively with both technical and non-technical stakeholders.

Preferred Skills:

  • Certifications in Databricks and Azure Data Factory.
  • Experience with additional cloud platforms (e.g., AWS, GCP) and data integration tools.
  • Knowledge of data governance, security, and compliance best practices.
  • Experience with CI/CD pipelines and version control systems like Git.

             
