Role: Lead Data Engineer
Bill Rate: $105/hour C2C
Location: Remote
Duration: 12+ months/ long-term
Interview Criteria: Telephonic + Skype
Direct Client Requirement
Job Details: Lead Data Engineer
Do you love building and pioneering in the technology space? Do you enjoy solving complex business problems in a fast-paced, collaborative, inclusive, and iterative delivery environment? At Capital One, you'll be part of a big group of makers, breakers, doers and disruptors, who sol
Role: Senior Data Engineer
Bill Rate: $77/hour C2C
Location: Chicago, IL
Duration: 12+ months/ long-term
Interview Criteria: Telephonic + Skype
Direct Client Requirement
Summary
· We are looking for a Senior Data Engineer. This role is a leader in data services for the Analytics Center of Excellence. It will work closely with all within the Enterprise Analytics Platform to ensure the platform is developed, delivered, maintained, and supported to the highest standards.
Role: Senior Data Engineer
Bill Rate: $104/hour C2C
Location: Remote
Duration: 12+ months/ long-term
Interview Criteria: Telephonic + Skype
Direct Client Requirement
Job Details
· Design, develop, and maintain real-time or batch data pipelines to process and analyze large volumes of data. Design and develop programs and tools to support ingestion, curation, and provisioning of complex first-party and third-party data for analytics, reporting, and data science. Design a
Role: Data Engineer
Bill Rate: $86/hour C2C
Location: Remote
Duration: 12+ months/ long-term
Interview Criteria: Telephonic + Skype
Direct Client Requirement
Responsibilities:
Must Have Skills:
· Spark
· Kafka
· Integrating Data from multiple sources into one
· Data design, building data services, some warehousing
· SQL - creating structures, tables, building services around that
· R / SSAS
· Azure SQL, Google BigQuery
Note: If you are interested, please share y
Required Qualifications:
Bachelor’s degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in the specialty in lieu of every year of education.
At least 4 years of experience in Information Technology.
At least 2 years of hands-on experience developing Scala/Spark pipelines.
At least 2 years of hands-on experience handling large amounts of big data using PySpark ecosystems.
At least 1 year of
The Client is seeking a highly skilled Agentic Data Engineer to design, develop, and deploy data pipelines that leverage agentic AI to solve real-world problems. The ideal candidate will have experience in designing data processes to support agentic systems, ensuring data quality, and facilitating interaction between agents and data. Responsibilities: Designing and developing data pipelines for agentic systems, developing robust data flows to handle complex interactions between AI agents an
Responsibilities:
· Build large-scale batch and real-time data pipelines with data processing frameworks in the Azure cloud platform.
· Design and implement highly performant data ingestion pipelines from multiple sources using Azure Databricks.
· Direct experience building data pipelines using Azure Data Factory and Databricks.
· Develop scalable and re-usable frameworks for ingestion of datasets.
· Lead design of ETL, data integration, and data migration.
· Partner with architects, engineers, information a
The Client is seeking a highly skilled Agentic Data Engineer to design, develop, and deploy data pipelines that leverage agentic AI to solve real-world problems. The ideal candidate will have experience in designing data processes to support agentic systems, ensuring data quality, and facilitating interaction between agents and data. Responsibilities: Guiding and me
Pyspark Developer
Raleigh, North Carolina, United States
Richardson, Texas, United States
Required Qualifications:
Bachelor's degree or foreign equivalent required from an accredited institution. Will also consider three years of progressive experience in a specialty in lieu of every year of education.
At least 4 years of experience in Information Technology.
At least 3 years of PySpark experience.
At least 2 years of experience in Hadoop, Spark, Python & PySpark.
Responsibilities:
· Lead the design, development, and implementation of data solutions using AWS and Snowflake.
· Collaborate with cross-functional teams to understand business requirements and translate them into technical solutions.
· Develop and maintain data pipelines, ensuring data quality, integrity, and security.
· Optimize data storage and retrieval processes to support data warehousing and analytics.
· Provide technical leadership and mentorship to junior data engineers.
· Work closely with s