Job Description:

Big Data Engineer with Java

Phoenix, AZ

Onsite (Hybrid)

Contract W2

24+ Months

7+ Years

Must-Have:

Minimum 7 years of experience as a Big Data Engineer or Java Developer, with a strong focus on big data technologies.

Proficient in the Java programming language, including experience with object-oriented design, data structures, and concurrency.

Hands-on experience with big data technologies, such as Apache Hadoop, Apache Spark, Apache Kafka, and Apache Hive.

Knowledge of data processing patterns and architectural patterns used in big data systems, such as batch processing, stream processing, and lambda architecture.

Familiarity with data modeling techniques and experience in designing efficient data storage solutions, including relational databases, NoSQL databases, and data lakes.

Strong analytical and problem-solving skills, with the ability to identify and address performance bottlenecks and data quality issues.

Excellent communication and collaboration skills, with the ability to work cross-functionally and explain technical concepts to non-technical stakeholders.

Experience with cloud-based big data platforms, such as Amazon EMR, Google Cloud Dataproc, or Microsoft Azure HDInsight.

Familiarity with data visualization and business intelligence tools, such as Tableau, Power BI, or Looker.

Knowledge of machine learning and data science techniques, and experience in integrating them into big data pipelines.

Exposure to DevOps practices and tools, such as Docker, Kubernetes, or CI/CD pipelines.

Company Overview:

We are a leading data-driven organization that leverages the power of big data and advanced analytics to drive business insights and innovation.

We are seeking a talented Big Data Engineer with expertise in Java to join our team and help us unlock the full potential of our data assets.

Responsibilities:

Design and implement scalable and efficient big data processing pipelines using Java and big data technologies.

Develop and maintain batch and real-time data ingestion, transformation, and processing workflows.

Integrate and optimize big data technologies, such as Apache Hadoop, Apache Spark, Apache Kafka, and Apache Hive, to create a robust and high-performing data ecosystem.

Collaborate with data scientists, analysts, and business stakeholders to understand data requirements and translate them into technical solutions.

Ensure data quality, consistency, and security throughout the data lifecycle, implementing best practices for data governance and compliance.

Optimize the performance and scalability of big data systems, leveraging techniques like partitioning, indexing, and caching.

Automate and streamline data processing tasks using tools like Apache Airflow, Jenkins, or custom scripts.

Stay up-to-date with the latest trends and advancements in the big data and Java technology landscape, and recommend improvements to the company's data architecture.


Riya Raj

Account Manager


7730 E Greenway #201 Scottsdale, AZ 85260
