Graph DB, ETL framework, data warehousing, NoSQL, PySpark & Apache Spark experience
- Python
- Airflow DAGs with Flink
- Cloud: AWS, S3, Iceberg experience
- A background in computer science, engineering, mathematics, or a similar quantitative field, with a minimum of 2 years of professional experience
- Experience using Spark, Kafka, Hadoop, or similar technologies
- Experience implementing data pipelines in Python
- Experience with workflow scheduling/orchestration tools such as Airflow or Kubernetes
- Experience querying APIs using JSON and XML
- Experience with Unix-based command-line interfaces and Bash scripting
- Experience with Big Data/Hadoop platforms
If interested, please send your updated CV or reach out to me.