Job Description:
Duties:
The analytics engineer acts as a bridge between a data engineer and a data analyst. This position
is primarily responsible for modeling raw data into curated, reusable, trusted data sets that
power analytics across the enterprise. These data sets will serve as the single source of truth for
data and enable self-service analytics. In addition to the development of data models, this role is
responsible for maintaining data quality within these data sets via the use of monitoring, testing,
and automation. An additional component of the role is to improve the effectiveness of data
analysts and data scientists. This may include providing technical expertise in query development,
extending data models via the addition of new metrics, and/or consulting on software development
practices. The Analytics Engineer owns the entire data workflow for their domain:
data pipeline development, ELT performance, timely loading of data sets, and maintenance.
This role will work within various business units and partner with data analysts and data scientists
to obtain a deep understanding of operational data and develop scalable data products which
empower data-driven decision making across the enterprise.
1. Collaborate with business subject matter experts, data analysts, and data scientists to
identify opportunities to develop well-defined, integrated, reusable data
sets that power analytics.
2. Codify reusable data access patterns to speed up time to insights.
3. Perform logical and physical data modeling with an agile mindset.
4. Build automated, scalable, test-driven ELT pipelines.
5. Utilize software development practices such as version control via Git, CI/CD, and release
management.
6. Build data products using various visualization, BI, and data science tools.
7. Collaborate with Data Engineers, DevOps engineers and architects on improvement
opportunities for DataOps tools and frameworks.
8. Implement data quality frameworks and data quality checks.
9. Help define the analytical product roadmap to drive business goals and superior quality
outcomes.
10. Work with data scientists, statisticians, and machine learning engineers to
implement and scale advanced algorithms to solve health care, operational, and quality
challenges.
11. Work independently and effectively manage one's time across multiple priorities and
projects.
12. Make recommendations about platform adoption, including technology integrations,
application servers, libraries, and frameworks.
13. Participate in a shared production on-call support model.
14. Be a critical part of a scrum team in an agile environment, ensuring the team
successfully meets its deliverables each sprint.
Skills:
Prior experience working with EPIC/CLARITY datasets is preferred.
Strong SQL, Data Modeling and Data Warehousing fundamentals.
Experience with data integration tools: DBT (must have), Informatica PowerCenter, MS Integration Services, etc.
Experience working with Qlik Sense (must have).
Experience working with relational databases (Snowflake experience is preferred).
Experience with software development practices: version control (GitHub), code review, CI/CD.
Good hands-on experience with the Linux (RHEL/Debian) operating system.
Ability to code in other scripting languages such as Python, Bash, Groovy, etc.
Experience utilizing Agile methodology for development.