September 18, 2024
Electric Energy Jobs

Data Engineer

Organization:
Ontario Power Generation
Region:
Canada, Ontario, Oshawa
End of contest:
September 18, 2024
Type:
Full time
Category:
Data
Description
Req ID:  48424

JOB OVERVIEW  

Ontario Power Generation (OPG) is looking for a dynamic, strategic, and results-driven professional to join our team in the role of Data Developer.

Reporting to the Senior Manager, IT Programs, the Data Developer is primarily responsible for building and supporting the data-driven applications that enable innovative, customer-centric digital experiences. You will work as part of a cross-discipline agile team whose members help each other solve problems across all business areas. You will build reliable, supportable, and performant data lake and data warehouse products to meet the organization's need for data to drive reporting, analytics, applications, and innovation. You will employ best practices in development, security, and accessibility to achieve the highest quality of service for our customers.

KEY ACCOUNTABILITIES

  • Build and productionize modular and scalable data ELT/ETL pipelines and data infrastructure leveraging the wide range of data sources across the organization
  • Implement curated common data models that offer an integrated, business-centric single source of truth for business intelligence, reporting, and downstream system use, in collaboration with the Data Architect
  • Work closely with infrastructure and cyber teams to ensure data is secure in transit and at rest
  • Clean, prepare and optimize datasets for performance, ensuring lineage and quality controls are applied throughout the data integration cycle
  • Support Business Intelligence Analysts in modelling data for visualization and reporting, using dimensional data modeling and aggregation optimization methods
  • Troubleshoot issues related to ingestion, data transformation and pipeline performance, data accuracy and integrity
  • Collaborate with business analysts, data scientists, data engineers, data analysts, solution architects and data modelers to develop data pipelines to feed our data marketplace
  • Assist in identifying, designing, and implementing internal process improvements: automating manual processes, optimizing data delivery, re-designing infrastructure for greater scalability, etc.
  • Work with tools in the Microsoft stack: Azure Data Factory, Azure Data Lake, Azure SQL Databases, Azure Data Warehouse, Azure Synapse Analytics Services, Azure Databricks, Microsoft Purview, and Power BI
  • Work within the agile SCRUM work management framework in delivery of products and services, including contributing to feature & user story backlog item development, and utilizing related Kanban/SCRUM toolsets
  • Assist in building the data catalog and maintaining relevant metadata for datasets published for enterprise use
  • Develop optimized, performant data pipelines and models at scale using technologies such as Python, Spark and SQL, consuming data sources in XML, CSV, JSON, REST APIs, or other formats
  • Document as-built pipelines and data products within the product description, and utilize source control to ensure a maintainable code-base
  • Implement orchestration of data pipeline execution to ensure data products meet customer latency expectations, dependencies are managed, and datasets are as up-to-date as possible, with minimal disruption to end-customer use
  • Create tooling to help with day-to-day tasks, and reduce toil through automation wherever possible
  • Work with Continuous Integration/Continuous Delivery and DevOps pipelines to automate infrastructure provisioning, code delivery, and product enhancement isolation, with proper release management and versioning
  • Monitor the ongoing operation of in-production solutions, assist in troubleshooting issues, and provide Tier 2 support for datasets produced by the team, on an as-required basis
  • Implement and manage appropriate access to data products via role-based access control
  • Write and perform automated unit and regression testing for data product builds, assist with user acceptance testing and system integration testing as required, and assist in design of relevant test cases
  • Participate in peer code review sessions, and approve non-production pull requests
  • Other duties as required

EDUCATION

  • 4-year university degree in computer science, computer/software engineering, or another relevant program in data engineering, data analysis, artificial intelligence, or machine learning

QUALIFICATIONS

  • Minimum 6 years' experience in data engineering
  • Experience as a Data Engineer building data pipelines
  • Fluent in creating data processing frameworks using Python, PySpark, SparkSQL and SQL
  • Experience with Azure Data Factory, ADLS, Synapse Analytics and Databricks
  • Experience building data pipelines for Data Lakehouses and Data Warehouses
  • Good understanding of data structures and data processing frameworks
  • Knowledge of data governance and data quality principles
  • Effective communication skills to translate technical details to non-technical stakeholders


Contact

Ontario Power Generation
700 University Ave
Toronto, Ontario, Canada
www.opg.com