
Data Engineer

Vancouver, BC
As a result of growth, our client is looking to add a contract Data Engineer to the team to build new components and add functionality to a large-scale system currently being implemented.
  • 12-month Data Engineer contract
  • Great chance to be part of a large-scale business transformation
  • Enterprise organization with a global footprint that has seen tremendous year-over-year growth

What & Why: 

You will be responsible for creating and maintaining the optimal data pipeline architecture, and for building the components required for efficient extraction, transformation, and loading of data from a wide variety of sources using SQL and AWS ‘big data’ technologies. You will work with stakeholders, including the POs, BSAs, and the Data and Integration teams, to resolve data-related technical issues and support their data engineering needs. You will also build data tools that help the analytics and data science teams develop and optimize the product into an innovative industry leader.


This client has a global presence and is one of the most recognizable Canadian brands. They are well known for their corporate culture and recently won major awards for their accomplishments in 2019. 2020 is shaping up to be the largest year of their 20+ year history, with major investment in technology and an all-star leadership team that keeps you accountable and provides support while still affording you tremendous autonomy. They are located close to rapid transit in newly renovated offices in downtown Vancouver, with full remote onboarding capability.


You will bring the following education, skills and experience to the role:
  • Proven experience in cloud-based big data platforms 
  • Excellent understanding of software development and design principles 
  • Solid scripting capability for analysis and reporting (Strong PL/SQL) 
  • Strong understanding of algorithms and data structures 
  • Experience building and optimizing ‘big data’ data pipelines, architectures and data sets 
  • Strong background in data modeling concepts 
  • Deep understanding of Spark 
  • Previous working experience with Airflow, Kafka, Grafana, and Splunk 
  • Must have: Working experience with big data and data manipulation 
  • Desired: Familiarity with DevOps practices like CI/CD pipelines 
  • Desired: Working experience with cloud services namely AWS EMR, RDS, Glue, Athena, and Redshift 
  • Bachelor’s degree in Computer Science, MIS, or Business, or equivalent related experience 

Next Steps: 

If this opportunity excites you, and you’re confident that it’s a good fit for your experience and career goals, then we’d love to hear from you! Please send us your updated resume by applying to this posting, and one of our awesome recruiters will be in touch.