BeZero Carbon is a global ratings agency for the Voluntary Carbon Market. Our ratings allow all market participants to price and manage risk. BeZero’s ratings and research tools support buyers, intermediaries, investors, and carbon project developers.
Founded in April 2020, our 100-strong team combines expertise in climate and earth sciences, sell-side financial research, earth observation, machine learning, data and technology, engineering, and public policy. We work across six continents.
BeZero is looking for a (geospatial) data engineer to join our data engineering & data science team. The team currently consists of five experienced data engineers and scientists, who build data and machine learning products for our client-facing platform and internal teams. The team sits at the heart of the business, working across our ratings, product, and technology teams. This role will also work closely with our Earth Observation (EO) research team and is focused on building scalable data pipelines for our geospatial data products.
For this role we prefer an engineer with experience handling geospatial data sets, but we also welcome applications from experienced engineers who are looking to move into geospatial data engineering and don’t have that experience yet.
As a data team, we have a bias towards shipping products, staying close to our internal and external customers, and owning our infrastructure and deployments end-to-end. This is a team that follows software engineering best practices closely: no production code found in Jupyter notebooks here! Our data stack looks as follows:
Snowflake as our central data warehouse for tabular data. AWS S3 is used for any non-tabular data (e.g., satellite imagery data).
dbt for building SQL-style data models and taking care of the ‘T’ in ELT.
Python jobs for non-SQL data transformations, using packages like NumPy, Pandas, scikit-learn, PyTorch, and many more.
Prefect as our workflow orchestration manager, with our jobs executed on AWS ECS.
Metabase as a dashboarding solution for end-users.
GitHub Actions for CI/CD.
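To give a flavour of the ELT pattern this stack implements, here is a purely illustrative sketch in plain Python. The function names and sample data are hypothetical stand-ins: in the real stack, extraction and loading would be Prefect tasks landing data in Snowflake or S3, and the transform step would live in dbt SQL models.

```python
# Illustrative ELT sketch only — hypothetical names and in-memory data
# standing in for Prefect tasks, Snowflake tables, and dbt models.

def extract():
    # Pretend this pulls raw project records from an external source.
    return [
        {"project": "forest-a", "hectares": "120.5"},
        {"project": "forest-b", "hectares": "87.0"},
    ]

def load(raw):
    # In a real pipeline this would land raw rows in the warehouse;
    # here we just tag each record with its source system.
    return [{**row, "source": "api"} for row in raw]

def transform(staged):
    # The 'T' in ELT: type-cast and tidy columns (dbt would do this in SQL,
    # downstream of the raw load, rather than in Python).
    return [
        {
            "project": r["project"],
            "hectares": float(r["hectares"]),
            "source": r["source"],
        }
        for r in staged
    ]

if __name__ == "__main__":
    rows = transform(load(extract()))
    print(rows[0])  # {'project': 'forest-a', 'hectares': 120.5, 'source': 'api'}
```

The key design point is that raw data is loaded before it is transformed, so the warehouse keeps an untouched copy of every source and transformations stay reproducible.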
As a geospatial data engineer, you will be deeply embedded in our data and EO teams. You’ll be responsible for building productionised data pipelines that handle (large) satellite-derived and other geospatial data sets, defining best practices for geospatial data analysis with our EO scientists, and playing a key role in building the underlying infrastructure for our computer vision ML models (e.g., for forest cover segmentation).
You’re welcome to work in our London-based office, but we very much welcome remote applications if you can work within GMT +/- 3 hours. You’ll report to our Chief Data Strategist Thomas, who recently joined from the data and machine learning consultancy QuantumBlack and McKinsey & Co, and who represents the data organisation at the executive level.
- You will be an individual contributor on our central data team, focused on scoping and building geospatial data products to be deployed on our carbon markets platform.
- You will be charged with monitoring and maintaining data flows, enabling self-service analysis for business users, and creating and improving scalable data pipelines.
- You will build automated data pipelines that collect and manipulate large data sets (such as optical satellite imagery, SAR, LiDAR, climate data, and others).
You’ll be our ideal candidate if:
- You care deeply about the climate and carbon markets and are excited by solutions for decarbonizing our economy.
- You have 3+ years of experience in building data pipelines in production for (geospatial) data engineering and machine learning use cases.
- You have experience handling raster and vector data formats (GeoJSON, Shapefile, etc.) and geospatial SQL and Python tooling (e.g., PostGIS, xarray, geopandas, rasterio, shapely, GDAL).
- You have experience with using a workflow orchestration tool (e.g., Prefect, Dagster, Airflow), cloud platforms (we use AWS, but experience with any cloud platform will do), and the Python scientific computing stack (NumPy, SciPy, matplotlib, pandas, etc).
- You can write clean, maintainable, scalable, and robust code in Python and SQL, and are familiar with collaborative coding best practices (e.g., PEP 8 code style, unit testing, continuous integration, and tooling such as flake8, black, and isort).
- Bonus points (we’d still like to hear from you even if you don’t have experience in any of these):
- You have practical knowledge of software engineering concepts and best practices, inc. DevOps, DataOps, and MLOps.
- You have experience in building and deploying deep learning models for computer vision use-cases.
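As a small, purely illustrative taste of the vector-data work mentioned above, the snippet below computes the bounding box of a GeoJSON polygon using only the standard library. The feature and its coordinates are made up for this example; in practice you would reach for shapely or geopandas (e.g., a geometry’s `.bounds`) rather than hand-rolling this.

```python
import json

# Hypothetical GeoJSON Feature for illustration — not real project data.
feature = json.loads("""
{
  "type": "Feature",
  "geometry": {
    "type": "Polygon",
    "coordinates": [[[-0.10, 51.52], [-0.08, 51.52],
                     [-0.08, 51.54], [-0.10, 51.54], [-0.10, 51.52]]]
  },
  "properties": {"name": "example-plot"}
}
""")

def bbox(geom):
    # Take the polygon's exterior ring and return (min_x, min_y, max_x, max_y).
    ring = geom["coordinates"][0]
    xs = [pt[0] for pt in ring]
    ys = [pt[1] for pt in ring]
    return (min(xs), min(ys), max(xs), max(ys))

print(bbox(feature["geometry"]))  # (-0.1, 51.52, -0.08, 51.54)
```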
Research has shown that women are less likely than men to apply for a role if they don’t have experience in 100% of the requirements outlined in a job description. Please know that even if you don’t have experience in all the areas above but think you could do a great job and are excited about the role and what we’re building, we’d love to hear from you!
What we’ll offer:
- Competitive remuneration and opportunity for equity in a rapidly growing VC-backed start-up through share options.
- Ability to learn and develop alongside a range of sector specialists from the scientific, technology, economic and business community.
- A fast-growing data organisation that is central to the company, with clear opportunities to set the pace and vision for how we work with geospatial data.
- Flexible working arrangements: A central London office space (Old Street) if you wish to work from an office, but opportunity to be completely remote as well.
- Competitive benefits package including 25 days holiday, an extra half day of leave on your birthday, extra holiday between Christmas and New Year, life insurance, critical illness cover, income protection, private medical insurance, dental insurance, a healthcare cash plan, cycle to work, workplace nursery, a monthly wellbeing budget, and more.
- Enhanced parental leave
Our interview process:
- Initial screening interview
- Technical case study interview with data engineers and/or remote sensing scientists
- Final interviews with senior management
- Reference checks + offer
We value diversity at BeZero Carbon. We need a team that brings different perspectives and backgrounds together to build the tools needed to make the voluntary carbon market transparent. We’re therefore committed to not discriminate based on race, religion, colour, national origin, sex, sexual orientation, gender identity, marital status, veteran status, age, or disability.