Senior Data Engineer - Credit Loss Modeling & Provisioning @ Klarna
Job Description
Senior Data Engineer - Credit Loss Modeling & Provisioning
Engineering · Stockholm · Full-time · SEK 611,050 - 843,151
What You'll Do
- Build and maintain production data pipelines using Airflow-based orchestration and SQL to process credit balances, delinquency histories, credit scores, and provisioning outputs across multiple regions.
- Design and optimize complex SQL transformations in Amazon Redshift, working with multi-terabyte datasets spanning loan-level exposures, payment histories, charge-offs, write-offs, and recoveries.
- Develop ETL processes using PySpark on AWS Glue to extract and transform balance data, credit bureau scores, and feature sets from source systems.
- Own feature engineering pipelines that produce training datasets and model inputs, including delinquency features, payment behavior, and macroeconomic indicators.
- Build and maintain cloud infrastructure using Terraform to manage AWS resources including Batch compute environments, Lambda functions, ECS services, S3, SNS, and CloudWatch.
- Develop event-driven data flows using Kafka and Avro, including schema management via Schema Registry, to support bookkeeping and downstream financial systems.
- Orchestrate end-of-month production runs using Lambda and Batch - coordinating data preparation, model scoring, and result exports across regions.
- Maintain ML platform infrastructure (MLflow on ECS) supporting experiment tracking and model registry for the data science team.
- Build data quality and stability checks through reconciliation pipelines, freshness monitoring, and automated validations to catch issues before they reach production models.
- Develop monitoring dashboards to track input data quality, delinquency patterns, loss rates, and fair value metrics.
- Manage the losses data model covering provisioning, charge-offs, write-offs, recoveries, and fair value that feeds downstream financial reporting.
Who you are
- 1+ years of experience as a Data Engineer, Analytics Engineer, or similar role.
- Advanced SQL skills - complex multi-CTE queries, window functions, and performance-tuned transformations in a columnar warehouse (Redshift, BigQuery, or Snowflake).
- Python proficiency for data processing, pipeline orchestration, and scripting (pandas, boto3).
- Experience with workflow orchestration - Airflow, Dagster, Prefect, or similar DAG-based schedulers.
- Strong AWS fundamentals - hands-on experience with S3, Lambda, Batch, IAM, and at least one managed database service.
- Infrastructure as Code experience with Terraform or CloudFormation.
- Docker containerization for reproducible builds and deployments.
- Git workflows and CI/CD - automated testing and deployment pipelines.
Awesome to have
- Experience with Amazon Redshift specifically (distribution keys, sort keys, Spectrum, Redshift Serverless).
- Exposure to PySpark / AWS Glue for large-scale ETL.
- Experience with event streaming platforms (Kafka, Avro, Schema Registry).
- Experience deploying or maintaining MLflow or similar ML platform infrastructure.
- Familiarity with templated SQL pipelines (Jinja2 or similar).
- Understanding of financial data domains - credit risk, provisioning, accounting, or regulatory reporting.
- Experience with AWS Batch for compute-heavy workloads.
- Knowledge of IFRS 9, ECL concepts, or credit loss modeling is a strong plus but not required.
Apply
Upload your resume to autofill the application
Upload
First name *
Last name *
Email *
Dial code
Phone number
LinkedIn profile
Please upload your resume (preferably as a PDF file) *
Click or drag file to upload
Supported file format: .PDF; Maximum file size: 10.49 MB; Maximum number of files: 1.
* Mandatory
Are you currently based in the location the job is posted in? *
Yes
No
Are you legally authorised to work in the location where this position is based? *
Yes
No
Will you now or in the future require sponsorship for employment authorization or visa status in the location this position is based? *
Yes
No
AI disclaimer
This application process may use AI-enabled systems to support candidate evaluation based on factors such as experience, technical skills, and qualifications. All processing is carried out in accordance with applicable data protection, AI governance, and labour laws. Human oversight is maintained at all stages, and final hiring decisions are made by people. Personal data submitted as part of this application is not used to train AI models.
I have read and understood the privacy policy and acknowledge that my personal data will be processed as part of this application.
Submit
About Deel
Deel is a global payroll and HR compliance platform that enables companies to hire, onboard, and pay international employees and contractors in over 150 countries, in full compliance with local laws.
Industry: Human Resources Technology & Global Payroll