Senior Data Engineer
- London, United Kingdom (Hybrid)
- Contract
- Software Development
- £650 per day

We are seeking a highly skilled Senior Data Engineer to help shape and scale a global analytics platform supporting front-office trading across multiple asset classes. The role sits at the intersection of data engineering, cloud technologies, and advanced analytics, building solutions that power decision-making in a fast-moving, data-driven environment.
Key Responsibilities
- Design, build, and maintain scalable data pipelines and Delta Lake architectures in Databricks on AWS.
- Develop and enhance enterprise-level data warehouses to ensure performance, reliability, and data quality for trading analytics.
- Collaborate with data scientists, quants, and ML engineers to prepare ML-ready datasets and deliver production-grade ML/AI pipelines.
- Implement CI/CD pipelines, testing frameworks, and observability tools for robust data workflows.
- Contribute to MLOps practices, including model tracking, deployment, and monitoring with Databricks and MLflow.
- Participate in solution design, code reviews, and cross-functional collaboration.
- Ensure compliance with data governance, security, and performance standards.
- Stay current with developments in Databricks and the wider cloud data ecosystem to drive innovation and best practice.
 
Skills & Experience
- BS/MS in Computer Science, Software Engineering, or a related technical field.
- 8+ years of hands-on experience designing and implementing large-scale distributed data pipelines.
- Deep expertise in Apache Spark, PySpark, and Databricks (Delta Lake, Unity Catalog, MLflow, Databricks Workflows).
- Strong proficiency in Python and SQL, with experience building modular, testable, reusable pipeline components.
- Proven track record with AWS services (S3, Lambda, Glue, API Gateway, IAM, EC2, EKS) and their integration with Databricks.
- Advanced skills in Infrastructure as Code (Terraform) for provisioning and managing data infrastructure.
- Experience with MLOps pipelines and modern ML frameworks (e.g., scikit-learn, XGBoost, TensorFlow).
- Familiarity with real-time streaming architectures (Kafka, Structured Streaming) is a plus.
- Strong DevOps practices in data engineering: versioning, testing, deployment automation, monitoring.
- Understanding of data governance, access control, and compliance in regulated industries.
- Excellent communication, collaboration, and problem-solving skills with a strong sense of ownership.
- Experience in commodity trading markets (e.g., power, gas, crude, freight) is advantageous.
- Relevant certifications in Databricks, AWS, or big data technologies are preferred.
 