Summary: The Artificial Intelligence Engineer role involves building and operating AI/ML infrastructure, supporting model deployment, and automating workflows. The position requires a strong background in AI Ops, MLOps, or Infrastructure Engineering, with hands-on experience in the relevant technologies. The role is based in Edinburgh, requires on-site presence three days a week, and is classified as outside IR35. The contract duration is expected to be 12 to 24 months, with extension very likely.
Salary (Rate): £71.00 hourly
City: Edinburgh
Country: United Kingdom
Working Arrangements: hybrid (on-site 3 days per week)
IR35 Status: outside IR35
Seniority Level: Mid-Level
Industry: IT
Location: Edinburgh, UK
Onsite: 3 days per week on-site (mandatory); remainder remote
Start: ASAP
Duration: 12-24 months (extension very likely)
Language: English (must-have)
Valid UK work permit / right to work
All required documentation to operate as a UK contractor / freelancer
Engagement classified as Outside IR35
What you’ll do
- Build and operate AI / ML infrastructure used in production
- Support model deployment, monitoring and scaling
- Automate workflows around training, evaluation and deployment
- Work with GPU-based systems, distributed compute and CI/CD pipelines
- Partner closely with data scientists and engineers to keep AI systems stable and fast
What you bring
- Strong background in AI Ops, MLOps, DevOps or Infrastructure Engineering
- Hands-on experience with Linux, automation, scripting (Python/Bash)
- Experience with distributed systems and compute-heavy environments
- Familiarity with containers & orchestration (Docker, Kubernetes or similar)
- Comfortable working on-site in Edinburgh 3 days per week
Nice to have
- GPU / CUDA experience
- Exposure to HPC or large-scale AI platforms
- Monitoring & observability tools (Prometheus, Grafana, etc.)
What we need
If you are interested and available – or if you know someone you would recommend – please send your updated CV together with a short email including your contact details to: Joseph@WorkGenius.com
Please always include:
- Availability start date
- Hourly rate (Edinburgh, UK & Remote)
- A short 2–3 line summary explaining why your background is a good fit for this project
Thank you!