
Platform Engineer - m/w/d

Langdock

Berlin, Germany · Posted 11 hours ago · Full-time

Job details

Company

Langdock

Location

Berlin, Germany

Employment type

Full-time

Primary category

Software Development

Posted date

7 May 2026

Job description

Help Us Change the Way the World Works

Build something that matters.

Langdock exists to change the way the world works, bridging the gap between what technology can do and what people actually do with it. We bring all leading AI models into one secure, model-agnostic platform and make them usable across entire organizations. Over 6,000 companies use our platform every day, from fast-growing startups to some of Europe's largest enterprises. Their employees open Langdock to draft strategies, analyze documents, or automate workflows - helping them to work smarter, think more creatively, and reach their full potential.

The role

Platform Engineers at Langdock work on shared backend systems that many product features depend on. This includes the AI engine, queues, document processing, integrations, code execution, authentication, and billing.

The job is to make these systems reliable and understandable enough that other engineers can build on them. That means designing clear APIs, choosing the right data models, handling failure cases, writing tests for important invariants, adding useful observability, and keeping abstractions simple enough to maintain.

Typical problems include deciding which parts of a long conversation to keep in context, retrying background jobs without running the same side effect twice, handling model-provider failures during streaming responses, refreshing integration tokens before they expire, and enforcing tenant boundaries in shared services.
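
To make one of those problems concrete, here is a minimal sketch of retry-safe job processing using an idempotency key recorded alongside the side effect. This is an illustration, not Langdock's implementation: in production the store would be Redis (e.g. `SET NX` with a TTL) rather than a `Map`, and `chargeCustomer` is a hypothetical side effect.

```typescript
// Record an idempotency key with the result so a retried job
// returns the prior result instead of repeating the side effect.
const completed = new Map<string, string>();

type Job = { id: string; attempt: number };

// Hypothetical side effect that must not run twice.
function chargeCustomer(jobId: string): string {
  return `charged:${jobId}`;
}

function processJob(job: Job): string {
  const prior = completed.get(job.id);
  if (prior !== undefined) {
    // Retry after a crash-after-commit: replay the recorded result.
    return prior;
  }
  const result = chargeCustomer(job.id);
  completed.set(job.id, result);
  return result;
}

const first = processJob({ id: "job-42", attempt: 1 });
const retry = processJob({ id: "job-42", attempt: 2 });
console.log(first === retry); // true: the retry reused the first result
```

The same shape applies whether the job runner is BullMQ or anything else: the key is committing the result and the "done" marker together, so a retry can distinguish "never ran" from "ran but the acknowledgement was lost".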

What you will work on

Examples of the kind of systems Platform Engineers own:

  • The AI engine at the core of the platform. It handles the prompts users send through Langdock and abstracts over providers such as OpenAI, Anthropic, Google, Azure, Bedrock, Mistral, and open-source models. The work includes prompt caching, routing, failover across model deployments, and normalizing provider-specific behavior behind a stable internal interface.

  • The workflow runtime. Workflows need to execute reliably across agent steps, conditions, loops, structured-output extraction, human-in-the-loop pauses, and actions across hundreds of integrations. The platform work is about making that execution model predictable, observable, and safe to extend.

  • Context-window optimization. Long conversations, uploaded files, and tool calls need to stay correct while model costs stay under control. This is one of the most cost-sensitive parts of the platform.

  • A code execution service for running untrusted customer code. Today this means JavaScript in some places, but the direction includes broader execution environments such as Bash. The hard part is clear isolation: secrets, filesystem access, outbound network access, and tenant boundaries need to be controlled explicitly.

  • A flexible integration layer for connecting Langdock to external services. Most integrations are standard REST APIs, but the platform also needs to support industry standards for tools and agents, including Model Context Protocol (MCP) and agent-to-agent (A2A) communication.
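
The first bullet above, the stable internal interface over many model providers, can be sketched roughly like this. The interface shape and failover order are illustrative assumptions, not the actual engine design:

```typescript
// A provider-agnostic completion interface with ordered failover.
interface ModelProvider {
  name: string;
  complete(prompt: string): Promise<string>;
}

// Try each provider in order; fall through to the next on failure.
async function completeWithFailover(
  providers: ModelProvider[],
  prompt: string,
): Promise<string> {
  let lastError: unknown;
  for (const provider of providers) {
    try {
      return await provider.complete(prompt);
    } catch (err) {
      lastError = err; // remember the failure, try the next deployment
    }
  }
  throw new Error(`all providers failed: ${String(lastError)}`);
}

// Usage with stub providers: the primary fails, the backup answers.
const flaky: ModelProvider = {
  name: "primary",
  complete: async () => { throw new Error("rate limited"); },
};
const backup: ModelProvider = {
  name: "backup",
  complete: async (p) => `echo:${p}`,
};

completeWithFailover([flaky, backup], "hi").then((r) => console.log(r));
```

Callers only ever see the `ModelProvider` interface, which is what keeps provider-specific quirks (auth, rate limits, streaming formats) out of product code.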

You will pick up a platform area over time and can shape where it goes next.

Tech stack

  • TypeScript across a Turborepo monorepo

  • Next.js, React, and Tailwind on the front end

  • Node.js services and workers on the back end

  • PostgreSQL with Prisma; Redis with BullMQ

  • A multi-provider model abstraction across OpenAI, Anthropic, Google, Azure, Bedrock, Mistral, and open-source models

  • Sandboxed Node.js for code execution

  • Multi-cloud storage abstraction over AWS S3, Azure File Share, and GCS

  • Terraform for infrastructure orchestration

  • Kubernetes for workloads that need portable deployment across cloud providers

  • Linear for ticket management

  • Datadog and Sentry for observability

You should be familiar with most of this. We trust you to pick up the rest quickly.

How we ship

  • Small temporary squads of 2 to 3 engineers around a topic. Squads form, ship, and dissolve.

  • Every change is linked to a Linear ticket and ships via PR. CI runs lint, tests, and AI review.

  • We deploy continuously to production.

  • We use AI tools heavily in engineering. You are free to choose your own tools (e.g. Cursor, Claude, Codex). We are building a strong harness that lets engineers move fast while shipping high-quality software.

  • We are building a strong operating system around AI-assisted shipping: clear ticket context, focused branches, AI review before human review, explicit rollout notes for risky changes, and production verification after release.

  • The engineer who ships a change owns it in production. If something breaks, you lead the fix.

What we are looking for

  • 3 to 6 years of experience building back-end systems that handle real load: queues, caches, databases, streaming pipelines, distributed schedulers. You can talk through failure modes and tradeoffs from systems you have actually run.

  • Strong in TypeScript and Node. Opinionated about API design, abstractions, and where boundaries should sit.

  • Some infrastructure experience: Terraform, Kubernetes, cloud deployments, networking, or operating services across AWS, Azure, or GCP.

  • Very driven. You have high standards for yourself, move fast without needing to be pushed, and want to do the best work of your career.

  • Working knowledge of the LLM ecosystem at a technical level: context windows, tool calling, streaming protocols, provider quirks, prompt caching.

  • Habit of adding metrics, traces, and structured logs because you have been on call before and know you will need them.

  • Security-first mindset. Multi-tenant data isolation is a design constraint, not a postscript.

  • Heavy user of AI tooling in your own development workflow, with opinions on what works and what does not. We are looking for people who are already compounding their output with AI.

Working here

We work from one office in Berlin, Greifswalder Str. 212. Engineering is fully in person; we believe the hardest problems get solved faster at a whiteboard than in a Slack thread.

Most engineers start around 8:30. We eat lunch together, and dinner when people stay late. Running and the gym are part of the routine for many of us.

Compensation

Top of market, transparent levels, all roles include equity. Levels are tied to scope, not negotiation or years of experience. We narrow down the range early in the process.

Next steps

If this sounds like your kind of work, we would like to meet you.

We move fast. Most processes complete within two weeks.
