About the Role
Cursor ships daily. Every release leaves signals behind: telemetry, prompts, completions, agent runs, sessions. Those signals power model improvement, evals, and experimentation. Data infrastructure is what turns them into something teams can trust.
A lot of systems here started simple so we could move fast. Over time, the constraints change and the “good enough” version becomes the bottleneck. This role owns the full ladder: patch what should be patched, redesign what should be redesigned, ship the replacement, and operate it.
Privacy guarantees are part of correctness. What we can retain and use depends on Privacy Mode and org configuration, and getting that wrong breaks a product promise. We choose work by business impact: what blocks product and model teams today, and what will block them next month.
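As a concrete (and entirely hypothetical) illustration of what "privacy guarantees are part of correctness" means in practice, a retention decision like this often reduces to a small policy check at ingestion time. The field names and rules below are invented for the sketch, not Cursor's actual implementation:

```python
from dataclasses import dataclass

# Hypothetical sketch: the Event shape and policy logic are illustrative
# assumptions, not an existing system.

@dataclass(frozen=True)
class Event:
    org_id: str
    privacy_mode: bool  # user or org has Privacy Mode enabled
    payload: dict

def retainable(event: Event, org_allows_training: bool) -> bool:
    """An event may be retained for model improvement only when neither
    Privacy Mode nor org configuration forbids it."""
    if event.privacy_mode:
        return False
    return org_allows_training

events = [
    Event("acme", privacy_mode=True, payload={}),
    Event("acme", privacy_mode=False, payload={}),
]
kept = [e for e in events if retainable(e, org_allows_training=True)]
print(len(kept))  # 1
```

The point of putting the check at the ingestion boundary is that everything downstream can then assume retained data is usable, rather than re-litigating the policy in every consumer.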
Sample projects include…
- A core pipeline started as a pragmatic reuse of infrastructure built for something else. It works, but it cannot guarantee properties downstream consumers now need (for example, point-in-time consistency). You design and ship the replacement while keeping the existing system running.
- A new product surface ships without instrumentation. You talk to the team, define what needs to be captured, and wire it through before the absence becomes anyone else’s problem.
- Eval coverage drops. You trace it to an instrumentation gap introduced weeks ago by a product change nobody flagged. You fix the gap, add a contract so it cannot recur, and ship the dashboard that would have caught it earlier.
- Multiple consumers depend on overlapping data. You design schema evolution and validation so changes in one place do not silently degrade the others.
- Storage costs rise faster than usage. You decide what is worth keeping, implement retention and compression, and delete what is not.
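The schema-evolution project above is, at its core, a compatibility contract enforced before a change ships. A minimal sketch of one such contract, under assumed rules (existing fields may not be removed or change type; new fields may be added freely) that are illustrative rather than any existing system's:

```python
# Hypothetical backward-compatibility check for schema evolution.
# The rules encoded here are assumptions for illustration only.

def backward_compatible(old: dict[str, str], new: dict[str, str]) -> list[str]:
    """Return violations that would silently break existing consumers
    of the old schema. An empty list means the change is safe."""
    violations = []
    for field, ftype in old.items():
        if field not in new:
            violations.append(f"removed field: {field}")
        elif new[field] != ftype:
            violations.append(f"type change on {field}: {ftype} -> {new[field]}")
    return violations

old = {"session_id": "string", "latency_ms": "int"}
new = {"session_id": "string", "latency_ms": "float", "model": "string"}
print(backward_compatible(old, new))  # ['type change on latency_ms: int -> float']
```

Run in CI against every proposed schema change, a check like this turns "silently degrade the others" into a build failure the producer sees before anyone downstream does.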
What we're looking for
We’re looking for someone who has built real systems at scale and cares about correctness, cost, and ergonomics.
Strong signals include:
- Deep experience with Spark (Databricks or open-source Spark both count)
- Production experience with Ray Data
- Hands-on ownership of large data pipelines and storage systems
- Comfort debugging performance issues across client instrumentation, streaming, storage, and model-facing workflows, as well as the compute, storage, and networking layers beneath them
- Clear thinking about data modeling and long-term maintainability
- Good judgment about when to patch and when to rebuild
Nice to have
- Experience running or scaling ClickHouse
- Familiarity with dbt, Dagster, or similar orchestration and modeling tools
We're in-person, with cozy offices in North Beach, San Francisco, and Manhattan, New York, replete with well-stocked libraries.