Senior Product Designer / Data Platform

We're looking for a Senior Product Designer — Data Platform (Healthcare Interop & Intelligence).

About HeyDonto

HeyDonto bi-directionally transforms fragmented dental and healthcare data into interoperable, FHIR-native experiences.

Our architecture spans on-prem synchronizers and cloud APIs, Kafka + Confluent Schema Registry, GCP, Temporal-orchestrated microservices, and an AI mapping engine rooted in evolutionary neural networks.

We’re extending these foundations toward self-healing, self-evolving substrates and semantic intelligence that adapt in real time across heterogeneous clinical ecosystems.

The Mission

Own and unify the end-to-end product design of HeyDonto’s data platform—Data Warehouse Management, Pipelines/Workflows, Procedures, AI/NN Analysis, Data Mapping + Mapping UI, Transformation UI, and Data Exposure UI.

The goal: make complex capabilities discoverable, safe, fast, and lovable for power users (data/ML/interop engineers), and approachable for newcomers.

Why this role matters now

  • Phase One: scalable, bi-directional FHIR ⇄ legacy EHR transformations.
  • Phase Two: adaptive, self-healing mapping and multi-schema intelligence.
  • Phase Three: self-evolving substrates with memory, confidence, and audit.
  • Phase Four (emerging): semantic intelligence across large healthcare warehouses.

You’ll design the connective tissue and workflows that expose these capabilities coherently in the UI—today and as they evolve.

What you’ll design (scope):

  • Platform Information Architecture:
    Global nav + object model across Site → Warehouse → Dataset → Pipeline (DAG) → Mapping Profile → Transform → Exposure → Run/Artifacts; Environments, versioning, RBAC/ABAC, audit/logging, approvals.

  • Pipeline & Orchestration UX:
    DAG canvas, inspectors, schedule/retries, Temporal-aware timelines, lineage & impact analysis.

  • Mapping & Transformation UIs:
    Visual schema browser, drag-map, AI-suggested mappings, validation gates, bi-directional transforms.

  • Editors & Dev-tooling:
    SQL/JSON/YAML/code + visual hybrid authoring, autocomplete, linting, inline docs.

  • Observability & Safety:
    Runs, logs, metrics, SLAs/SLOs, incident views, guardrails, rollbacks.

  • AI-assisted UX:
    “Explain this failure,” self-healing proposals with confidence and lineage.

  • Design System (Data-Dense):
    Components for tables, diff viewers, DAG nodes, code editors, theming & accessibility.

Key Responsibilities:

  • Drive product strategy with PM/Eng for authoring, runtime, governance, and intelligence features.

  • Deliver flows, prototypes, specs, and redlines; validate with usability tests on real data scenarios.

  • Establish and steward a Data Experience Design System (tokens, components, usage rules).

  • Define success metrics (e.g., time-to-first-pipeline, mapping accuracy, failure prevention).

  • Champion accessibility and performance in data-dense UI.

  • Partner deeply with platform, interop, security, and customer engineering teams.

Qualifications (must-have)

  • 5–10+ years in product design; 3+ years designing data/ETL/devtools or ML/observability platforms.

  • Portfolio with shipped work in DAG/pipeline tools, schema/mapping UIs, query/transform editors, or observability consoles.

  • Working fluency with SQL, schemas, ETL concepts (lineage, CDC, SCDs), versioning, and rollbacks.

  • Proven ability to structure complex domains into clear mental models and scalable IA.

  • Expert in Figma (components/variants, prototypes) and in writing UX specs for engineers.

Nice-to-have

  • Exposure to Kafka/Confluent, Flink/Temporal, dbt, Spark, GCP.
  • Healthcare data standards: FHIR/HL7, PHI handling, masking, audit.
  • MLOps/LLMOps consoles; evaluation tooling; confidence/lineage UX patterns.
  • Experience with data governance and RBAC/ABAC at scale.

Preferred Background

Data-Driven Software Companies: ClickHouse, Snowflake, Datadog, Airbyte, Confluent, Ververica, Segment.io

Cloud Providers: Google Cloud Platform (GCP), Azure, AWS

Hiring Details:

  • Work Type: Remote
  • Location: US and Mexico (Guadalajara, CDMX & Monterrey)

  • To apply, please send your English-language resume through LinkedIn or to [email protected], mentioning the name of the role you are applying for in the subject line of the email.

  • Please also share a portfolio with shipped work in DAG/pipeline tools, schema/mapping UIs, query/transform editors, or observability consoles.

In the body of the email, please include the following information:

  • Salary expectations
  • Availability for interview
  • Availability to join the team
