Documentation Index
Fetch the complete documentation index at: https://docs.trainy.ai/llms.txt
Use this file to discover all available pages before exploring further.
Using an AI coding assistant? Install the Konduktor Skills plugin (npx skills add Trainy-ai/konduktor-skills) to let your AI generate Python SDK code for you — run /draft-python-task to get started.
When to use the Python API
The CLI YAML format remains the primary way to describe jobs, but the same
capabilities are exposed through the Python package. Use it when you want to:
- Generate tasks dynamically inside your own tooling.
- Share a reusable Python helper that sets up common resources or file mounts.
- Launch quick experiments without writing YAML files.
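The first use case above can be sketched concretely: generating run commands for a small hyperparameter sweep in plain Python. The `train.py` script and its flags are illustrative placeholders, not part of the Konduktor API; the `konduktor.Task` construction is shown in a comment.

```python
# Sketch: generate per-experiment run commands programmatically.
# The script name and flag names below are illustrative placeholders.

def sweep_commands(learning_rates, epochs=10):
    """Build one shell command per learning rate."""
    return [
        f"python train.py --lr {lr} --epochs {epochs}"
        for lr in learning_rates
    ]

commands = sweep_commands([1e-3, 1e-4])
# Each command would then become its own task, e.g.:
# for i, cmd in enumerate(commands):
#     task = konduktor.Task(name=f"sweep-{i}", run=cmd, workdir=".")
```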
Available entry points
konduktor.Task
konduktor.Task(
    name: str,                                                        # required
    run: str | Callable[[int, list[str]], str | None] | None = None,  # required
    envs: dict[str, str] | None = None,                               # optional
    workdir: str | None = None,                                       # optional
    num_nodes: int | None = None,                                     # optional; defaults to 1
)
Useful setters:
task.set_resources(konduktor.Resources(...))
task.set_file_mounts({"/remote/path": "./local"})
task.set_serving(konduktor.Serving(...)) for serving deployments.
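Because `run` also accepts a callable of type `Callable[[int, list[str]], str | None]`, per-node commands can be computed at launch time. A minimal sketch, assuming (as is common for multi-node launchers, but not confirmed by this page) that the two arguments are the node rank and the list of node addresses — verify against your Konduktor version before relying on this:

```python
def per_node_run(node_rank: int, node_addrs: list[str]) -> str:
    """Return a per-node command: node 0's address is used as the rendezvous host.

    ASSUMPTION: the (int, list[str]) arguments are (node_rank, node_addrs).
    """
    head = node_addrs[0] if node_addrs else "localhost"
    return (
        f"python train.py --rank {node_rank} "
        f"--master-addr {head} --world-size {len(node_addrs)}"
    )

# The callable is pure Python, so it can be sanity-checked directly,
# then passed as the run argument:
# task = konduktor.Task(name="multi-node", run=per_node_run, num_nodes=2)
```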
konduktor.Resources
konduktor.Resources(
    cpus: int | float | str | None = None,           # required
    memory: int | float | str | None = None,         # required
    accelerators: str | None = None,                 # optional
    image_id: str | None = None,                     # required
    labels: dict[str, str] | None = None,            # required; must include kueue.x-k8s.io/queue-name
    job_config: dict[str, int | str] | None = None,  # optional; supports only max_restarts and completions
)
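Since `job_config` supports only the `max_restarts` and `completions` keys, a small helper can catch typos before submission. The key names come from the signature above; the helper itself is hypothetical and not part of Konduktor:

```python
# Hypothetical pre-submission check, mirroring the documented job_config keys.
_ALLOWED_JOB_CONFIG_KEYS = {"max_restarts", "completions"}

def check_job_config(job_config):
    """Raise ValueError on keys that Resources' job_config does not support."""
    unknown = set(job_config) - _ALLOWED_JOB_CONFIG_KEYS
    if unknown:
        raise ValueError(f"unsupported job_config keys: {sorted(unknown)}")
    return job_config

check_job_config({"max_restarts": 3, "completions": 2})  # passes
# check_job_config({"max_restart": 3})  # would raise ValueError
```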
konduktor.Serving
konduktor.Serving(
    min_replicas: int | None = None,   # required
    max_replicas: int | None = None,   # optional; defaults to min_replicas
    ports: int | None = 8000,          # optional; defaults to 8000
    probe: str | None = '/health',     # optional; defaults to None for general deployments.
                                       # Leave probe unset for vLLM deployments; Konduktor
                                       # defaults it to /health there.
)
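The defaulting rules above (`max_replicas` falls back to `min_replicas`; `probe` is left unset for vLLM, where Konduktor applies `/health` itself) can be mirrored in a small helper that assembles keyword arguments for `konduktor.Serving`. The helper and its `vllm` flag are illustrative, not part of the package:

```python
# Hypothetical helper that encodes the documented Serving defaults.
def serving_kwargs(min_replicas, max_replicas=None, ports=8000, vllm=False):
    """Build kwargs for konduktor.Serving following the documented defaults."""
    kwargs = {
        "min_replicas": min_replicas,
        # max_replicas defaults to min_replicas when not given.
        "max_replicas": max_replicas if max_replicas is not None else min_replicas,
        "ports": ports,
    }
    if not vllm:
        # General deployments: probe defaults to None; set one explicitly
        # if the service exposes a health endpoint.
        kwargs["probe"] = None
    # vLLM deployments: omit probe entirely; Konduktor defaults it to /health.
    return kwargs

# serving = konduktor.Serving(**serving_kwargs(min_replicas=1, vllm=True))
```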
konduktor.launch
konduktor.launch(
    task: konduktor.Task,
    dryrun: bool = False,
    detach_run: bool = False,
) -> str | None
Submits the task. With dryrun=True, Konduktor prints the rendered spec instead
of launching. With detach_run=True, the call returns immediately after
submission; otherwise logs stream until the job completes. The return value is
the job ID when available.
Example
import konduktor
# Describe the work to run on each node.
task = konduktor.Task(
    name="python-api-demo",
    run="python train.py --epochs 10",
    workdir=".",
    envs={"myenv": "foo"},
)

# Specify compute requirements and image
resources = konduktor.Resources(
    cpus=4,
    memory=12,
    accelerators="H100",
    image_id="docker.io/ryanattrainy/pytorch-mnist:cpu",
    labels={"mylabel": "bar"},
    job_config={
        "max_restarts": 3,
        "completions": 2,
    },
)
task.set_resources(resources)

job_id = konduktor.launch(task)
print(f"Submitted job: {job_id}")
Save the file (for example as launch_from_python.py) and run:
python launch_from_python.py