Roadmap
March
★
Agent workers through HTTP (Backend)
Currently, remote agent workers still need to connect to the central PostgreSQL database over a direct TCP connection, rely on PostgreSQL RLS policies, and require a deep understanding of both Windmill and PostgreSQL.
HTTP agent workers will communicate with the central servers over HTTP only, and will be seamless to set up: a secure policy passed as config is all that is needed.
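As a rough illustration, such a policy-as-config could look like the following sketch; every field name and the overall shape are assumptions, not the final config format:

```ts
// Hypothetical sketch of an HTTP agent worker policy: all field names and
// the overall shape are illustrative assumptions, not the final format.
interface AgentWorkerPolicy {
  baseUrl: string;     // central Windmill server the agent reaches over HTTPS
  workerGroup: string; // worker group the agent pulls jobs for
  tags: string[];      // job tags this agent is allowed to execute
  token: string;       // scoped token issued by the central server
}

const policy: AgentWorkerPolicy = {
  baseUrl: "https://windmill.example.com",
  workerGroup: "agent",
  tags: ["agent"],
  token: process.env.AGENT_TOKEN ?? "", // never a direct database credential
};
```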
April
Batch Runs UI (UI)
The goal of this feature is to allow running more than one job at once from the UI.
Scenarios that need to be handled:
Filtering jobs on the runs page, selecting them (all within the page, all within the current filters, or one by one) and re-running all of them (v0)
Take a time window in the past for a given schedule and run every tick that would have occurred in that window, optionally skipping the ticks that did actually happen
In terms of job arguments, here are the different scenarios:
Re-run with exactly the same args as the original job (v0)
If the selected jobs span different script or flow versions, which have different schemas:
Have a tab for each script/flow version containing a schema form
Have a "common" tab where fields shared across versions can be set
Every field can be set either to a static value or to a JavaScript expression, as for scripts/flows; the values usable in those expressions are the date at which the job was originally scheduled and the values of other fields (see the sketch after this list)
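To make the argument scenarios concrete, here is a hedged sketch of what a batch re-run request could look like, with per-field static values or JavaScript expressions; all type and field names are hypothetical:

```ts
// Illustrative sketch only: the request shape and every field name below are
// assumptions about how a batch re-run could encode per-field argument
// overrides, not an existing Windmill API.
type FieldValue =
  | { kind: "static"; value: unknown }  // same fixed value for every re-run
  | { kind: "js"; expr: string };       // JavaScript expression evaluated per job

interface BatchRerunRequest {
  jobIds: string[];                     // jobs selected on the runs page
  // overrides keyed by script/flow version, mirroring the per-version tabs;
  // "common" holds the fields shared by all selected versions
  argOverrides: Record<string, Record<string, FieldValue>>;
}

const request: BatchRerunRequest = {
  jobIds: ["<uuid-1>", "<uuid-2>"],
  argOverrides: {
    common: {
      // `scheduled_at` stands for the date the job was originally scheduled;
      // other fields can be referenced by name, as in flow input transforms
      day: { kind: "js", expr: "scheduled_at.toISOString().slice(0, 10)" },
      dry_run: { kind: "static", value: false },
    },
  },
};
```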
★
Global Windmill AI Copilot/Chat (AI)
- Cursor-like inline autocompletion for all script editors
- MCP server to give context about Windmill workspace
- AI chat panel available globally that adapts to context and unifies all AI interactions (AI Fix, AI gen, AI edit, Workflow Edit, Explain Code, Summarize info from workspace)
May
★
Dedicated workers improvements (Backend)
Dedicated workers are currently a setting for individual scripts, flows and worker groups, and a dedicated worker can only be dedicated to one script or flow. We mean to change that so that, in the extreme case, it can handle a full workspace.
We want them to have a configuration, call it a dedicated worker configuration, that selects all the flows and scripts (and potentially apps) the worker is dedicated to:
for all of those, it needs to do a compaction for TypeScript and Python: internally, it routes each job to the right subfunction based on the job path, which means all those scripts can run on the same runtime and a dedicated worker has at most two running runtimes (Bun and Python). These configurations can be deployed; each deployment has a deployment version and a status. The status reflects the creation of a single lockfile for TypeScript and a single lockfile for Python: if different scripts have incompatible requirements, the deployment fails; otherwise it passes.
Instead of having to declare on the script or flow itself that it runs on a dedicated worker, scripts and flows will be updated automatically when they are part of a successful deployment of a worker config, such that their tag becomes dedicated:<dedicated_worker_config_name>
The creation of the compacted view is the "hard part". Once we have it, we can use our normal traversers to generate, or fail to generate, the single lockfile.
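As an illustration of the compaction, here is a minimal sketch of what the generated TypeScript entrypoint could look like; the dispatch shape and all names are hypothetical, not the actual generated code:

```ts
// Hypothetical sketch of the compacted TypeScript entrypoint that a
// successful dedicated worker deployment could generate: one Bun runtime
// serving every selected script, dispatching on the job path. In the real
// generated file each handler would be an import of that script's `main`;
// inline stubs are used here so the sketch is self-contained.
const routes: Record<string, (args: Record<string, unknown>) => Promise<unknown>> = {
  "f/billing/invoice": async (args) => ({ invoiced: args.customer }),
  "f/crm/sync_leads": async () => ({ synced: true }),
};

// Called for each job pulled by the dedicated worker: route internally to the
// right subfunction instead of starting a fresh runtime per script.
export async function run(jobPath: string, args: Record<string, unknown>): Promise<unknown> {
  const handler = routes[jobPath];
  if (!handler) throw new Error(`no compacted handler for job path ${jobPath}`);
  return handler(args);
}
```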
June
July
Shardable job queue for unlimited scalability (Backend)
Windmill can scale horizontally without limit, except for the database. The biggest bottleneck is the v2_job_queue table, which caps the maximum theoretical throughput of Windmill at around 20k requests per second. By allowing the job queue to be sharded across multiple databases (sharding by workspace id first, then by the hash of the uuid of the root job), the scalability of Windmill becomes virtually infinite.
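A minimal sketch of the described shard selection, assuming a per-workspace shard pool; the hash choice and pool structure are illustrative assumptions:

```ts
// Route by workspace id first (each workspace owns a pool of shards), then by
// a hash of the root job's uuid, so a whole flow run stays on one database.
import { createHash } from "node:crypto";

function pickShard(
  workspaceShards: Map<string, string[]>, // workspace id -> its shard pool
  workspaceId: string,
  rootJobId: string,
): string {
  const pool = workspaceShards.get(workspaceId) ?? ["shard-default"];
  const h = createHash("sha256").update(rootJobId).digest().readUInt32BE(0);
  return pool[h % pool.length];
}

// All jobs sharing a root job uuid (i.e. one flow run) land on the same shard.
const shard = pickShard(
  new Map([["my_workspace", ["shard-0", "shard-1", "shard-2"]]]),
  "my_workspace",
  "123e4567-e89b-12d3-a456-426614174000",
);
```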
TBD
Cloudflare Workers support (Backend)
Cloudflare Workers support for native jobs