Roadmap

March

Java Support (Languages)


GCP PubSub Support (Integration)

Currently, Windmill has no native support for GCP PubSub; native support would be a welcome addition.

Backend-side jsonschema validation of args (Backend)

Currently, the jsonschema validation of args is done on the frontend. This is a problem because there is no guarantee that the submitted payload actually conforms to the jsonschema.
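As a sketch of what server-side validation could look like (a hand-rolled check for a small jsonschema subset; `validate_args` and its coverage are hypothetical and stand in for, not reproduce, Windmill's actual backend code):

```python
def validate_args(schema: dict, args: dict) -> list:
    """Validate job args against a small jsonschema subset (type + required),
    returning a list of human-readable errors (empty list == valid)."""
    errors = []
    props = schema.get("properties", {})
    for name in schema.get("required", []):
        if name not in args:
            errors.append(f"missing required arg: {name}")
    type_map = {
        "string": str, "integer": int, "number": (int, float),
        "boolean": bool, "array": list, "object": dict,
    }
    for name, value in args.items():
        expected = props.get(name, {}).get("type")
        py_type = type_map.get(expected)
        if py_type is None:
            continue  # no (or unknown) declared type: accept
        # bool is a subclass of int in Python, so reject True/False for numbers
        if expected in ("integer", "number") and isinstance(value, bool):
            errors.append(f"arg {name!r}: expected {expected}, got boolean")
        elif not isinstance(value, py_type):
            errors.append(f"arg {name!r}: expected {expected}, got {type(value).__name__}")
    return errors
```

Running the same check server-side means a hand-crafted HTTP payload that skips the frontend form is rejected before a job is ever queued.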

Dynamic table and column names for SQL (Languages)

Currently, SQL scripts only support prepared-statement parameters, which can carry values but not identifiers. It would be great to also support dynamic table and column names.
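One common way to support dynamic identifiers safely is to validate and quote them separately from value parameters, which stay in the prepared statement. A minimal sketch (helper names are hypothetical):

```python
import re

# Plain SQL identifiers only: letters, digits, underscore, not starting with a digit.
IDENT_RE = re.compile(r"^[A-Za-z_][A-Za-z0-9_]*$")

def quote_ident(name: str) -> str:
    """Validate and double-quote a dynamic SQL identifier."""
    if not IDENT_RE.match(name):
        raise ValueError(f"invalid identifier: {name!r}")
    return f'"{name}"'

def build_query(table: str, column: str) -> str:
    # Identifiers are interpolated only after validation;
    # the value remains an ordinary prepared-statement parameter ($1).
    return f"SELECT {quote_ident(column)} FROM {quote_ident(table)} WHERE id = $1"
```

Note that double-quoted identifiers are case-sensitive in PostgreSQL; a stricter design would also check the name against an allowlist of known tables and columns.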

Agent workers through HTTP (Backend)

Currently, remote agent workers still need to connect to the central PostgreSQL database over a direct TCP connection, work by leveraging PostgreSQL RLS policies, and require a complex and deep understanding of both Windmill and PostgreSQL.

HTTP agent workers will communicate with the central servers over HTTP only, and will be seamless to set up, requiring only a secure policy in the config.

April

Ansible Improvements (Languages)


Batch Runs UI (UI)

The goal of this feature is to allow running more than one job at once from the UI.

Scenarios that need to be handled:

Filtering jobs from the runs page, selecting them (all within the page, all within the filters, or one by one) and re-running all of them (v0)
Taking a time window of a schedule that lies in the past and running all ticks that would have happened in it, optionally skipping the ticks that did actually run
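The second scenario, replaying the missed ticks of a past window, can be sketched for a simple fixed-interval schedule (real Windmill schedules are cron-based; the helper and its signature are hypothetical):

```python
from datetime import datetime, timedelta

def past_ticks(start: datetime, end: datetime, first_tick: datetime,
               interval: timedelta, already_ran=()) -> list:
    """List schedule ticks in [start, end], skipping ticks that already ran.
    Fixed interval for simplicity; a cron iterator would replace the arithmetic."""
    ran = set(already_ran)
    # Advance to the first tick at or after `start`.
    t = first_tick
    if t < start:
        t += ((start - t) // interval) * interval
        if t < start:
            t += interval
    ticks = []
    while t <= end:
        if t not in ran:
            ticks.append(t)
        t += interval
    return ticks
```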

In terms of job arguments, here are the different scenarios:

Re-run with exactly the same args as the original job (v0)
If the selection spans different script or flow versions, which have different schemas:
- a tab for each script/flow version containing a schema form
- a "common" tab where fields shared across versions can be set once
- every field can be set either to a static value or to a JavaScript expression, as for scripts/flows; the values usable in those expressions are the date at which the job was originally scheduled and the values of other fields
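The static-value-or-expression resolution could look like this, with Python callables standing in for the JavaScript expressions (a hypothetical sketch; names are illustrative):

```python
from datetime import datetime

def resolve_args(field_specs: dict, scheduled_at: datetime) -> dict:
    """Resolve batch-run arguments: each field is either a static value or a
    callable (standing in for a JavaScript expression) receiving a context
    holding the original schedule date and the fields resolved so far."""
    ctx = {"scheduled_for": scheduled_at, "fields": {}}
    for name, spec in field_specs.items():
        # Fields are resolved in order, so an expression may reference
        # any field declared before it.
        ctx["fields"][name] = spec(ctx) if callable(spec) else spec
    return ctx["fields"]
```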

Interactive Script Debugger (Developer)

Add breakpoints and run an interactive debugger for Python and TypeScript.

Code/React UI Builder (UI)

Write apps fully in code in React/Svelte/Vue with any libraries, with a live preview and the code editor side by side. If you like Lovable/v0.dev, you will love this: it will be equivalent but much better integrated with Windmill's backend flow/script capabilities.

Free Form Flows (Default)

- Free positioning of nodes
- Colorable rectangles to group nodes
- Free-text annotations anywhere

Global Windmill AI Copilot/Chat (AI)

- Cursor-like inline autocompletion for all script editors
- MCP server to give context about the Windmill workspace
- AI chat panel available globally that adapts to context and unifies all AI interactions (AI Fix, AI Gen, AI Edit, Workflow Edit, Explain Code, Summarize info from workspace)

May

Dedicated workers improvements (Backend)

Dedicated workers are currently a setting on individual scripts, flows and worker groups, and a dedicated worker can only be dedicated to a single script or flow. We mean to change that so that, in the extreme case, a dedicated worker can handle a full workspace.

We want them to have a configuration, let's call it the dedicated worker configuration, that selects all the flows and scripts (potentially apps) the worker is dedicated to.
For all of those, it performs a compaction for TypeScript and Python: jobs are routed internally to the right subfunction based on the job path, which means all those scripts can run on the same runtime and a dedicated worker has at most 2 running runtimes (bun, python). Those configurations can be deployed, and each deployment has a deployment version and a status. The status corresponds to creating the single lockfile for TypeScript and the single lockfile for Python: if different scripts have incompatible requirements it fails, otherwise it passes.

Instead of having to declare on a script or flow that it runs on a dedicated worker, scripts and flows will be updated automatically when they are part of a successful deployment of a worker config, so that their tag becomes dedicated:<dedicated_worker_config_name>.
The creation of the compacted view is the hard part. Once we have it, we can use our normal traversers to generate (or fail to generate) the single lockfile.
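The internal routing that compaction enables can be sketched as follows (hypothetical names, not the actual implementation): one long-lived entrypoint per runtime dispatches each job to its subfunction by script path.

```python
def make_compacted_entrypoint(handlers: dict):
    """Build a single entrypoint that routes a job to the right subfunction by
    its script path, so many scripts share one long-lived runtime."""
    def entrypoint(job_path: str, args: dict):
        handler = handlers.get(job_path)
        if handler is None:
            raise KeyError(f"no compacted handler for {job_path}")
        return handler(**args)
    return entrypoint
```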

OpenAPI -> HTTP routes + script templates (Integration)

Ability to transform an OpenAPI spec into a set of HTTP routes plus scripts pre-generated by AI.
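Enumerating the candidate routes from a parsed spec is the mechanical half of this; a minimal sketch over an OpenAPI document already loaded as a dict (the helper is hypothetical):

```python
def routes_from_openapi(spec: dict) -> list:
    """Enumerate (method, path, summary) triples from a parsed OpenAPI spec,
    one per operation -- each a candidate Windmill HTTP route + script pair."""
    http_methods = {"get", "post", "put", "patch", "delete", "head", "options"}
    routes = []
    for path, ops in spec.get("paths", {}).items():
        for method, op in ops.items():
            if method.lower() in http_methods:
                routes.append((method.upper(), path, op.get("summary", "")))
    return routes
```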

Workspace -> OpenAPI (Integration)

Generate all the webhooks and HTTP routes as OpenAPI endpoints, including summaries and descriptions.
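Conversely, a minimal sketch of emitting an OpenAPI document from a workspace's webhook list (the webhook path shape and the helper are hypothetical):

```python
def openapi_from_webhooks(webhooks: list) -> dict:
    """Build a minimal OpenAPI 3 document from (path, summary, description)
    triples describing a workspace's webhook endpoints."""
    paths = {}
    for route, summary, description in webhooks:
        # Windmill webhooks take a JSON args payload, so model them as POST.
        paths[route] = {
            "post": {
                "summary": summary,
                "description": description,
                "responses": {"200": {"description": "job result"}},
            }
        }
    return {
        "openapi": "3.0.3",
        "info": {"title": "Windmill workspace", "version": "1.0.0"},
        "paths": paths,
    }
```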

Improved local dev experience (Developer)

- Make Python and TypeScript (bun) respect the Windmill lockfile of a script when running it
- Leverage the MCP integration to add context to AI IDEs like Cursor and Windsurf

June

Ruby Support (Languages)


AI step as Flow primitive (AI)

- More powerful and native API calls to AI models

Exhaustive Hub integrations generated by AI (Integration)

- Improve AI agents to generate both code and tests for all possible integrations
- Make it easier for the community to contribute to this process

July

Data pipelines v2 (Backend)

- Apache Iceberg support
- Data lineage, including column-level lineage
- Asset/dataset-centric view of data pipelines
- Better support for data materialization
- Better support for streaming/incremental pipelines
- Integration with metadata platforms
- Better integration with DuckDB

Shardable job queue for unlimited scalability (Backend)

Windmill can scale horizontally without limit, except for the database. The biggest bottleneck is the v2_job_queue table, which limits the maximum theoretical throughput of Windmill to around 20k requests per second. By allowing the job queue to be sharded across multiple databases (sharding by workspace id first, then by the hash of the uuid of the root job), the scalability of Windmill becomes virtually infinite.
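The shard selection described above can be sketched as follows (hypothetical helper; the point is stable hashing so that a flow's subjobs, which share a root job uuid, always land on the same shard):

```python
import hashlib
import uuid

def pick_shard(workspace_id: str, root_job_id: str, n_shards: int) -> int:
    """Pick a job-queue shard: bucket by workspace id first, then by the root
    job's uuid, so every job of one flow run lands on the same shard."""
    # Use a stable hash (sha256), not Python's salted hash(), so the mapping
    # is identical across processes and restarts.
    h = hashlib.sha256()
    h.update(workspace_id.encode("utf-8"))
    h.update(uuid.UUID(root_job_id).bytes)
    return int.from_bytes(h.digest()[:8], "big") % n_shards
```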

TBD

Cloudflare Workers support (Backend)

Support for running native jobs on Cloudflare Workers.