peterkelly 7 hours ago [-]
I've always been of the view that for a workflow language, you should use a proper, Turing-complete functional language that gives you all the usual flexibility for transformations on intermediate data, while also supporting things like automatic parallelisation of external, compute-intensive tasks.
The gap in flexibility between DAG-only and a full language designed for the task is a significant one.
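For what it's worth, the "automatic parallelisation" idea can be sketched in a few lines of Python (a toy illustration under assumed node names, not any particular engine's API): once a node's dependencies have finished, it is ready, and all ready nodes in a wave can run concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical DAG: node -> set of its dependencies
deps = {"fetch_a": set(), "fetch_b": set(),
        "merge": {"fetch_a", "fetch_b"}, "report": {"merge"}}

def run_parallel(deps, task):
    """Execute independent nodes concurrently, wave by wave."""
    done, waves = set(), []
    with ThreadPoolExecutor() as pool:
        while len(done) < len(deps):
            # every node whose dependencies are all finished is ready now
            ready = [n for n in deps if n not in done and deps[n] <= done]
            if not ready:
                raise ValueError("cycle detected in DAG")
            list(pool.map(task, ready))  # this wave runs in parallel
            done |= set(ready)
            waves.append(sorted(ready))
    return waves

waves = run_parallel(deps, lambda n: None)
# fetch_a and fetch_b land in the same wave; merge and report each wait
```

The scheduling falls out of the dependency structure alone, which is the property a compiler for a full workflow language would exploit.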
ofrzeta 5 hours ago [-]
I guess that ship has sailed, and maybe it's nitpicking, but I find it a bit unfortunate to call a new programming language "Rex" when "Rexx" has already existed for several decades.
smartmic 4 hours ago [-]
I wonder, isn't any Lisp, be it Clojure, Scheme, etc., exactly suited for such tasks?
Spark in Scala does the ETL part of this well. The orchestration part is another story.
antonvs 6 hours ago [-]
Do you implement a DAG within your system to act as a kind of well-defined backbone for analysis and execution, or do you dispense with (explicit) DAGs entirely?
Moosifer 3 hours ago [-]
[dead]
panda888888 5 hours ago [-]
How is this different from Airflow or commercial data orchestration tools, like Astronomer, Dagster, Prefect, etc.?
hyperbovine 4 hours ago [-]
It's buggier and less functional.
Keyframe 3 hours ago [-]
quite a statement considering which were mentioned!
kovariance 5 hours ago [-]
I consider YAML as a programming language an anti-pattern (see AWS Step Functions): it's very difficult to read, debug, and test. It's better to use a real programming language that compiles into a DAG (e.g. Temporal, Dagger.io).
SkyPuncher 6 hours ago [-]
I'm working on something similar as a side project, and I'm frustrated by the lack of repeatability in my LLM flows. 90% of my code is AI-written, but most of my guidance to LLMs is not particularly specific: "make sure you've read this file", "how does that match against existing patterns", "what's the performance like".
I've ended up building my workflow engine directly in Python, despite YAML being the default choice for LLMs.
I found that YAML had some drawbacks:
* LLMs don't have an inherent understanding of YAML conventions. They tend to be overly verbose. Python code solved this because "good" code is generally as short as you need.
* YAML isn't really composable. Yes, you can technically compose it, but you'll be fighting the LLM the entire time. Python solved this because the LLM knows how to decouple code.
* I want _some_ things to be programmatic still. Having Python solves that.
* Pretty much any programming language would do. Python just feels like the default for LLM-centric code.
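The composability point above is easy to see in code. A hypothetical three-step flow that would need anchors or includes to express in YAML is just function composition in Python (step names here are made up for illustration):

```python
from functools import reduce

# Hypothetical steps for an LLM review flow; each takes and returns a context dict
def read_file(ctx):
    return {**ctx, "source": f"contents of {ctx['path']}"}

def check_patterns(ctx):
    return {**ctx, "patterns_ok": "contents" in ctx["source"]}

def summarize(ctx):
    return {**ctx, "summary": f"patterns_ok={ctx['patterns_ok']}"}

def pipeline(*steps):
    """Compose steps left to right into a single callable."""
    return lambda ctx: reduce(lambda acc, step: step(acc), steps, ctx)

review = pipeline(read_file, check_patterns, summarize)
result = review({"path": "main.py"})
```

Because steps are plain functions, an LLM can refactor, reorder, or unit-test them with the same moves it uses on any other code.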
b4rtaz__ 6 hours ago [-]
It’s interesting to see something new in this space, especially since some people claim that flowcharts will be replaced by AI automation or AI-generated code.
This particular example aside, I don’t think it being derivative and simplified is necessarily bad. Libraries that are popular today were written for humans and reinforced by LLMs via training. It’s unlikely they represent the ideal interaction surface for an agent.
There was a study recently showing that LLMs prefer resumes written by LLMs rather than by humans. It stands to reason they would prefer APIs written by LLMs too.
This is probably the early days of intentionally simplified agentic semantic primitives like "DAG Workflow", where the answer to "why not Temporal?" is that LLMs prefer different things than humans do.
bognition 4 hours ago [-]
Production Ready?
That is a pretty bold claim for a repo that has existed for a few days and has 0 issues, PRs, etc...
tibbar 8 hours ago [-]
I was expecting to see some verbose LLM output, but actually the code has a distinctly hand-crafted feel. Nice to see! I'm not sure "production ready" is a safe claim 7 commits into a project ;)
I've seen LLMs include that exact "production-ready" claim on code they generate. But of course it gets that from its training data.
zaptheimpaler 5 hours ago [-]
I have several sources of data I want to fetch, retry, and process periodically: exporting Claude chats into .md files that go to Obsidian, fetching Garmin data from the API and processing it for a custom tool, exporting replays for a game, maybe even running some browser automation to get bank CSVs. I have ad-hoc Python scripts for all of this but no central way to manage them, schedule them, handle errors and retries, store the original data and processed versions, resume from the last point, etc. Is a workflow engine useful for something like that?
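Even without a full engine, the retry part of that wish list is a small decorator (a sketch with a deliberately short delay; tune attempts and backoff for real use):

```python
import functools
import time

def retry(attempts=3, delay=0.01):
    """Retry a flaky task a fixed number of times before giving up."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            for attempt in range(1, attempts + 1):
                try:
                    return fn(*args, **kwargs)
                except Exception:
                    if attempt == attempts:
                        raise  # out of attempts: surface the error
                    time.sleep(delay)  # back off before the next try
        return inner
    return wrap

calls = {"n": 0}

@retry(attempts=3)
def flaky_fetch():
    # Simulated transient failure on the first two calls
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "ok"

result = flaky_fetch()
```

Scheduling, checkpointing, and storage are where a real engine starts to earn its keep over a pile of decorated scripts.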
momojo 3 hours ago [-]
Check out Airflow and Dagster.
I've used Dagster but I can't compare it to Airflow. In terms of DX, though, I've found Dagster pretty easy to use. Instead of writing their own DSL, they have a Python library that lets you tag your pre-made methods as @ops and string them together into a DAG.
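A toy version of that tagging pattern (the shape of the idea, not Dagster's actual API): a decorator registers each function with its upstream dependency names, and a tiny runner resolves them in order.

```python
OPS = {}  # name -> (function, names of upstream dependencies)

def op(deps=()):
    """Tag a plain function as a DAG node with named dependencies."""
    def register(fn):
        OPS[fn.__name__] = (fn, tuple(deps))
        return fn
    return register

@op()
def extract():
    return [1, 2, 3]

@op(deps=["extract"])
def transform(extract):
    return [x * 10 for x in extract]

@op(deps=["transform"])
def load(transform):
    return sum(transform)

def run(target):
    """Resolve dependencies recursively, then call the op with their results."""
    fn, deps = OPS[target]
    return fn(**{d: run(d) for d in deps})

total = run("load")
```

The appeal is exactly what the comment describes: the functions stay ordinary Python, and the DAG falls out of the decorator metadata.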
tedchs 6 hours ago [-]
How does this compare to Temporal? That seems to be the current baseline for application-oriented workflow engines.
purpleidea 6 hours ago [-]
Here's a different kind of workflow engine with a proper DSL. It turns out config management is the same problem as workflow engines, if you use my modern definition of config management.
I have a project in this space that I've run many thousands of jobs through. It's solid and full featured. Feel free to connect: https://stepwise.run/
kamikazechaser 4 hours ago [-]
Just looking at the features, this is pretty cool!
taybin 8 hours ago [-]
What makes it production ready? What's the code coverage on your tests? There are only seven commits in this repo as of this comment.
Hasnep 6 hours ago [-]
The LLM generated the words "production ready" so it must be true!
tbrownaw 4 hours ago [-]
These are always a fun couple-day project. :)
_ZeD_ 7 hours ago [-]
how it compares to airflow?
colton_padden 7 hours ago [-]
Was going to ask the same thing. The orchestration space already has some very well established frameworks like Airflow and Dagster. Would be curious to see the pros and cons.
saltyoldman 7 hours ago [-]
I think the future of replacements for well-established frameworks written in Python etc. is zero-dependency binaries (from Rust or Go) that require so little configuration and tuning that they "just work".
That being said, that's not this project.
topspin 5 hours ago [-]
Agreed. Right now, if I needed "workflow" for a greenfield that could tolerate some risk, I'd look at https://www.restate.dev/ which matches your model of a self contained binary.
subhobroto 6 hours ago [-]
This is a good exercise, but IMHO, when you really start using a workflow engine for production use cases, you need a proper, Turing-complete programming language as a DSL.
There used to be a project called Benthos (since acquired and rebranded by Redpanda in 2024) that was amazing, that you might want to gain some inspiration from.
However, durable workflows have also gained popular acceptance as functional design reaches a wider audience.
While Temporal is the most popular choice when it comes to durable workflows, DBOS (cofounded by the father of PostgreSQL) is my personal favorite.
At the moment, orchestration in DBOS has certain gaps - you might very well consider spending your effort on closing those gaps. The value there would be phenomenal!
FelipeCortez 6 hours ago [-]
I love Temporal and am DBOS-curious. what do you think DBOS does better?
subhobroto 5 hours ago [-]
Hi Felipe! Just point your agent at https://docs.dbos.dev/python/prompting and give it a go - you can play around with it as much as you want and solve real problems you care about, rather than me lecturing you about it :)
That said, DBOS really makes durable workflows accessible and approachable. Having already used Temporal, I think you'll really appreciate how quickly you can get started with DBOS. I forget if they support SQLite, but if you have a PostgreSQL server set up, you really don't need anything else to write your first few DBOS durable workflows (vs. needing a Temporal server or cluster).
Let me know if I got you interested to try it out. I first learned about Temporal from Mitchell Hashimoto as they were using it for Hashicorp Cloud. Eventually I discovered DBOS and now all my personal projects are on DBOS.
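The core idea behind "durable" that both tools share can be shown in miniature (a toy SQLite sketch, nothing like DBOS's or Temporal's real APIs): record each completed step, so that replaying the workflow after a crash skips work that already finished.

```python
import sqlite3

# In-memory DB for the sketch; a real system would persist this
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE IF NOT EXISTS done (step TEXT PRIMARY KEY)")

executed = []  # tracks which steps actually ran, for illustration

def durable_step(name, fn):
    """Run fn once; on replay, skip steps already recorded as done."""
    if db.execute("SELECT 1 FROM done WHERE step = ?", (name,)).fetchone():
        return  # already completed in a previous run: no re-execution
    fn()
    executed.append(name)
    db.execute("INSERT INTO done VALUES (?)", (name,))
    db.commit()  # checkpoint before moving on

def workflow():
    durable_step("charge_card", lambda: None)
    durable_step("send_email", lambda: None)

workflow()  # first run executes both steps
workflow()  # replay: both are checkpointed, so neither runs again
```

The real systems layer a lot on top (step results, queues, recovery), but the checkpoint-and-skip loop is the heart of it.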
esafak 8 hours ago [-]
I don't see any references to existing orchestrators, which are way more complete, so I presume you did this as an exercise?
Just seeing YAML used for workflows in this age makes me automatically nope out.
afshinmeh 8 hours ago [-]
Curious, what format would you prefer to use to represent a workflow instead of YAML?
esafak 7 hours ago [-]
Type-safe code. Workflows are not configuration! If I wanted YAML hell I could stick to Github Actions.
But that's only the start. There are a lot of other things I would expect of a new workflow orchestrator in 2026 so if you are not comparing yourself to the competition you probably don't know what you're getting yourself into.
afshinmeh 7 hours ago [-]
Yeah, that makes sense. I looked at a few workflow orchestrators, and I'm building something I will release soon. My thinking is that the "workflow engine" should be an abstraction that takes the input and executes the steps; "what" you use to define that workflow is probably the SDK layer. But I can certainly see the value in using type-safe code to define it as opposed to a YAML file.
I'm mainly focusing on the portability aspect of it (e.g. use TS/Python/etc. to define the workflow/steps, or just a simple YAML file).
verdverm 7 hours ago [-]
Are you planning to map those varied definitions onto varied orchestrators?
afshinmeh 7 hours ago [-]
Sort of. My thinking is that the input to define the workflow should be anything you prefer to use (TS, Go, YAML, etc.), and the orchestrator's job is to model that and execute it, given your deployment model.
verdverm 5 hours ago [-]
There are a number of widely used orchestrators; it would be nice to deploy to one of those vs. a new kid on the block.
afshinmeh 4 hours ago [-]
I'm mainly looking at Rust based projects and haven't been able to find something to use out of the box, without hacky RPC/Shell execs. Curious if you have any suggestions?
esafak 7 hours ago [-]
[dead]
535188B17C93743 4 hours ago [-]
[dead]
blobmty 11 hours ago [-]
DAG Workflow Engine
A production-ready DAG (Directed Acyclic Graph) workflow engine driven by a YAML DSL. Validates, executes, and visualizes workflows with support for parallel execution, retries, conditional branching, batch iteration, and pluggable actions.
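The "validates" part usually means cycle detection; a minimal sketch of the standard check (Kahn's algorithm), independent of this project's actual code:

```python
from collections import deque

def validate_dag(deps):
    """Return a topological order, or raise if the graph has a cycle."""
    indegree = {n: len(d) for n, d in deps.items()}
    dependents = {n: [] for n in deps}
    for node, upstream in deps.items():
        for up in upstream:
            dependents[up].append(node)
    queue = deque(n for n, k in indegree.items() if k == 0)
    order = []
    while queue:
        node = queue.popleft()
        order.append(node)
        for child in dependents[node]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    if len(order) != len(deps):
        # some nodes never reached indegree 0: they sit on a cycle
        raise ValueError("workflow graph contains a cycle")
    return order

ok = validate_dag({"a": [], "b": ["a"], "c": ["a", "b"]})
```

The same traversal that proves acyclicity also yields an execution order, which is why engines typically validate and schedule in one pass.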
I recommend checking out https://github.com/peterkelly/rex and also my PhD thesis on the topic https://www.pmkelly.net/publications/thesis.pdf.
https://insitro.github.io/redun/
P.S. I'm the author of a similar solution:
* https://github.com/nocode-js/sequential-workflow-designer
* https://github.com/nocode-js/sequential-workflow-machine
https://github.com/purpleidea/mgmt/
https://github.com/swetjen/daggo