MSaiRam10 3 days ago [-]
Notebooks as the output format is funny, because notebooks are famously bad for reproducibility: out-of-order execution, hidden state, etc. You're solving "chat isn't reproducible" with a format that also isn't really reproducible.
pplonski86 1 day ago [-]
Python notebooks are not reproducible when used by humans. But when the notebook format is used to store conversations for AI data analysis, it preserves the chat history and is ideal for reproducibility.
hasyimibhar 3 days ago [-]
How does this compare to the open source Deepnote[0]? We used the cloud version (BYOC) at my previous company to replace self-hosted Jupyter notebooks, and it's pretty great.
The goal of MLJAR Studio is to make data analysis easy for people with deep domain knowledge but little programming skill. We do not focus on notebooks; for us, the Python notebook is the compute and storage layer. Our main interface is a chat with an AI data analyst. The conversation can be opened as a classic notebook, but the main UI is a simple chat.
hasyimibhar 7 hours ago [-]
You should check them out, their interface pretty much looks like chat nowadays.
pplonski86 2 hours ago [-]
Thank you! I will check them out. It is worth mentioning that MLJAR Studio is a desktop application, which is easy to install. It runs locally and supports local LLMs, so all data stays safe.
trymamboapp 20 hours ago [-]
"AI saves analysis as notebooks" is fighting the wrong fight ig. The reproducibility issue with notebooks isn't the format. it's out-of-order cell execution and silent kernel state
llm generation makes that worse: the model has no memory of what state existed when it wrote cell 7, and neither does the user.
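To make the hidden-state point concrete, here is a toy sketch (the cell contents and numbers are made up): a notebook kernel is essentially one shared namespace, so the same cells can yield different answers depending on execution order.

```python
# "Cells" exec'd into a shared namespace, like a notebook kernel does.
# Running them out of order silently reuses stale state.
cells = {
    1: "rate = 5",
    2: "total = 1000 + rate",
    3: "rate = 10",  # a later edit; cell 2 is never re-run
}

def run(order):
    ns = {}
    for idx in order:
        exec(cells[idx], ns)
    return ns["total"]

print(run((1, 2, 3)))  # top to bottom: 1005
print(run((1, 3, 2)))  # out of order: 1010 -- same cells, different answer
```

Restarting the kernel and running top to bottom is the only state that a saved .ipynb actually guarantees.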
pplonski86 19 hours ago [-]
The user is not touching the notebook at all. The user just asks questions in natural language, and the AI uses Python to compute the answer; the ipynb notebook format is used to save the conversation.
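Since .ipynb is plain JSON, a conversation maps onto it naturally. A minimal sketch of the idea; the role-to-cell mapping and the `chat_role` metadata key are my assumptions for illustration, not MLJAR Studio's actual format:

```python
import json

def chat_to_notebook(turns):
    """Store a chat as notebook cells: user turns become markdown,
    AI-generated Python becomes code cells, so the analysis can be re-run."""
    cells = []
    for turn in turns:
        if turn["role"] == "user":
            cells.append({
                "cell_type": "markdown",
                "metadata": {"chat_role": "user"},  # hypothetical tag
                "source": [turn["text"]],
            })
        else:
            cells.append({
                "cell_type": "code",
                "metadata": {"chat_role": "assistant"},
                "execution_count": None,
                "outputs": [],
                "source": [turn["text"]],
            })
    # Top-level structure required by the nbformat v4 schema.
    return {"cells": cells, "metadata": {}, "nbformat": 4, "nbformat_minor": 5}

nb = chat_to_notebook([
    {"role": "user", "text": "What is the mean of column price?"},
    {"role": "assistant", "text": "df['price'].mean()"},
])
# json.dump(nb, open("analysis.ipynb", "w")) would make it a real notebook
print(json.dumps(nb)[:60])
```

Jupyter opens such a file like any other notebook, and re-running it top to bottom replays the whole analysis.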
2ndorderthought 3 days ago [-]
This is one of those product areas I would call high-risk without a human in the loop, so I am glad you kept a person in the loop. It's really easy to lose tons of money making decisions based on bad statistics or models. Anyone remember how much money Zillow lost because of automatic time-series models?
I do have concerns about the workflow. Data people aren't usually the best programmers, and models hallucinate and make mistakes, sometimes subtle, sometimes not. Can you think of a way to prevent data scientists from having to be expert code reviewers? I feel like taking away the code gives them a chance to find and fix mistakes in their reasoning instead, but I have no evidence for that.
pplonski86 1 days ago [-]
Human in the loop in data analysis is a really challenging task. We provide the Python code for inspection, so the user can check the details of how the results were produced. Additionally, we run AI on the results; the user needs to check the outputs and the AI-provided insights.
amirathi 3 days ago [-]
Really cool. If somebody doesn't want to adopt a new platform, take a look at open source Jupyter MCP Server[1]. Once integrated with Claude, it can execute code on the live notebook kernel.
I just let Claude write notebooks, run top to bottom, debug & fix errors & only ping me when everything is working.
Thanks for sharing! MLJAR Studio was created for people with domain knowledge but not much technical expertise. For them, setting up a Python environment, installing the required packages, and configuring JupyterLab, the MCP server, and Claude Code might be technically demanding.
MLJAR Studio is a desktop application available for Windows, macOS, and Linux. It creates a Python environment for the user and installs all required packages, so the user can focus on the data rather than fighting technical challenges.
estetlinus 3 days ago [-]
This is one shot with Claude Code. What’s the moat?
2ndorderthought 3 days ago [-]
Not the OP or affiliated, but:
You really shouldn't, and often legally can't, send data (or information about your data) to third parties. Maybe schemas are okay, but one mistake and your company can be in serious trouble. So local models are a good idea.
This is a safer workflow, if implemented correctly, to prevent certain types of mistakes when LLMs inevitably hallucinate or slip up.
That said, 200 USD? I don't believe the value is there. Someone can run a local model very easily, with a single command-line call, and do this themselves for free.
arriemeijer 3 days ago [-]
My guess is you can't.
The best you can do is show them the code and hope they catch mistakes. Data scientists who can't read code probably shouldn't be running AI-generated analysis on real data.
jiggunjer 3 days ago [-]
IME "real data work" doesn't involve notebooks.
msp26 3 days ago [-]
I like starting most of my projects in marimo notebooks now and slowly moving parts into the main codebase + DB.
By the end I might remove the notebook entirely, but usually I keep it for some visualisation and for running things as a CLI tool.
amarcheschi 3 days ago [-]
It looks like the fakest, most AI-sloppy site I've seen, together with reviews that don't seem real.
[0] https://github.com/deepnote/deepnote
[1] https://github.com/datalayer/jupyter-mcp-server
Oh, and it also hijacks the back button.