6 Comments
Om Prakash Pant

This felt less about “vibe coding” and more about removing the fear of starting.

Once the cost of trying drops, you stop overthinking the perfect approach and just move. And in that movement, you actually learn what matters versus what you thought mattered.

But there's also a flip side I keep seeing: it's easy to feel productive without really understanding what's happening underneath. That gap shows up later.

So yeah: better flow, faster starts. But staying with it long enough to build real depth is still the harder part.

Privacat

Oh, I’ve started quite a bit — this is probably my 10th project, and my second or third big project. For me, it was more about bigger-picture thinking and scale, which is something that, until now, I hadn’t really been considering too hard.

FWIW, I plan to stick with this. It’s been a daily-use product for me since it went live. I was just using it now, actually, and I’ve found that it speeds up my research process 10x — I can now ask a broader set of research questions because I no longer have to slog through and find everything to skim and read over.

Karen Spinner

I will reach out once I’ve started using Obsidian, this looks great! Also, the overall approach you outlined, which I’m simplifying to “just nuke the legacy code,” is often a lot more efficient than refactoring given how quickly AI coding tools can work. 💡

Privacat

Interestingly, I didn't nuke everything. A lot of what's in the new core analysis engine was already in feedforwardv1. Since there was actual utility in what I had, Claude wanted to leverage as much of that as made sense.

Karen Spinner

Glad that Claude was able to use a good chunk of your existing logic! I liked the original FeedForward even without Obsidian!

Luis Bruno

One architecture decision I've made in my own pipeline thingy is to build a state engine in the DB: fetching each raw RSS feed is one step, parsing it and extracting info is another, running the classifier is another, etc.

Expose each of these steps via MCP and you can then orchestrate via the LLM agent. In other words, you should have code for the basic operations, but scripts that run multiple steps in a row are probably an anti-pattern if you have an LLM anywhere in your stack: use it for the orchestration.

But also store the state of each step in the DB: “I have refreshed this RSS feed” is state, and so is “I have parsed it” and “I have downloaded this full article linked from the parsed feed”, etc. — not just the result of each step, but whether you've run it, plus a link/DB primary key to the results, the input, etc.
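To make the idea concrete, here's a minimal sketch of that DB state engine using SQLite. The schema, step names, and function names are my own assumptions for illustration, not Luis's actual code: each step records its status and a key pointing at its result, and an orchestrator (the LLM agent, via MCP, in his setup) can query what's done and pick the next step.

```python
import sqlite3

# Assumed schema: one row per (item, step), tracking status and a
# primary-key link to wherever that step's output lives.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE steps (
    item_id   INTEGER,   -- which feed/article this step ran on
    name      TEXT,      -- 'fetch', 'parse', 'classify', ...
    status    TEXT,      -- 'done' or 'failed'
    result_id INTEGER,   -- link to the step's output row
    PRIMARY KEY (item_id, name)
);
""")

def record_step(item_id, name, status, result_id=None):
    # Upsert so a re-run overwrites the previous state for this step.
    conn.execute(
        "INSERT INTO steps (item_id, name, status, result_id) "
        "VALUES (?, ?, ?, ?) "
        "ON CONFLICT(item_id, name) DO UPDATE SET "
        "status = excluded.status, result_id = excluded.result_id",
        (item_id, name, status, result_id),
    )

def next_step(item_id, pipeline=("fetch", "parse", "classify")):
    # The orchestrator asks "what has already run?" and returns the
    # first step in the pipeline that hasn't completed yet.
    done = {row[0] for row in conn.execute(
        "SELECT name FROM steps WHERE item_id = ? AND status = 'done'",
        (item_id,))}
    for name in pipeline:
        if name not in done:
            return name
    return None  # everything has run

record_step(1, "fetch", "done", result_id=42)
print(next_step(1))  # 'fetch' is done, so 'parse' is next
```

Each basic operation (fetch, parse, classify) stays as plain code; only the decision of *which* step to run next is left to the orchestrating agent, which matches the "don't script multi-step runs" point above.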