  • It’s professional development of an emerging technology. You’d rather bury your head in the sand and say it’s not useful?

    The only reason not to take it seriously is to reinforce a worldview, instead of looking at how experts in the field are leveraging it or discussing the pitfalls you've encountered.

    The marketing-driven AI hype cycle did the technology an injustice, but that doesn’t mean it isn’t useful for accelerating deterministic processes.


  • It depends on the methodology. If you’re trying to do a direct port, you’re probably approaching it wrong.

    What matters most to the business is data: your business objects and business logic are what make the business money.

    If you focus on those parts and port them a portion at a time, you can substantially lower your tech debt and improve the developer experience by generating verifiable greenfield code that follows your organization’s modern best practices.

    One of the main reasons many users complain about the quality of code edited by agents comes down to today’s naive tooling: most agents use sloppy find/replace techniques with regex and basic user tools. As AI tooling improves, we’re seeing agents given more IDE-like tools with intimate knowledge of your codebase, using things like code indexing and ASTs. Look into Serena, for example.
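
    A minimal sketch of the difference, using Python’s stdlib `ast` module (the source snippet and names are illustrative). A regex rename clobbers a string literal that happens to contain the same word, while an AST-aware rename only touches real identifier nodes:

    ```python
    # Regex find/replace vs AST-aware editing (illustrative example).
    import ast
    import re

    source = '''
    def fetch(url):
        return "fetch failed for " + url

    result = fetch("https://example.com")
    '''

    # Naive regex rename: also mangles the word inside the string literal.
    regex_version = re.sub(r"fetch", "fetch_url", source)

    # AST-aware rename: rewrite only FunctionDef and Name nodes.
    class Renamer(ast.NodeTransformer):
        def visit_FunctionDef(self, node):
            if node.name == "fetch":
                node.name = "fetch_url"
            self.generic_visit(node)
            return node

        def visit_Name(self, node):
            if node.id == "fetch":
                node.id = "fetch_url"
            return node

    tree = Renamer().visit(ast.parse(source))
    ast_version = ast.unparse(tree)  # requires Python 3.9+

    print("fetch failed" in ast_version)    # True: string literal untouched
    print("fetch failed" in regex_version)  # False: regex mangled the string
    ```

    Tools like Serena extend this idea with language servers and symbol indexes, so edits land on semantic entities rather than text matches.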





  • While it’s possible to see gains in complex problems through brute force, learning more about prompt engineering is a powerful way to save time, money, tokens and frustration.

    I see a lot of people saying, “I tried it and it didn’t work,” but have they read the guides or just jumped right in?

    For example, if you haven’t read the Claude Code guide, you might never have set up MCP servers or taken advantage of slash commands.
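
    As I understand the current Claude Code conventions, a custom slash command is just a markdown file under `.claude/commands/` in your repo (filename and contents here are illustrative):

    ```markdown
    <!-- .claude/commands/review.md — invoked in a session as /review -->
    Review the staged diff for bugs, missing tests, and deviations from
    our style guide. List findings by severity before suggesting fixes.
    ```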

    Your CLAUDE.md might be trash, and maybe you’re using @file references wrong, blowing tokens or biasing your context.
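
    A decent CLAUDE.md is short and concrete rather than a dump of the whole codebase. A sketch (every command and convention below is made up for illustration):

    ```markdown
    # CLAUDE.md
    ## Build & test
    - Build: `npm run build` — Test: `npm test` (hypothetical commands)
    ## Conventions
    - TypeScript strict mode; no default exports
    - Every bugfix ships with a regression test
    ## Gotchas
    - The legacy `billing/` module is frozen; do not refactor it
    ```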

    LLM context windows can only scale so far before you start seeing diminishing returns, especially if the model or tooling is compacting them.

    1. Plan first, using planning modes to help you decompose the plan
    2. Have the model keep track of important context externally (like in markdown files with checkboxes) so the model can recover when the context gets fucked up
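
    The second step can be as simple as a task file the agent checks off and re-reads after a compaction (filename and tasks are illustrative):

    ```markdown
    <!-- TASKS.md — durable context that survives a trashed window -->
    - [x] Extract billing logic into its own module
    - [x] Add unit tests for proration edge cases
    - [ ] Port discount codes — blocked on a schema question, see notes
    ```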

    https://www.promptingguide.ai/

    https://www.anthropic.com/engineering/claude-code-best-practices

    There are community guides that take this even further, but these are some starting references I found very valuable.


  • If you’re not already messing with MCP tools that do browser orchestration, you might want to investigate that.

    For example, if you set up Puppeteer, you can have a natural conversation about the website you’re working on, and the agent can orchestrate your browser for you. The implication is that the agent can get into a feedback loop on its own to verify the feature you’re asking it to build.
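
    The shape of that feedback loop, sketched in Python with the browser check stubbed out (a real setup would drive Puppeteer or a similar tool through MCP instead of the stub):

    ```python
    # Sketch of the build → verify → retry loop an agent can run once it
    # can drive a browser. check_page is a stub standing in for a real
    # Puppeteer/MCP call that loads the page and inspects the DOM.

    def check_page(attempt: int) -> bool:
        # Stub: pretend the feature only renders correctly on the third try.
        return attempt >= 3

    def build_verify_loop(max_attempts: int = 5) -> int:
        for attempt in range(1, max_attempts + 1):
            # 1. Agent edits the code (omitted here).
            # 2. Agent drives the browser to verify the change.
            if check_page(attempt):
                return attempt  # feature verified, loop exits
            # 3. Failure details feed back into the next edit.
        raise RuntimeError("gave up after max_attempts")

    print(build_verify_loop())  # → 3
    ```

    The key point is step 3: the agent sees the actual rendered result, not just its own diff, so it can self-correct without you pasting screenshots back in.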

    I don’t want to make any assumptions about additional tooling, but this is a great one in this space: https://www.agentql.com/