Hacker News on Gopher (unofficial)

COMMENT PAGE FOR: Launch HN: Dedalus Labs (YC S25) – Vercel for Agents

kitan232 wrote 1 hour 48 min ago:
Congratulations on the launch. I've been trying this out for the past hour and really like how easy it is to host your own MCP servers. I'd love to see more public MCPs on the marketplace. Also, I was wondering when you plan to support auth?

windsor wrote 1 hour 30 min ago:
Thanks for trying us out! Let me know if you run into any issues with deployment or the like. If you want a clean MCP template to get up and running, check out this example: [1]

Auth is a big one for us, and we're working hard to provide a robust auth experience that is easy to use for both LLM agents and human users. One of our goals is to help shape the direction the community takes with MCP. We'll be launching our auth solution around the end of September. I'm personally really excited to tackle this problem.

[1]: https://github.com/dedalus-labs/brave-search-mcp

mnafees wrote 2 hours 27 min ago:
Congratulations on the launch! I've been writing Go for the past 4 years, and I'd strongly suggest avoiding Stainless for auto-generating Go SDKs. Some of the issues I've run into:

- Awkward package naming (e.g., githubcomdedaluslabsdedalussdkgo)
- Methods with unnecessary complexity, including excessive use of reflection for JSON handling
- Other codegen quirks that make the SDK harder to use and maintain

From experience, I'd recommend either using another code generator or hand-writing the Go SDK for a cleaner, more idiomatic developer experience.

windsor wrote 1 hour 54 min ago:
Thanks! What's your favorite SDK generator for Go? What matters most to you when working with Go SDKs?

just_human wrote 2 hours 46 min ago:
Congrats on the launch; this looks interesting! I love how easy it is to combine local code "tools" with remote MCP servers. The marketplace looks promising, but some curation would help: many of the servers have no descriptions and link to private GitHub repos. Neat vision, and I look forward to trying this.

windsor wrote 1 hour 56 min ago:
Thanks! Really glad you noticed this feature. Mixing client-side and server-side tool calls was something we spent a lot of time thinking through. The current SOTA, e.g. OpenAI's Responses API or Anthropic's Computer Use API, basically mandates that server-side tool results return directly, while client-side tool results have to be manually parsed and executed by the user (for obvious security reasons). As a result, it was extremely unclear how a user could chain together tool calls that mix local and remote tools. We wanted to close this DX gap, which surprisingly had no real incumbent solution. Users should be able to just define tools and get back clean responses. For power users, we still support manual JSON parsing for full low-level control, but our belief is simple: developers should spend their time building, not doing plumbing work like post-processing tool results.
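To make that mixed local/remote flow concrete, here is a rough Python sketch of the pattern being described: pass local callables and remote MCP server slugs to a single runner call. The import path, class names, parameters, and result attribute below are assumptions for illustration, not the documented Dedalus SDK surface.

    # Rough sketch only; names are assumed, not taken from the published SDK.
    from dedalus_labs import Dedalus, DedalusRunner  # assumed import path

    def get_weather(city: str) -> str:
        """Client-side tool: executed locally, result fed back to the model."""
        return f"22 C and clear in {city}"

    client = Dedalus()               # assumed to read DEDALUS_API_KEY from the environment
    runner = DedalusRunner(client)

    result = runner.run(
        input="Compare today's forecast for Austin with get_weather('Austin')",
        model="openai/gpt-4o",                          # model id format is assumed
        tools=[get_weather],                            # client-side: runs locally
        mcp_servers=["dedalus-labs/brave-search-mcp"],  # server-side: runs remotely
    )
    print(result.final_output)                          # attribute name is assumed

The point of the exchange above is that one loop resolves both kinds of tool calls, so the caller never parses tool-call JSON by hand unless they opt into that lower-level path.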
mifydev wrote 3 hours 2 min ago:
Congrats on the launch! I'm curious: do you have to store the tool inputs and outputs on the server side while either side is waiting for a response? I'm building a specialized coding agent for integrations, and I had to avoid a stateful API because I don't want to store user code.

windsor wrote 2 hours 7 min ago:
We tried to keep things simple, so our runner utility class is currently stateless. However, it's highly extensible, and we can support stateful logic if we want to. For instance, we have a concept called "Policy" in our SDKs, which is basically a user-defined callback function that runs after each runner step (see our docs for more info). You can build some pretty advanced use cases with this, e.g. executing conditional database calls on a per-step basis (a rough sketch of this idea appears at the end of the page). The code for the runner is open source, in case you want to check out how we did it: [1]

[1]: https://github.com/dedalus-labs/dedalus-sdk-python/blob/main...

muratsu wrote 5 hours 22 min ago:
Congrats on the launch. I see that you're charging 5% on balance reloads. This pricing model seems to be getting popular across multi-LLM applications. I was curious how you implemented it, or are you just passing on OpenRouter's 5%?

windsor wrote 5 hours 7 min ago:
Good eye. A ~5% surcharge on prepaid credits is the standard model right now for most multi-LLM services. We actually do not use OpenRouter internally, so this number is flexible. One thing I'll note is that we try to be as upfront and transparent about our platform fee as possible so that no one is surprised.

muratsu wrote 4 hours 44 min ago:
Oh, interesting. I previously looked into implementing it myself, but it seemed like it would require a lot of effort. I would love to connect and learn more about your implementation. What's the best way to reach out to you? My email is available on my profile.
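The per-step "Policy" callback windsor mentions a few comments up might look roughly like this. Everything here is an assumption made for illustration: the callback's parameter, the step object's fields, the way a policy vetoes a step, and the MCP server slug are not taken from the Dedalus docs.

    # Rough sketch of a per-step policy; hook and field names are assumed.
    # Reuses the assumed Dedalus/DedalusRunner names from the sketch above.
    from dedalus_labs import Dedalus, DedalusRunner  # assumed import path

    audit_log = []

    def audit_policy(step):
        # Assumed to be invoked once after every runner step, with the tool
        # calls that step produced.
        for call in getattr(step, "tool_calls", []):
            audit_log.append((call.name, call.arguments))  # per-step side effect
            if call.name == "drop_table":                  # hypothetical guardrail
                raise RuntimeError("blocked a destructive tool call")

    client = Dedalus()
    runner = DedalusRunner(client)
    result = runner.run(
        input="Archive invoices older than 90 days",
        model="openai/gpt-4o",
        mcp_servers=["your-org/postgres-mcp"],  # hypothetical server slug
        policies=[audit_policy],                # parameter name is assumed
    )

Because the callback executes wherever the runner runs, state like audit_log stays with the caller rather than on the hosted side, which fits the stateless description in windsor's reply above.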