Hacker News | muzani's comments

I find the inverse as well - asking an LLM to be chatty results in much better output. I've experimented with a few AI personalities, and telling the model to be careful, etc. matters less than telling it to be talkative.

It's why it starts with "You're absolutely right!" It's not there to flatter the user; it's a cheap way to steer the response into a space where the model actually uses the correction.
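That steering trick can be sketched as prefilling the assistant's opening words, so the model continues from an "accept the correction" stance instead of arguing. This is a minimal sketch, assuming the common chat-message format; `build_messages` and the example strings are hypothetical.

```python
# Sketch: steer a chat model by seeding the start of its own reply.
# Anthropic's API, for instance, lets you submit a partial assistant
# turn that the model then continues from.

def build_messages(history, correction, prefill="You're absolutely right!"):
    """Append the user's correction, then prefill the assistant reply so
    the continuation incorporates the correction rather than defending
    the earlier answer."""
    return history + [
        {"role": "user", "content": correction},
        {"role": "assistant", "content": prefill},
    ]

msgs = build_messages(
    history=[
        {"role": "user", "content": "Sort this list for me."},
        {"role": "assistant", "content": "Here is the sorted list..."},
    ],
    correction="That's descending; I asked for ascending.",
)
```

The prefill costs only a few tokens, which is why it's "cheap" relative to elaborate system-prompt instructions.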

I think CLIs should be the default where supported, i.e. the AI has already been trained on the Supabase, Heroku, and AWS CLIs.

For everything else, MCP.
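The CLI-first approach can be as small as one generic tool that shells out, instead of a per-endpoint MCP server. A minimal sketch, under my own assumptions: the allowlist, function name, and error format are all hypothetical, and a real agent harness would sandbox this far more carefully.

```python
# Sketch: expose an allowlisted CLI to an agent as a single tool.
# The model already knows these CLIs from training, so no schema is needed.
import shlex
import subprocess

ALLOWED = {"git", "echo", "ls"}  # never let the model run arbitrary binaries

def run_cli(command: str, timeout: int = 30) -> str:
    """Run an allowlisted command and return its output as the tool result."""
    argv = shlex.split(command)
    if not argv or argv[0] not in ALLOWED:
        return f"error: '{argv[0] if argv else ''}' is not an allowed command"
    proc = subprocess.run(argv, capture_output=True, text=True, timeout=timeout)
    return proc.stdout if proc.returncode == 0 else f"error: {proc.stderr}"
```

One tool definition covers every subcommand the CLI supports, which is the practical appeal over hand-maintaining an MCP schema per endpoint.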


It's not necessarily a faux pas. Community types come to /ask first, but some go to /show first.

It's more that releases have become the new devlog these days. Someone who only talks about an idea and hasn't written a line of code will likely never build it. If they're open to reading comments and discussing feedback, it's cool.

But using HN purely as a marketing channel is rather rude.


Yeah, it's the opposite of showdead. Many of these get flagged automatically anyway. But I would like to read what trolls and throwaways want to say, so I turn on showdead. Maybe some people want more filters.

I joined a few low-code hackathons back in 2024, before all this agentic stuff.

They're fully dead from what I see. With low code, you'd drag a component onto the screen, click it, look for a field (which may have a different name from the one in your data), fill it in, and then spend 30 minutes trying to align it on screen.

With agents, you just tell them what to do. Draw boxes on a piece of paper, take a photo with Claude on your phone, and you'll have a functioning UI.

If you want to modify layouts, you can do it from your phone, straight from the toilet seat.

The other big feature of low code is maintaining API specs: you'd tell it what the tables are, what to connect to, the data objects, and all that. That's another thing AI does better.


ChatGPT makes sense though. They shouldn't be dumping the whole chat into memory the entire time; they're compressing it in some way, and if they do it on device, it saves the cost of doing it in the cloud. ChatGPT's memory feature is a good leap ahead of the competition, and it could be due to things like this, which may query memories of conversations from long ago, not just recent ones.

Tools like Cursor do something very similar. They claim to be using 2 million tokens or something, but those are cheap tokens that make the codebase more searchable.
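The compression idea can be sketched as compaction: keep recent turns verbatim and fold older turns into a short note. This is a guess at the general technique, not how ChatGPT or Cursor actually implement it; `summarize` here is a stand-in for what would really be an LLM call.

```python
# Sketch: compaction-style chat memory. Old turns are replaced by a
# summary so the context stays small while long-ago facts remain queryable.

def summarize(turns):
    """Stand-in for an LLM summarization call over the old turns."""
    return "Summary of %d earlier turns." % len(turns)

def compact(history, keep_recent=4):
    """Keep the last `keep_recent` turns verbatim; compress the rest."""
    if len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return [{"role": "system", "content": summarize(old)}] + recent

history = [{"role": "user", "content": f"message {i}"} for i in range(10)]
ctx = compact(history)  # 10 turns shrink to 1 summary + 4 recent turns
```

Doing the `summarize` step with a small on-device model is exactly where the cloud-cost saving in the parent comment would come from.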


The startup game is about building assets and then cashing out on them during exit.

Assets are harder to measure. Facebook used to say something silly like every user was worth $100. That sounded ridiculous for a completely free app, but over a decade later, the company is worth more than that per user. Revenue is an easier way of measuring assets than profit.

Profit doesn't really matter. It gets taxed. But it's not about dodging taxes; it's because sitting on a pile of money is inefficient. They can hire people. They can buy hardware. They can give discounts to users with high CLTV. They can acquire instead of building. It's healthy to have profit close to $0, if not slightly negative. If revenues fall or costs increase, they can make up for the difference by just firing people or cutting unprofitable projects.

Also when they're raising money, it makes absolutely no sense to be profitable. If they were profitable, why would they raise money? Just use the profits.


At least they have customer service who can give us refunds, right?

Sometimes we try out of mischief, but this might only work on the most primitive of LLMs, like GPT-5.3 or various self-hosted ones. The newer ones are more resistant to such prompt hacks.
