Hacker News

I think open source still has an important advantage in the pro environment despite being less convenient: the possibility of inserting things into the middle of the generation process, like ControlNet, or custom LoRAs with new concepts or characters.

Plus, with local generation you're not limited by platform moderation, which can be overly strict and arbitrary and fail with false positives.

Yes, ComfyUI can be intimidating at first compared to an easy-to-use ChatGPT-like UI, but the lack of control makes me feel these closed tools still won't be used in professional productions in the short term, but rather in small YouTube channels and smaller productions.
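To make the "adding things in between the generation process" point concrete, here's a toy sketch (all names made up, not a real diffusion model or any actual library API): an open pipeline can expose a hook between denoising steps, which is roughly where ControlNet-style conditioning plugs in, while a closed API gives you no such seam.

```python
# Toy illustration of a hackable generation loop (hypothetical names).
# A hook runs between denoising steps, analogous to injecting
# ControlNet-style guidance mid-generation.

def denoise(latent, steps, between_steps=None):
    """Fake denoising loop; calls an optional hook after each step."""
    for step in range(steps):
        latent = latent * 0.5  # stand-in for one denoising update
        if between_steps is not None:
            latent = between_steps(latent, step)
    return latent

def pull_toward_one(latent, step):
    """A 'control' hook nudging the latent toward a target value."""
    return latent + 0.1 * (1.0 - latent)

plain = denoise(8.0, steps=4)                                # no hook
guided = denoise(8.0, steps=4, between_steps=pull_toward_one)  # guided
```

With a hosted black-box API you only get prompt in, video out; the hook point simply isn't reachable, which is the whole argument for open weights here.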



I don't think this is just about convenience - you're not going to get these results with a 14B video model. I'd much prefer to have something I could hack on in ComfyUI, but the open-weights models don't compete with this any more than a 32B LLM competes with Gemini 2.5 Pro for coding. And at least in coding you can easily edit the output from the LLM regardless...


> you're not going to get these results with a 14B video model

Foundation models are starting to outstrip any consumer hardware we have.

If Nvidia wants to stay ahead of Google's data center TPUs for running all of these advanced workloads, they should make edge GPU compute a priority.

There's a future where everything is a thin client to Google's data centers. Nvidia should do everything in its power to prevent that from happening.


Your post strangely sounds like Nvidia primarily makes graphics cards for consumers.

Last time I checked, they couldn't produce enough H100s/GB100s to satisfy demand from everyone and their mother running a data center. And their most recent consumer hardware offerings have been repeatedly called a "paper launch" - probably because consumer hardware isn't a priority, given the price (and profit) delta.


I read their comment as meaning that Nvidia should prioritise a specific kind of consumer/prosumer hardware.

Nobody is running H100s at home, nor are most video companies running them. So the choice for them is to "rent" them from Google, or... invest a lot in almost-impossible-to-obtain Nvidia hardware? One has a lower initial cost, and is available now.


Thanks for the (possible) clarification.

But as long as Google isn't their _only_ customer, why would Nvidia care?


>There's a future where everything is a thin client to Google's data centers. Nvidia should do everything in its power to prevent that from happening.

There always has been; the mainframe concept is not new, but it goes in and out of fashion.

>>>> mainframe

<<<< personal pc

>>>> web pages/social media

<<<< personal phones/edge

>>>> cloud ai

<<<< ???? personal robotics, chips and ai ???

>>>> ???? rented swarms ???


Control net etc can be served via API; the intrinsic advantage of open-source is the ability to train and run inference privately.


Someone out there might care about nudity, but unfortunately, nobody that matters.



