
> Using a Mistral-7B FT trained on GPT-4 outputs allowed us to parse through hundreds of thousands of Reddit threads in a simple way with just hundreds of dollars of compute.

Great idea. These sorts of clever approaches are needed to build products that benefit from scale. When the cost of inference goes down, it enables new experiences. And finding clever ways to reduce cost before the big providers do is a massive competitive advantage that makes it tough for those who wait to compete with you.

Anyone building AI products should take note.
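The quoted approach is essentially model distillation: run GPT-4 once as a "teacher" to label a sample of threads, then fine-tune a small open model on those labels. A minimal sketch of the dataset-building step, assuming chat-style JSONL fine-tuning records; the function name, prompts, and example labels here are illustrative, not from the original post:

```python
import json

def build_distillation_dataset(examples, system_prompt):
    """Convert (thread_text, teacher_output) pairs into chat-style
    fine-tuning records, one JSON object per line (JSONL)."""
    lines = []
    for thread_text, teacher_output in examples:
        record = {
            "messages": [
                {"role": "system", "content": system_prompt},
                {"role": "user", "content": thread_text},
                {"role": "assistant", "content": teacher_output},
            ]
        }
        lines.append(json.dumps(record))
    return "\n".join(lines)

# Hypothetical pairs: in practice the teacher outputs would come from a
# one-off GPT-4 labeling pass over a sample of Reddit threads.
pairs = [
    ("Thread: best budget headphones under $50?",
     '{"category": "audio", "sentiment": "neutral"}'),
    ("Thread: this blender died after two weeks",
     '{"category": "kitchen", "sentiment": "negative"}'),
]
jsonl = build_distillation_dataset(
    pairs, "Extract the product category and sentiment as JSON."
)
print(jsonl)
```

Once the small model is trained on such records, the expensive teacher is only needed for the initial labeling pass, so the bulk inference over hundreds of thousands of threads runs at the small model's cost.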



The missing part of the story is when we made an early prototype using GPT-4, left it on overnight, and realized that we'd spent several thousand dollars of OpenAI credits...


Aaah, I can imagine the panic I'd be in.

Yet, such pain is where innovation comes from :). Wishing you all the best! And I plan to try this out once it covers more product categories.



