> Using a Mistral-7B FT trained on GPT-4 outputs allowed us to parse through hundreds of thousands of Reddit threads in a simple way with just hundreds of dollars of compute.
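What the quote describes is a distillation workflow: collect GPT-4 completions for a sample of threads, fine-tune Mistral-7B on those pairs, then run bulk inference on the small model for a fraction of the cost. A minimal sketch of the data-prep step, assuming the (thread, GPT-4 label) pairs are already collected; the field names, prompt wording, and file path here are illustrative, not from the original post:

```python
import json

def build_finetune_dataset(pairs, out_path):
    """Convert (thread_text, gpt4_label) pairs into a JSONL file in the
    chat-messages fine-tuning format common open-model trainers accept."""
    with open(out_path, "w", encoding="utf-8") as f:
        for thread_text, gpt4_label in pairs:
            record = {
                "messages": [
                    # The same prompt template used when querying GPT-4
                    {"role": "user",
                     "content": f"Classify this Reddit thread:\n{thread_text}"},
                    # GPT-4's output becomes the training target for Mistral-7B
                    {"role": "assistant", "content": gpt4_label},
                ]
            }
            f.write(json.dumps(record, ensure_ascii=False) + "\n")

# Hypothetical labeled examples, just to show the shape of the output file
pairs = [
    ("TIL about mechanical keyboards...", "topic: hardware"),
    ("Which budget GPU for local LLMs?", "topic: hardware"),
]
build_finetune_dataset(pairs, "distill_train.jsonl")
```

The point is that the expensive model is only called once per training example; after fine-tuning, the hundreds of thousands of remaining threads are processed by the cheap 7B model.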
Great idea. Clever approaches like this are what it takes to build products that benefit from scale. When the cost of inference goes down, it enables new experiences. And finding ways to cut cost before the big providers do is a massive competitive advantage that makes it tough for those who wait to compete with you.
The missing part of the story is when we made an early prototype using GPT-4, left it running overnight, and realized we'd spent several thousand dollars of OpenAI credits...
Anyone building AI products should take note.