Token 1.17: Deploying ML Model: Best practices feat. LLMs
- Companies will face the question of which models to use: closed cloud, open cloud, or open internal.
  - GPT: closed cloud.
  - Llama: open internal.
  - Perplexity/Mistral hosting: open cloud.
- My assumption is that open cloud will catch up, especially once providers pair it with vector embeddings. No provider offers both embeddings and inference as a service, which is interesting; see the sketch below for what that split looks like in practice.
- Thoughts on vector DBs (Reddit thread).
- Lightning AI seems cool: a REPL-style environment, but for AI. Curious how good the editor is.
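To make that gap concrete, here is a minimal sketch of the stitching a team has to do today: embeddings from one hosted service, retrieval from a separately hosted vector DB, and generation from a third inference provider. The endpoints, model names, and response fields are hypothetical placeholders, not any particular vendor's API.

```python
# Sketch of the split described above: embeddings, vector search, and inference
# each live behind a different hosted service. All URLs, models, and payload
# fields are hypothetical placeholders.
import requests

EMBEDDING_URL = "https://embeddings.example.com/v1/embed"       # hypothetical
VECTOR_DB_URL = "https://vectordb.example.com/v1/query"         # hypothetical
INFERENCE_URL = "https://inference.example.com/v1/completions"  # hypothetical


def embed(text: str) -> list[float]:
    """Call the (hypothetical) hosted embedding service."""
    resp = requests.post(EMBEDDING_URL, json={"model": "embed-small", "input": text})
    resp.raise_for_status()
    return resp.json()["embedding"]


def retrieve(vector: list[float], top_k: int = 3) -> list[str]:
    """Nearest-neighbour lookup against a separately hosted vector DB."""
    resp = requests.post(VECTOR_DB_URL, json={"vector": vector, "top_k": top_k})
    resp.raise_for_status()
    return [hit["text"] for hit in resp.json()["matches"]]


def answer(question: str) -> str:
    """Stitch the three services together: embed -> retrieve -> generate."""
    context = "\n".join(retrieve(embed(question)))
    prompt = f"Context:\n{context}\n\nQuestion: {question}"
    resp = requests.post(INFERENCE_URL, json={"model": "open-model-7b", "prompt": prompt})
    resp.raise_for_status()
    return resp.json()["text"]


if __name__ == "__main__":
    print(answer("Which deployment option fits a small team?"))
```

A provider that offered the first and third calls behind one API would collapse most of this glue code, which is the opportunity the note is pointing at.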
The Lazy Tyranny of the Wait Calculation - by Ethan Mollick
- Do you wait for the tech to become competent enough to achieve your vision, or hop on the train now?
- I think software will catch up quickly. Our ability to get to a solution faster means faster prototyping, pickier customers, and ultimately more junk. Product-market fit will always be the question, regardless of whether you wait. But some ideas may be worth trying a year from now rather than pouring work into something the next AI release will make obsolete.
- This blog itself wouldn’t be possible without Obsidian Mobile, free GitHub repos, the Working Copy app, Quarto, and iOS Shortcuts. So technologies need to converge to enable certain innovations (in this case, my peculiar system for blogging from my phone).
GPTs won’t make you rich - by Charlie Guo
- GPTs are now available, and so is the Teams pricing.
Why knowledge management is foundational to AI success - Stack Overflow
- Generic, old news. The general principle still applies: garbage in, garbage out.
_________________________
Bryan lives somewhere at the intersection of faith, fatherhood, and futurism and writes about tech, books, Christianity, gratitude, and whatever’s on his mind. If you liked reading, perhaps you’ll also like subscribing: