  • Understanding the variety of speech over a drive-thru speaker can be difficult even for a human with experience in the job. I can’t see the current level of voice recognition matching that, especially if an LLM is processing whatever the recognizer managed to pick up. If I’m placing a food order, I don’t need an LLM hallucination trying to fill in the blanks for audio it didn’t transcribe correctly or wasn’t trained on.
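
A rough sketch of the failure mode I mean (not any real drive-thru system; `transcribe()` and `order_llm()` are hypothetical stand-ins): if the recognizer returns low-confidence words, it’s safer to ask the customer to repeat than to let an LLM “clean up” the text and invent menu items.

```python
# Sketch, not a real product: prefer re-asking the customer over letting an
# LLM guess at words the speech recognizer wasn't sure about.

CONFIDENCE_FLOOR = 0.85  # arbitrary threshold for this example


def take_order(audio_chunk, transcribe, order_llm):
    """transcribe() -> list of (word, confidence); order_llm(text) -> parsed order."""
    words = transcribe(audio_chunk)

    unclear = [w for w, conf in words if conf < CONFIDENCE_FLOOR]
    if unclear:
        # Don't hand guesswork to the LLM -- it will fill the gaps with
        # plausible-sounding items the customer never asked for.
        return {"action": "repeat", "unclear": unclear}

    text = " ".join(w for w, _ in words)
    return {"action": "confirm", "order": order_llm(text)}
```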

  • I know you’re asking for a single place to go, and that isn’t going to happen until the “modern” data captured by Google and the like eventually becomes the old data itself. Sometimes you can dig into old archives and find pieces of things that were digitized; put enough of them together and you might get some answers. It’s difficult and very region-dependent, based on what was done over the decades. Just finding an online copy of old highway maps is a challenge, and I figured that would be easy. But if you can find some sources, it’s fascinating to overlay old and new and see just how much has (and hasn’t) changed. I’ve found old roads in my area that were cut up by newer roads and by lots of development, but they’re still there, just not connected in the same way.
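
If anyone wants to try the overlay part, a minimal sketch (assuming you have a scanned map and rough corner coordinates for it; the filename and coordinates below are placeholders) is to drape the scan over a modern web map with folium and lower the opacity until old and new line up:

```python
import folium

# Sketch: drape a scanned historical map over a modern base map.
# The image name and coordinates are placeholders; real bounds come from
# georeferencing the scan first (e.g. in QGIS).
m = folium.Map(location=[40.0, -83.0], zoom_start=12)

folium.raster_layers.ImageOverlay(
    image="old_highway_map.png",
    bounds=[[39.95, -83.10], [40.05, -82.90]],  # [[south, west], [north, east]]
    opacity=0.6,  # semi-transparent so modern roads show through
).add_to(m)

m.save("overlay.html")  # open in a browser and pan around to compare
```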

  • The narrow-purpose models seem to be the most successful, which supports the idea that general AI isn’t going to come from LLMs alone. It’s interesting that hallucinations are treated as a problem, yet they’re probably part of why LLMs can be creative (much like humans). We shouldn’t want to stop them entirely, just control when they happen and be aware of when the AI has gone off the rails. A group of different models working together and checking each other might work (and has probably already been tried; it’s hard to keep up).
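
The “checking each other” part can at least be sketched simply: one model answers, a second, independent model reviews, and nothing ships unless the reviewer agrees. The `generate()` and `judge()` callables here are hypothetical stand-ins, not any particular vendor’s API.

```python
# Sketch of cross-checking: one model answers, a different model reviews.
# generate(question) -> answer text; judge(question, answer) -> True if the
# answer looks supported. Both are hypothetical placeholders.

def checked_answer(question, generate, judge, max_retries=2):
    for attempt in range(max_retries + 1):
        answer = generate(question)
        if judge(question, answer):
            return {"answer": answer, "verified": True, "attempts": attempt + 1}
    # The reviewer never agreed -- surface that instead of shipping a guess.
    return {"answer": answer, "verified": False, "attempts": max_retries + 1}
```

How independent the reviewer really is matters; two copies of the same base model tend to share the same blind spots.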