A far more likely scenario is that they have been overstating what the software can do and how much room for progress remains with current methods.
AI has blown up so fast, with so much hype, that I'm very skeptical. I've seen what it can do, and it's impressive compared to past machine learning algorithms. But it does play on the human tendency to anthropomorphize things.
I've not been super stoked on AI, specifically because of my track record using it. Maybe it's my use case (primarily technical/programming/CLI questions that I haven't been able to answer myself), or maybe my prompts aren't suited for AI assistance, but my dozens of interactions with the various AI bots (Bard, Bing, GPT-3/3.5) have been disappointing, to say the least. I've never gotten a correct answer, rarely been given correct syntax, and it frequently just repeats answers I've already told it are incorrect and/or just don't work.
AI has been nothing more than a disappointment to me.
Adjust or create your custom instructions. This is almost the only thing I use AI for, and it's been life-changing. GPT-4 is better, and I always log off when it makes me use GPT-3.5.
ELI5? I'm assuming this has something to do with the prompt?
Yeah, in the settings menu there's an option for Custom Instructions. These are referenced by ChatGPT every time it generates a response. I've got mine set up to help with DevOps/Linux/Python questions. At work I mostly just want RHEL/Ubuntu-specific answers, and it knows I prefer vim over nano, etc.
You can ask ChatGPT to generate the instructions for you too.
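For anyone wanting a concrete starting point, here's a rough sketch of what the two Custom Instructions fields could look like for a setup like the one described above (the exact wording here is made up, not an official template):

```
What would you like ChatGPT to know about you?
- DevOps engineer working mostly on RHEL and Ubuntu servers
- Daily tools: Linux CLI, Python, vim (not nano)

How would you like ChatGPT to respond?
- Give RHEL/Ubuntu-specific commands and syntax
- Keep answers concise, with working shell/Python examples
- Use vim in any editor instructions
```

Since ChatGPT takes these into account on every response, you don't have to repeat your preferences in each prompt.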
From what I understand, he was fired by the company's non-profit board, and it's the investors and money people who want him back. It sounds like the opposite: the people making it are becoming concerned about what is about to start happening with this tech.
Experts from different companies have been saying AGI within a decade, and that all the current issues seem solvable.
AGI has been five to ten years away for decades.