My bad, it's Microsoft that keeps changing their recommendations; I had it in my head that it was bad for some reason.
ICANN can pry “.local” from my cold dead hands!
They mention this other article as a source at the bottom; it has pictures. They're just high-surface-area objects made of reactive materials, and 3D printing is an easy way to make them.
The way I understand it, the users didn't necessarily realize McAfee was responsible, just that a bunch of SQLite files appeared in temp, so they might not connect the dots here anyway. Or even know McAfee is installed, considering their shady practices.
Personally, my threshold for intelligence versus consciousness is determinism (not in the physics sense… that's a whole other kettle of fish). I'd consider all “thinking things” to be machines, but if a machine always responds to input in the same way, then it is non-sentient, whereas if it incurs an irreversible change on receiving any input, one that can affect its future responses, then it has the potential for sentience. LLMs can certainly do continuous learning, which may give the impression of sentience (whispers which we are longing to find and want to believe, as you say), but the actual machine you interact with is frozen, hence it is purely an artifact of sentience. I consider books and other works to be in the same category.
I'm still working on this definition; again, it's just a personal viewpoint. A toy sketch of the distinction I'm drawing is below.
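To make it concrete, here's a purely illustrative Python sketch (made-up class names, with hash() standing in for inference; not any real architecture): a frozen model maps input to output with no lasting change, while a continually-learning one mutates its own state on every input it receives.

```python
class FrozenModel:
    """Responds to the same input the same way forever (an 'artifact')."""
    def __init__(self, weights):
        self.weights = weights  # fixed at deployment, never touched again

    def respond(self, x):
        return hash((self.weights, x))  # stand-in for inference


class ContinualModel:
    """Every input leaves an irreversible trace that shapes future responses."""
    def __init__(self, weights):
        self.weights = weights

    def respond(self, x):
        out = hash((self.weights, x))
        self.weights = hash((self.weights, x, out))  # state changes on *every* input
        return out


frozen, continual = FrozenModel(42), ContinualModel(42)
print(frozen.respond("hi"), frozen.respond("hi"))        # identical, forever
print(continual.respond("hi"), continual.respond("hi"))  # diverges after the first call
```

Under my definition, only the second one has the potential for sentience, because each input irreversibly changes how it will respond later.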
I feel like it's difficult to quantify for jobs where you're being paid to think. Even when I'm goofing off, the problem I need to solve for the day is still lingering in the back of my head somewhere. Actively squinting at it doesn't seem to make things go any faster, and when I do return to work it's usually to mash out reams of code after letting it stew. But yes, the actual amount of time I'm fulfilling my job description is… less than my working hours.
Not an answer to the question, but in case performance is the goal, Torchaudio has it here.
Ah, even then it could just be a consequence of training samples usually being chronological (most often the expected resolution for conflicting instructions is “whatever you heard last”, with some exceptions when explicitly stated), so it learns to think that way. I found the pattern also applies to GPT trained on long articles, where you'd expect it not to, so I wanted to explain why that might be.
Or, I should explain better: most training samples will be cut off at the top, so the network sort of learns to ignore the beginning a bit.
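Roughly what I mean by “cut off at the top”, as a hypothetical preprocessing sketch (the context length and the left-truncation rule are assumptions for illustration, not any specific pipeline):

```python
MAX_LEN = 2048  # assumed context length in tokens

def make_training_sample(token_ids: list[int]) -> list[int]:
    # Left-truncate: when a document exceeds the context window, keep the
    # end and drop the beginning, so the earliest tokens are the ones most
    # often missing from training samples.
    return token_ids[-MAX_LEN:]
```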
Yes, that's by design: the network works on a transcript per input, and the transcript does genuinely get cut off eventually. Usually an entire older line is purged when the token count exceeds the limit.
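A minimal sketch of that purge behaviour (the token budget and count_tokens() are placeholders I made up, not any product's actual code):

```python
TOKEN_LIMIT = 4096  # assumed context budget

def count_tokens(line: str) -> int:
    return len(line.split())  # crude stand-in for a real tokenizer

def trim_transcript(lines: list[str]) -> list[str]:
    lines = list(lines)
    # Drop whole lines from the oldest end until the transcript fits.
    while sum(count_tokens(l) for l in lines) > TOKEN_LIMIT and len(lines) > 1:
        lines.pop(0)  # purge the entire oldest line
    return lines
```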
It would be luck-based for pure LLMs, but now I wonder if the models that can use Python notebooks might be able to write a script to count it. Like, it's actually possible for an AI to get this answer consistently correct these days.
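For instance, on the classic letter-counting question, a tool-using model could emit a one-liner like this instead of guessing with next-token prediction (the word is just an example):

```python
word = "strawberry"
print(word.count("r"))  # -> 3, computed exactly rather than predicted
```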