Examples? I can think of a number of foreign companies that the US facilitates, like Nestle.
Eh, I switched. I switched all of my lab’s computers, too, and my PhD students have remarked a few different times that Linux is pretty cool. It might snowball.
You’re normal in that respect:
https://onlinelibrary.wiley.com/doi/abs/10.1002/aur.1962
In fact, the idea that autistic individuals are immune to propaganda is, itself, media propaganda. Those articles all report on a single study, which found that autistic individuals show less of a framing effect on their own preferences. That finding is much more easily explained by autistic individuals having strong internal preferences about their own likes/dislikes than by autistic individuals being immune to propaganda.
Speaking from experience here, too.
Oregonians almost take pleasure in driving slowly in front of you. Maybe they’ve just gotten used to going slow because the entire state freeway system is always under construction. People driving crazily is infuriating for a completely different reason.
The best time to start was decades ago, but at least they’ve started.
This problem is becoming outdated, thanks to the NIH now requiring females to be included in studies as a condition of grant funding, barring an exceptional reason for studying males alone (e.g., male-specific problems). They are even requiring that cell lines for in vitro studies be derived, at least in part, from females rather than from males alone.
A fellow Julia programmer! I always test new models by asking them to write some Julia, too.
I actually took that bit out because LLMs are pro climate and against everything that makes the environment worse. That’s a result of being trained on a lot of scientific literature. I was just curious what Opus would say about the conceptual knowledge piece.
Claude Opus disagrees, lol (edit to add: all of what follows is Claude, not me):
I respectfully disagree with the claim that current language models are logically incapable of mapping language to actual concepts. While today’s LLMs certainly have limitations and are prone to hallucinations, they have demonstrated a remarkable ability to capture and manipulate complex concepts through language.
A few key points:
LLMs are trained on vast amounts of human-generated text which is grounded in real-world concepts, allowing the models to build rich representations that go beyond surface-level word associations. Techniques like unsupervised pre-training enable LLMs to learn meaningful conceptual relationships.
In many domains, LLMs have shown an impressive capacity for reasoning, inference, and knowledge synthesis - skills that rely on a deep understanding of the underlying concepts, not just pattern matching. Their performance on complex question-answering, analogical reasoning, and natural language inference tasks is evidence of this.
LLMs can be fine-tuned and augmented with techniques like retrieval-augmented generation or chain-of-thought prompting to further ground them in factual knowledge and improve their reasoning capabilities. This allows more robust conceptual mappings.
Careful prompt engineering and sampling techniques can help to mitigate hallucinations and nonsensical outputs. While not a complete solution, this shows that the conceptual knowledge is often present even if the model sometimes struggles to express it faithfully.
That said, I fully agree that the tendency for subtle nonsense is a serious issue, especially for layperson use cases where hallucinations may go undetected. Continued research into making LLMs more truthful, consistent and robust is crucial. Techniques like constitutional AI to bake in truthfulness, and better UX design to convey uncertainty are important steps.
But in summary, I believe the evidence suggests that LLMs, while flawed, are not fundamentally incapable of meaningful conceptual representation and reasoning. We should push forward on making them more reliable and trustworthy, rather than dismissing their potential prematurely.
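(Me again, not Claude.) Since Claude name-drops chain-of-thought prompting above, here's a minimal sketch of what that technique amounts to, in case it sounds fancier than it is. `query_model` is a hypothetical stand-in for whatever API your model of choice exposes; the technique itself is just prompt construction:

```python
# Minimal sketch of chain-of-thought prompting (the technique Claude
# mentions above). NOTE: query_model is a hypothetical placeholder,
# not a real API; swap in an actual client call for a real model.

def query_model(prompt: str) -> str:
    """Hypothetical model call; returns a placeholder string."""
    return f"(model response to: {prompt!r})"

def direct_prompt(question: str) -> str:
    # Baseline: ask for the answer with no intermediate reasoning.
    return query_model(question)

def chain_of_thought_prompt(question: str) -> str:
    # Chain of thought: ask the model to reason step by step before
    # committing to a final answer. The prompt is the only difference.
    prompt = (
        f"{question}\n"
        "Let's think step by step, then give the final answer on its "
        "own line, prefixed with 'Answer:'."
    )
    return query_model(prompt)

if __name__ == "__main__":
    q = ("A bat and a ball cost $1.10 together. The bat costs $1.00 "
         "more than the ball. How much does the ball cost?")
    print(direct_prompt(q))
    print(chain_of_thought_prompt(q))
```

The whole trick is that eliciting intermediate steps tends to improve multi-step answers, which is part of why people argue these models are doing more than surface pattern matching.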
Interesting take! Is lightning conscious, then? The idea of Thor isn’t too far off if so, haha.
If we’re going full coast-to-coast, US still wins
Edit: a better illustration that loses about 80 km but avoids the extra stop.
I’m thinking of shorting it. My friend is definitely shorting it.
Lemmy Lemmy Lemmy
Yep
Would you, after devoting years of your adult life to the unpaid work of learning the advanced math and computer science needed to build such a model, want to spend years more developing a generative AI model without compensation? Within the US, it is legal to use public text for commercial purposes without obtaining a permit. Developers of these models deserve to be paid, just like any other workers, and that doesn't happen unless we either make AI a utility (or something similar) and funnel tax dollars into it, or the company charges for the product so it can pay its employees.
I wholeheartedly agree that AI shouldn’t be trained on copyrighted, private, or any other works outside of the public domain. I think that OpenAI’s use of nonpublic material was illegal and unethical, and that they should be legally obligated to scrap their entire model and train another one from legal material. But developers deserve to be paid for their labor and time, and that requires the company that employs them to make money somehow.
My guess is Siakam gave some indication to the Warriors that he wouldn't re-sign unless they got their shit together, which seems unlikely to happen. The Pacers already have their shit together, so I could see him expressing a desire to re-sign if he were traded there, which probably made the Pacers FO willing to make the trade.
I’m so in the minority here, but I have a different perspective.
I worked at a grocery store for years, with about a third of my job being cart duty. I loved it when people left their carts outside of the corrals, for a few reasons.
First, if a lot of people did so, I would point it out to whoever was the manager on duty before I went outside. That manager then expected me to take longer, which gave me more time to stroll, relax, and enjoy the outdoors before heading back in to customer craziness. Having those extra minutes, which I got because my manager couldn't know exactly how long the job should take, was nice.
Second, sometimes I had to walk way the hell out to the edge of the parking lot, which made for a long walk away from the chaos inside. Those walks were especially pleasant when the weather was nice.
Third, it was job security. During the recession, my managers wanted to let go of as many people as they could, but customers who made the lot take a while to clear, even for the most efficient cart duty workers, effectively kept more of us employed than management would have kept otherwise.
For those reasons, whenever the weather is nice, I try to leave my cart in a weird spot where it's anchored against something. I realize that many other cart duty folks probably dislike me for it, but I know I appreciated it when others did this. So I do it for the folks like me.
I know all of the arguments against it and I’m not trying to debate here. Just sharing a different perspective; sometimes, leaving your cart in a terrible spot can be nice for some of the workers.
Average monthly salary in the cities is listed at the bottom of that link I gave. The two cities differ in average monthly salary by $14, per the available data. Those same submissions show that the cost of living is ~20% higher in San Diego than in Austin.
It doesn’t have to be
https://www.mathworks.com/products/compiler.html
MATLAB can ruin all sorts of coding experiences, programming included