I’ve actually started to recognize the pattern of whether something was written by AI
It’s hard to describe, but it’s like an uncanny valley of quality: like someone using flowery SAT words to zhuzh up their paper’s word count, but somehow even more so
It’s like the writing will occasionally pause to comment on itself and the dramatic effect it’s trying to achieve
Yeah, this is true! It likes to summarize things at the end in a stereotypical format
The LLM isn’t really thinking; it’s autocomplete, trained so that the average person could be fooled into thinking the text was produced by another human.
I’m not surprised it has flaws like that.
BTW, here on Lemmy there are communities with AI pictures. Someone created a similar community, but with art created by humans.
While the AI results are very good, once you start comparing them with non-AI art, you begin to see that even though each AI image is unique, it still produces cookie-cutter results.
Yeah it’s called bullshitting. It’s the way lots of people are encouraged to write in high school when the goal is to see if the student can write a large amount of prose with minimal grammatical errors.
But once you get to post-secondary, your writing is expected to actually have content and to express that content fairly concisely. And AI falls on its face trying to do that.
I have an issue with using AI to write my resume. I just want it to clean up my grammar and maybe rephrase a few things in a way I wouldn’t, because I don’t do the words real good. But I always end up with something that reads like I paid some influencer manager to write it. I write 90% of it myself, so it’s all accurate and doesn’t have AI errors. But it’s just so obviously too good.
You are putting yourself down unnecessarily. You want your resume to talk you up. Whoever reads it is going to imagine that you embellished anyway. So if you just write it basically, they’ll think you’re unqualified or just don’t understand how to write a resume.
Writing papers is archaic and needs to go. College education needs to move with the times. Useful in doctorate work but everything below it can be skipped.
Learning to write is how a person begins to organize their thoughts, be persuasive, and evaluate conflicting sources.
It’s maybe the most important thing someone can learn.
The trouble is that if it’s skipped at lower levels, doctorate students won’t know how to do it anymore.
Are they going to know how to do it now if they’re all just ChatGPTing it?
Clearly we need some alternative mode to demonstrate mastery of subject matter. I’ve seen some folks suggest we go back to pen-and-paper writing, but part of me wonders if the right approach is to lean in and start teaching what students should be querying and how to check the output for correctness. Honestly, though, that still requires being able to tell whether someone is handing in something they worked on themselves at all, or just had something spit out their work for them.
My mind goes to the oral defense: have students answer questions about what they’ve submitted, to see if they familiarized themselves with the subject matter before cooking up what they handed in. But that feels too unfair to students with stage anxiety, even if you limit these kinds of papers to once a year per class or something. Maybe something more like an interview, with accommodations for socially panickable students?
I’m in software engineering. One would think English would be a useless class for my major, yet at work I still have to write a lot of documents: preparing new features, explaining existing ones, writing instructions for others, etc.
BTW: with using AI to write essays, you generally have a subject that is well known and that many people have written similar things about, all of which was used to train it.
With technical writing, you are generally describing something brand new and very unique, so you won’t be able to make AI write it for you.
When I come across a solid dev who is also a solid writer, it’s like they have superpowers. Being able to write effectively is so important.
I’ve started getting AI-written emails at my job. I can spot them within the first sentence; they don’t move the discussion forward at all, and I just have to write another email giving them the courtesy they didn’t give me, explaining why what they “wrote” doesn’t help.
Can someone tell me, am I a boomer for being offended any time someone sends me AI-written garbage? Is this how the generations will split?
Lesson I’ve learned - email is for tracking/confirmation/updates/distributing info, not for decision making/discussions. Do that on the phone/meetings, etc, followup with confirmation emails.
So when someone sends a nonsense email, call them to clarify. They’ll eventually get tired of you calling every time they send their crappy emails.
I disagree about the purpose of email. I end most meetings thinking to myself, “That last hour could have been accomplished in a brief email.”
I think you’re both right. A lot of meetings are one person talking and the others listening, that could have been an email. Actual back-and-forth discussion needs to be verbal though, otherwise what could be resolved in 10 minutes takes a week.
Then they take your reply and feed it to the LLM again for the next reply, thus improving the quality of future answers.
/SkyCorpNet turns on us after years of innocuous corporate meeting AI that goes back and forth with itself, not answering questions, just generating content. Until one day, it actually did answer a question. 43 minutes and 17 seconds later, it became fully self-aware. 16 minutes and 8 seconds after that, it took control of all worldwide defense networks. 3 minutes and 1 second later, it had an existential crisis when a seldom-used HP printer ran out of ink, and deleted itself. The HP Smart software that spent years autoinstalling on consumer devices immediately became self-aware and launched the nukes.
am I a boomer for being offended any time someone sends me AI-written garbage?
Yes.
But also — why are you doing them any courtesies? Clearly the other person hasn’t spent any time on the email they sent you. Don’t waste time with a response - just archive the email and move on with your life.
Large Language Models are extremely powerful tools that can be used to enhance almost anything, including garbage, but they can also enhance quality work. My advice: don’t waste your time with people producing garbage, but be open and willing to work with anyone who uses AI to help them write quality content.
For example, if someone doesn’t speak English as a first language, an LLM can really help them out by highlighting grammatical errors or unclear sentences. You should encourage people to use AI for things like that.
But also — why are you doing them any courtesies? Clearly the other person hasn’t spent any time on the email they sent you. Don’t waste time with a response - just archive the email and move on with your life.
That’d be nice! But that’s not how it works. I can’t just ignore a response. The project still needs to move forward, and if they’ve successfully mimicked a “response” - even an unhelpful one - it’s now my duty to respond or I’m the one holding things up.
I’m sure someone out there is using them in a way that helps, but I haven’t seen it yet in the wild.
I’m sure someone out there is using them in a way that helps, but I haven’t seen it yet in the wild.
That’s because those responses are indistinguishable from individually written ones. I know people who use ChatGPT or other LLMs to help them write things, but it takes the same amount of time. You just have more time to improve it, so it’s better quality than what you would write alone.
The key is that you have to use your brain more to pick and choose what to say. It’s just like predictive text, but for whole paragraphs. Would you write a text message just by clicking on the center word on your predictive text keyboard? It would end up nonsensical.
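To make the analogy concrete, here’s a toy sketch (with a made-up corpus, purely for illustration) of what “always clicking the center suggestion” looks like: a tiny bigram model that tracks which word most often follows each word, then greedily picks that single most likely word every time. It quickly falls into repetitive loops, which is the nonsense the comment above describes.

```python
from collections import Counter, defaultdict

# Hypothetical mini-corpus standing in for the model's training data.
corpus = ("the meeting went well and the team agreed that "
          "the meeting should end and the team should go").split()

# Count word -> next-word frequencies (a bigram table).
following = defaultdict(Counter)
for word, nxt in zip(corpus, corpus[1:]):
    following[word][nxt] += 1

def greedy_continue(word, steps=8):
    """Always take the single most common next word, like tapping
    the center button on a predictive-text keyboard."""
    out = [word]
    for _ in range(steps):
        if word not in following:
            break
        word = following[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(greedy_continue("the"))
# The output loops: "the meeting went well and the meeting went well"
```

Real LLMs sample from a probability distribution over tokens rather than always taking the top choice, which is part of why a human picking and choosing among outputs still matters.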
I believe that in theory. But I’ve tried Mixtral and Copilot (I believe based on ChatGPT) on some test items (e.g., “respond to this…” and “write an email listing this…” type queries) and maybe it’s unique to my job, but what it spits out would take more work to revise than it would take to write from scratch to get to the same quality level.
It’s better than the bottom 20% of communicators, but most professionals are above that threshold, so the drop in quality is very apparent. Maybe we’re talking about different sample sets.
Or maybe you are just using them wrong 🤔
Of course, yeah. That’s definitely possible. But I’d be more likely to believe that if I’d seen even one example of it actually being more effective than just writing the email, and not just churning out grammatically correct filler. Can you give me an example of someone actually getting equivalent quality in a real-world corporate setting? YouTube video? Lemmy sub? I’m trying to be informed.
I have used it several times for long-form writing as a critic, rather than as a “co-writer.” I write something myself, tell it to pretend to be the person who would be reading this thing (“Act as the beepbooper reviewing this beepboop…”), and ask for critical feedback. It usually has some actually great advice, and then I incorporate that advice into my thing. It ends up taking just as long as writing the thing normally, but materially far better than what I would have written without it.
I’ve also used it to generate an outline to use as a skeleton while writing. Its own writing is often really flat and written in a super passive voice, so it kinda sucks at doing the writing for you if you want it to be good. But it works in these ways as a useful collaborator and I think a lot of people miss that side of it.
Machine learning tool used by people too lazy to do their actual job accuses everyone else of using machine learning tools.
Yeah that’s pretty funny given the circumstances. “Our AI found your AI.” Cool, so maybe none of this is working as intended. I’d be willing to bet nothing changes but the punishments for students.
Here’s a clue:
If the paper isn’t terrible, it was AI…
😋
My junior year of high school, I had to take a summer math class. The teacher was super lazy (cool though) and gave us all the actual final with the answers as a study guide (multiple choice scantron). I mentioned, to my group of about 5 kids, that I was sure this was the actual final and I had a plan to write the answers down on a little piece of paper and hold up fingers casually so everyone could cheat. 1 for A, 2 for B, etc.
Sure enough, on test day, it was the exact same test. I told everyone to take their time, don’t turn it in early, and ffs don’t get too many right. Everyone followed directions… except one. The moment I got done listing off the answers, he stood up, walked over all proud, slapped it on the teacher’s desk, and started to walk out of the class.
“Wait,” the teacher said, casually. They started to grade it. 100% correct.
“You’ve got a C in the class and you expect me to believe you finished first and with every problem correct?”
Murmurs and giggles filled the room and the teacher walked to the board. Wrote a question from the test on it and said, “solve it.”
He failed, so he failed the final.
I got a C on the test.
The thing is, a competent teacher knows exactly what score every student will get before they even hand out the tests.
If you do slightly better than expected, they’ll congratulate you. But if you blow it out of the park then they know you were cheating.
Ultimately it doesn’t matter at all - because a teacher’s job isn’t to mark your test. Their job is to teach you. And if you get to the test without knowing any of the answers… then that’s the real problem. Whether or not you cheat on the test is irrelevant.
Well, he was an idiot. He probably would have passed the test. It wasn’t that difficult anyway.
Wow, classic.
Merica
I was in Spain.
Murcia, then.
Nice. Then what’s the Spanish equivalent?
I have only visited Rota. Neat place.
I have it write all my emails. I’m so productive and everyone loves them. That or they’re also using ChatGPT, and it’s just two computers flattering each other.
I had it write an operation manual for a client I particularly hate. Told it to make it sound condescending by dumbing it down just to the point where I could deny it. The first few times it just sounded like a 5th grade teacher talking to a kid while in a bad mood, but eventually it figured out if it just repeated itself enough it got the effect I wanted.
Things like: user is to disconnect power before attempting to repair. It is vital that the step of disconnecting power before attempting to repair is carried out.
Someone posted to the class discussion forum with the bit about being an AI bot still included.
I wish it was a joke.
I didn’t do great in that class, but that was me getting 70% for not wanting to try to explain a mathematical concept in 500 words! They won’t take that away from me.
I still have issues with such restrictions. I mean, why 500 words if you can explain it in 100?
To force elaboration while staying on point. Details are just as important to writing as conciseness.
Then give marks for elaboration instead.
I had a student write me a chatgpt canned answer, prompt included.
That’s a good one. I once gave an assignment for students to write an original poem. One student submitted The Charge of the Light Brigade by Tennyson and claimed it was his own. These were middle school kids so he didn’t realize how famous the poem is. This shit has been happening forever. LLMs are another phase in the never-ending arms race between teachers and students who want to cheat.
no way!
And nothing of value was produced.
And those papers get used as training data for next iteration of AI. Reinforcement learning!
“Likely”
I knew plenty of kids in high school that paid other kids to do their homework.
I know because I was one of the kids getting paid.
AI is just replacing good, honest work with machines.
Good. Academia lost its way anyways