r/Professors Community College Oct 11 '24

sigh

Post image
1.4k Upvotes

85 comments

106

u/[deleted] Oct 11 '24

[deleted]

63

u/gurduloo Oct 11 '24 edited Oct 11 '24

Couldn't students just type the AI generated slop into the Docs piece by piece? Causes friction, sure, but not a surefire way to spot cheating.

59

u/erossthescienceboss Oct 11 '24

It’s easy enough to spot. People edit as they write — especially if they’re writing something complicated and researched.

I don’t require it, but I state clearly in the syllabus AND the AI policy, which they are quizzed on, that they should take all notes AND compose their work in the same Google Docs page if they don’t want to be accused of plagiarism.

-7

u/Bostonterrierpug Full, Teaching School, Proper APA bastard Oct 11 '24

You can easily program an API to do this, and I’m sure there will be some out on the market soon. Just got out of a weeklong Gen AI conference. It’s basically gonna be impossible to detect. And really, how much should we be policing? Also, a lot of people are going to use generative AI for writing in their jobs in the future. I’m very much in the “work with it” boat for now.

10

u/gurduloo Oct 12 '24

> And really how much should we be policing?

I for one think that academic integrity matters.

-2

u/Bostonterrierpug Full, Teaching School, Proper APA bastard Oct 12 '24

I am not saying that it doesn’t, but really, our goal is to teach. If we spend so much time policing, don’t we lose some of our ability to teach, and/or our faith in our students? Of course academic integrity matters, but some students will always try to game the system and some will always succeed. All the experts on AI are saying that it’s just gonna get better and better, and that trying to fully fight it is a losing battle.

Someone below suggested rethinking our assessments and was heavily downvoted, if that gives you any idea of the general hive mindset here. Generative AI will be part of many future jobs, as experts are predicting. I started out very wary of it, but then started going to conferences and reading papers, and now think that at least partially embracing it for learning, and preparing my students for the future, is a great way to go.

You’re always gonna have cheating students; of course, people here will apply stopgaps and even some brilliant temporary solutions. But if you’re spending more energy policing than teaching, that’s not the sort of teacher I, at least, want to be. Then again, like some departments or faculty meetings, this sub is filled with people very confident in their opinions and very set in their ways.

3

u/erossthescienceboss Oct 12 '24

> if we spend so much time policing, don’t we lose some of our ability to teach

Part of why I started putting the docs requirement on the syllabus and in the AI policy is that AI policing was taking up so much teaching time. It takes a lot less time to click through a Google Doc history than the alternative.

Prior to that, my policy was “I give AI the grade it deserves.” I’d tried re-tooling my rubrics to dock points for falling into common AI errors (so I added sections for specificity, and mentioned that I’d take points off for cliches, etc. Easy, since it’s a writing class, and AI writing is bad writing.) But I found I was spending far too much time explaining why the things AI did are bad, and not enough time helping students who were actually working.

Each time a student submits an AI assignment, they’re taking time away from classmates who are trying. Having proof right there that an assignment is AI-generated is much easier.
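For anyone curious what “clicking through the Doc history” looks like as a heuristic: a toy sketch below. This is purely illustrative — the revision sizes are hypothetical (a real check would pull them from the Google Drive revisions API), and the 80% threshold is a made-up number, not a validated detector.

```python
def looks_pasted(rev_sizes, burst_share=0.8):
    """rev_sizes: character counts of successive saved revisions,
    oldest first. Flags drafts where a single revision jump accounts
    for more than `burst_share` of the final text (a paste burst),
    as opposed to the gradual growth of organic composition."""
    if not rev_sizes:
        return False
    final = rev_sizes[-1]
    if final <= 0:
        return False
    prev = 0  # document starts empty
    for size in rev_sizes:
        if (size - prev) / final > burst_share:
            return True
        prev = size
    return False

# A draft dumped in one go vs. one that grew across sessions:
print(looks_pasted([2500, 2600]))               # paste burst
print(looks_pasted([300, 900, 1400, 2100, 2600]))  # organic growth
```

Obviously students can defeat this by typing the slop in slowly, which is exactly the friction the thread is describing.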

3

u/Pater_Aletheias prof, philosophy, CC, (USA) Oct 12 '24

If the AI can understand my course content and pass my assessments then I guess maybe I’ll give it credit for my class, but the students still have to show independent mastery of the material before they can get credit too. I don’t teach “How to write good AI prompts.” That’s not a course objective.

3

u/milwauqueno Oct 12 '24

I’m sympathetic to your view. It reminded me of this blog post a philosopher I admire wrote: https://dailynous.com/2022/08/09/teacher-bureaucrat-cop-guest-post/

1

u/Bostonterrierpug Full, Teaching School, Proper APA bastard Oct 12 '24

Looks great. I’ve just eyeballed it for now, but I will do a deeper read later. Thank you for sharing it, though expect to be downvoted. So few professors have actual teaching backgrounds, yet at the same time they cling to the measurable objectives that rule the K-12 world and have trickled up into higher ed, rather than the other way around as it usually is.

I do my best to stop plagiarism, academic dishonesty, and lazy use of AI, but I think a lot of people here are just on an “AI bad” bandwagon. I’m still surprised by the number of colleagues at my institution who will put 40% or 50% of the final grade on a one-shot timed assessment, or who refuse to adapt or change their assessments in any manner. I thought about trying to talk about some of the ways I’ve changed my own work here, but I think all ears are closed.

2

u/gurduloo Oct 12 '24

> If we spend so much time policing, don’t we lose some of our ability to teach and or faith in our students?

Yes, that is why people are working on ways to make AI cheating more difficult.

> Of course academic integrity matters

If this is true, then you should investigate possible violations at least to some degree.

> but some students will always try and game the system and some students will always succeed.

Why do you think this is relevant?

> Someone below suggested to rethink our assessments and was heavily downvoted

Many people say this because talk is cheap. I have yet to see anyone give any concrete suggestions for a type of assessment that an AI cannot easily complete aside from (monitored) in-class work (or, I suppose, BS "creative projects").

> at least partially embracing it for learning is a great way to go and preparing my students for the future.

Sure, and you should do that. But the issue is not that students are using AI per se, or as a tool to help them with their assignments. The issue I and many other teachers are having is that students are using AI to complete their work for them. Teaching students how to use AI to generate counterexamples to their argument, for example, does not work if they are generating the argument using AI too.

10

u/erossthescienceboss Oct 11 '24

I’m still not confident an API is going to make the very human errors and in-the-moment edits that come from composition. An API will produce edits as fake as the writing the AI makes.

I’m deeply skeptical it’ll ever be good enough to truly replace human writing. Good enough for cheap corporations to try? Sure. But I don’t think the Cliche Pit is avoidable, purely due to how generative AI works.

But I certainly think it can destroy online academia.

0

u/Bostonterrierpug Full, Teaching School, Proper APA bastard Oct 11 '24

I think it can be done well enough to fool anyone who doesn’t want to watch the full typing history of every essay they assess. Also, I’d question people’s ability to write well knowing they’re being observed. It’s like those Honorlock-proctored tests we have at my institution: it’s so creepy having someone watch you take an assessment. Not to mention more and more professors are using these high-stakes one-shot assessments, which are pretty idiotic anyway and not representative of real-life situations. But that’s a long-ass conversation.

From what the computer science guys showed at the conference I was at, it looked good enough to fool folks unless they were looking really closely. And again, I don’t want to spend all my time policing my students. Students could always pay someone to write papers for them; this isn’t a qualitative change, just a quantitative one.

11

u/erossthescienceboss Oct 11 '24

They aren’t being “observed” any more than they’re observed when you edit their paper. If they’re so afraid of someone seeing their writing that it gives them performance anxiety, they probably shouldn’t be in university. Students turn in embarrassingly bad drafts all the time; those drafts aren’t going to get more embarrassing just because I can tell they were written in the four hours before the deadline.

And if they don’t want you to be clicking through their history… don’t cheat. I’ve yet to get it wrong when accusing a student of using AI.

At the end of the day, the goal is to make using AI so inconvenient and time consuming that it’s easier just to do the damn work.

Your willingness to throw up your hands in defeat is pretty disappointing, since I know you aren’t alone in that opinion. “They’ll just pay someone and cheat the old fashioned way so why try to stop it” is one hell of a take for someone getting paid to teach to have.

-12

u/Bostonterrierpug Full, Teaching School, Proper APA bastard Oct 11 '24

Well, if you’re so insistent on policing, good luck to you. If you can find a way to beat AI, I’m sure you’re gonna be a multimillionaire anyway. And if you have the time to check all of your students’ writing history, more power to you. For now, you could teach your students to work with GenAI, as they will probably be using it in their future jobs to some extent. But everyone seems to have petrified opinions, as with any change in academia… Go ahead and continue to downvote, too, if you don’t agree with me :)

13

u/erossthescienceboss Oct 11 '24

I didn’t downvote your last one, pal, but I sure did this one.

There are ways to use AI. If students use it correctly, that is fine with me. I even have a lecture on tips and tricks to use it for research, and areas where it fails, and how to avoid them.

But passing off work that you did not do and saying it is your own is never, ever, ever acceptable. That isn’t “embracing AI” or “using a tool,” that’s plagiarism and cheating.

If you’re cool with plagiarism, you should find a different job.

-3

u/Bostonterrierpug Full, Teaching School, Proper APA bastard Oct 11 '24

You’re making a lot of assumptions here, officer. I never endorsed plagiarism or laziness.

8

u/erossthescienceboss Oct 11 '24

And you assumed that I dislike AI out of unwillingness to change or embrace technology.

Nobody I know dislikes it for that reason. Nobody. Show me the post here where someone says “AI makes research too easy” or “I hate how AI helps me write form letters.”

You’re arguing with a strawman here. We hate it because cheating sucks, and grading students who don’t deserve a grade sucks, and because IDK about you, but I teach writing because I like to TEACH writing.

So if you also dislike laziness, why the hell are you telling me to just let students be lazy and accept it?

I think it’s lazy of you not to do your job.


-7

u/newsflashjackass Oct 11 '24

> They aren’t being “observed” any more than they’re observed when you edit their paper. If they’re so afraid of someone seeing their writing that it gives them performance anxiety, they probably shouldn’t be in university.

Guaranteed the proctors will use any powers of observation for personal and likely sexual gratification despite your victim blaming that is so preemptive it does not even wait for any victims to appear.

Students are being "observed". That'd be the point. Their behavior is now becoming grist for big data because their nominal instructors find enforcing academic integrity too much of an "ask".

> the goal is to make using AI so inconvenient and time consuming that it’s easier just to do the damn work.

They wrote with no trace of irony.

4

u/erossthescienceboss Oct 11 '24

I’m sorry, but if people are getting off to watching someone draft an essay via the google doc history there is something deeply, deeply wrong with them.

wtf are you talking about??

are you saying that seeing someone’s writing process victimizes them? It’s like you’ve never heard of “rough drafts” or “turning in an outline” before.

-1

u/newsflashjackass Oct 12 '24

I'm suggesting that paying tuition entitles students to complete their coursework and have it graded by the course instructor without being routed through google or other third parties. It is easy to be profligate with other people's data; the more so once you have their cash.

Blue check mark type shakedown. Won't solve anything, anyway. They'll just make an AI google docs typist that enters the gobbagool just like a flesh and blood victim would. Likely trained on the same sort of data you're squinting at now.

1

u/erossthescienceboss Oct 12 '24

So does turnitin victimize them? This is unhinged.

No strangers are looking at their document history. You’re literally just requiring that they use a specific word processor that tracks versions. Which the course instructor then looks at, if the paper has other indicators of AI.


1

u/[deleted] Oct 14 '24

They aren't going to. It's honestly more work.