I was falsely accused of using ChatGPT for my work

A reader writes:

I’ve recently taken on a new role that’s a professional downshift so that I can ultimately pivot toward more fulfilling work. It’s fun, varied work that I love, but it does mean that I am earning significantly less, and I have been taking on freelance copywriting jobs to help make up the difference. I have a strong reputation in my field and my clients have been universally pleased with my work.

However, despite personally writing every word of my most recent assignment, the final work was run through an AI detector and was determined to have been generated by ChatGPT. This stung — it was an accusation of dishonesty, discounted my years of skill, and feels like the first of what may become many such instances in the future.

I know that AI scanners are unreliable and have been widely discredited — hell, even OpenAI has pulled the plug on their own detector, citing a low rate of accuracy — but I still wonder how I can protect myself against this kind of thing happening with future projects. I worry that I’ll put in hours and hours of work, only for clients to lose trust in the integrity of my work and/or skip out on invoices, having been convinced by a faulty program that they’re getting ripped off.

Any suggestions for reassuring clients and proving my work is, in fact, human-generated?

That’s infuriating.

Anyone who’s using an AI detector needs to be aware that they’re notoriously inaccurate. You can run pieces of writing through them that were created decades ago, long before AI existed, and get told AI wrote them. One “detector” even claimed the U.S. Constitution was written by AI. And as you point out, OpenAI, the company that created ChatGPT, shut down its own AI detector because of low accuracy. They’re ridiculously problematic.

So, you could start by asking your client which AI detector they used and explaining these tools’ well-documented inaccuracies. (Here are some links you could use: 1, 2, 3) You could say firmly that as a professional writer whose reputation is your livelihood, you take allegations of using AI very seriously and you hope they’ll give you the opportunity to show how baseless the assertion is.

Then offer to show them your version history. Google Docs, Microsoft Word, and many other writing programs keep a version history that tracks every change you made and when you made it, which will make it clear that you wrote through a normal, messy, human process with revisions and that whole chunks of fully formed text weren’t simply pasted in.

If they don’t backtrack once you calmly educate them, is this even a client you want?

{ 250 comments… read them below }

  1. Keymaster of Gozer (she/her)*

    First rule of ANY computer system: Garbage In = Garbage Out. AI and AI-detecting systems have not solved this issue at all, and there’s a good case to be made that they’re making it worse. For the record, I’ve worked in IT for decades and do NOT trust computers. Love them, but don’t trust them.

    Someone willing to shove your work through a piece of (IMO dodgy) software and conclude that it’s fake is about as credible as a lawyer who looks at a person’s name and goes ‘eh, sounds dodgy, they probably are guilty’.

    1. AI skeptic*

      100%. As someone with a long career in IT, I agree with everything you’ve said with one addition.

      I don’t trust computers because they are designed by people who sometimes don’t know what they are doing. Every time I hear “our computer system does X” or “we didn’t convert that data from old system” I hear people blaming technology for decisions that people made about the system. Current AI is simply highlighting the flaws in our existing data and analysis.

      1. Putting the Dys in Dysfunction*

        As someone with a long career of being a user, I’ll add that all too often the people who design a system are not the people who will be using the system, and there is little or no testing by the latter. Then, when the inevitable poor functionality comes up, it’s too late to make the necessary changes.

        1. Mockingjay*

          Corollary: The people who buy the system are not the people who use the system, who in most cases won’t even talk to the end users before purchasing an overpriced Shiny New System.

          1. I Have RBF*

            THIS!!

            I have to work with a lot of web-based, clicky-clicky software apparently designed for drooling idiots or overly busy managers. Everything is GUI based, which means any upgrades have to be done one at a time, for hundreds of systems. It’s highly inefficient, and even though these systems are Linux under the GUI, the only way to interact with them and update them is by the slow, clumsy and annoying clicky-clicky web interface. You can’t script clicks, and upgrading often takes 10 clicks, and waiting. But management just loves that it’s all GUI, and “simple”. There is such a thing as oversimplification to the point of inefficiency, and a lot of this stuff is past that.

            /rant

            1. Freya*

              I’m going to hazard a guess that the GUIs aren’t designed to be accessible, either

              (I go nuts when forced to use an interface that is mouseclicks-only, no reasonable tabbing from field to field, because the easiest way to do repetitive tasks without making my wrist sore is to use the keyboard)

          2. Chirpy*

            THIS!! The people at corporate have no idea what functions store level people actually need, and when someone at store level submits a suggestion to the ideas team, it gets shot down because they don’t even understand the problem that the store level person’s suggestion would fix.

            1. rebelwithmouseyhair*

              This, yes. I was being required to use software that didn’t have a spell-check. I’m a translator, and perfect spelling is a base requirement.
              The software also didn’t let me go back to alter a previously translated sentence, or even check what I put so I could translate a term the same way each time. But it was nifty because it meant that each (poorly translated) sentence would then appear in the right place on the website, without needing anyone to work on the layout. Since the layout guy didn’t speak English, this was seen as a major plus. We the translators had been saying to just let us check his work but that was not on apparently.

        2. Seeking Second Childhood*

          The amount of time I’ve lost tracking down source files because someone didn’t port over the field for author…

    2. Clisby*

      Long time computer programmer here. I highly recommend the AI Weirdness blog by Janelle Shane. In a TED talk, she said something like “Computers are great at doing what we tell them to do. They’re not always great at doing what we want them to do.”

      1. MigraineMonth*

        “The good news about computers is that they do what you tell them to do. The bad news is that they do what you tell them to do.” –Ted Nelson

      2. Kit*

        Janelle is really excellent and also deeply aware of the flaws with LLMs and the modern ‘AI’ trends, I can cosign her work as being both informative and entertaining.

    3. Anonynonybooboo*

      Adding a link below that will go through moderation, but: Navy Seals fooled AI by walking around in boxes and pretending to be trees.

      With decades in IT under my belt, I’m at “Decepticon Toaster” level of wary about AI.

        1. JustaTech*

          That was delightful, thank you for sharing!

          I remember a presentation by one of the folk on the Mars Rover team (early rover, not a current one) who said that an alien could tap dance on the rover and it wouldn’t notice because it was programmed to look at rocks.

    4. PCs lie don’t trust them*

      I literally just had to have a talk with my team about not trusting Microsoft when it says “Azure groups have been added successfully”. I have come across several tickets where my team said the work was done, only to find out the MS admin center lied. I had one ticket where I had to add all the groups one by one because otherwise none of the groups were being added.

    5. Meep*

      I don’t mean to be impolite to OP, but a lot of people who are accused of using AI to write… lack basic grammar/spell-checking. OP would be better off getting Grammarly and rereading their work before submitting if this keeps happening.

      1. A robot, apparently*

        Definitely willing to entertain that possibility, but it’s unlikely that grammar and spelling were substantial issues. I have over 15 years of full-time copywriting and editing experience, and falling flat on the basics has never come up in my reviews. Plus, I imagine there are more productive ways to raise proofreading concerns!

      2. LemonLime*

        @meep

        What? Uh…she’s a copywriter…I really doubt she struggles too badly with basic grammar and spelling. That’s actually insulting.

      3. Arrietty*

        AI typically has perfectly fine spelling and grammar; that’s one way I can tell when someone used AI to write their job application (comparing it to their covering email, which they DIDN’T write with AI and therefore is riddled with errors). It’s not high quality writing by any means, but it’s not egregiously poor grammatically.

        1. AGD*

          I work on a college campus coaching undergrads, and THIS. One of the giveaways of ChatGPT, along with the content being repetitive and not adding up to a whole lot of real meaning, is that the grammar and spelling are 100% standard. Real student work that is excellent shows evidence of a lot of thought, and usually has a few typos or missing words.

          1. StephChi*

            Grammar and mechanical errors in their work are one of the ways I know when my high school students have used AI to complete assignments. I can also tell they’ve used it when they write at one level on handwritten assignments, but their typed assignments read as if the student has a master’s degree. I know my students’ writing, so I can tell when they’ve written something themselves, or if they’ve had someone else do it, or cut and pasted from the Internet/used AI.

            1. allathian*

              Yup.

              That said, given that many kids learn to type at the same time as they learn to write by hand, I saw some research a few years ago that said that provided a kid is a reasonably competent typist, typing is so much easier on the brain than writing by hand is that their writing is up to *two grades* better when they’re typing than when they’re writing by hand. Writing by hand is a useful skill and it certainly improves manual dexterity, but the cognitive load when you write by hand is higher than when you type. But certainly if a middle schooler hands in a text that looks as if they had a Master’s degree in English, they’ve undoubtedly cheated.

              1. Nightengale*

                I wish kids were taught typing here at the same time as handwriting.

                I work with many many children with disabilities affecting handwriting and it is an ongoing and constant battle to get them permission to type stuff and typing instruction. Since I started fighting that battle on my own behalf in the 1980s before all this tech was readily available, the chronic resistance has gotten really tiresome.

                Relatedly, I often feel like typing is an even stronger language for me than talking.

        1. allathian*

          Everything is claimed to be AI these days. And depending on the definition, I guess it is. Grammarly’s been taught by showing it a lot of samples of both correct and incorrect text, so it’s got a language model to base its judgments on.

          1. Zephy*

            NB I have done zero work to verify this, but:

            I recall seeing an article a year or so ago, right around when ChatGPT was widely unleashed upon the world, about students catching academic dishonesty charges for using Grammarly because it’s “AI.”

      4. JSPA*

        This seems…out of left field? Projection? Based on niche experience (e.g. GPT being used in desperation by students with limited English language facility, then mis-tweaked to make it their own)? A pitch for grammar software?

        There are some “tells” for AI generated writing (mixed styles, overly-intense / hackneyed / derivative style, lack of style, “hallucinated” facts, overly-bold or overly-hedged assertions) but “grammar and spelling errors” are not one of them.

      5. Keymaster of Gozer (She/Her)*

        That’s a rather off the wall comment, and rather impolite to the OP. Their very job is writing!

      6. rebelwithmouseyhair*

        Apart from being unkind, it’s absolutely not true. I was tasked once with proving to a client that we had not used Google translate (which is basically AI for translation). I told the project manager that it was perfectly obvious that the translator had not used it, because there were too many spelling mistakes. Google makes mistakes that are true howlers but all the words are spelled properly. AI does also produce sentences that are grammatically correct, even if it’s total nonsense.

    6. Still trying to adult*

      So true.

      AI exists based on models: representations of the real world, but not the REAL real world.

      Any model of anything has its limitations.

      Hundreds of years ago our collective model of the universe was that Earth is at the center, and everything else revolved around it, in circular patterns.

      Then science happened, and we have better, more accurate models of the universe, earth, Sun, moon, etc motions.

      Yes, you should push back on this, with Alison’s examples, and question the validity of the AI ‘detector software’ they are using.

      I would find their decision personally and professionally insulting. Though I would also be of two minds here: challenge them to justify the statement, or just simply walk, as Keymaster says.

      Damn big red flag that they’re accusing you of this.

  2. Kittybutton*

    I would love to see Alison do a series where she prompts ChatGPT to give advice to letter writers…and then comments on the quality of advice and shares her own

    1. online millenial*

      Given that generative AI is built on stolen work, exploited labor, and ecological devastation, I sincerely hope she *never* does this.

      1. Respectfully, Pumat Sol*

        Yes, same. Generative AI is “fun” but at the expense of many who have had their labor stolen. I can’t get on board.

      2. AnonInCanada*

        This. Every word you type in a Google doc or a Gmail app will eventually find its way to Google’s Gemini bot to “aid” its AI. It definitely needs it. But it’s still creepy how technology invades every aspect of our lives.

        1. But what to call me?*

          I hadn’t realized google docs was doing that, though I suppose I should have. In that case, does anyone know of any other free and reasonably convenient ways to make frequent backups of documents in progress without keeping a flash drive plugged into my computer at all times?

          1. Volunteer Enforcer*

            There are alternatives to Google Drive provided by other companies: Dropbox and Microsoft OneDrive. Caveat: I know Microsoft offers Copilot as AI, but I don’t know if OneDrive docs feed into this.

            1. Elizabeth West*

              Copilot has been pushed into Microsoft 365 now; it’s on your ribbon and in your document and Teams and I bloody hate it. I refuse to use it as a matter of principle and am actively engaged in begging my IT person to turn it off. :(

              1. Liz*

                Copilot with Data Protection (used by many businesses) does not use your data for training, nor does it store a history.

            2. Unions Are Good, Actually*

              Proton Drive is an option, too, both for document creation and file storage. It is paid, but that’s how it is when you don’t want to be the product yourself. Full disclosure: I am a Proton user with a paid account.

              They do seem to have an AI assistant, Scribe, but are committed to not selling or sending data to third parties, which is in line with their longstanding privacy priority. It also doesn’t train on Proton user data.

        2. AMH*

          I don’t trust AI or companies to use data to train those models fairly, but it’s worth noting that Google has specifically said they aren’t using Docs to train AI.

          1. AnonInCanada*

            > that Google has specifically said they aren’t using Docs to train AI

            Sure, they’re not. Also: I just won $10 trillion in the lottery (°)-(°).

            It’s getting to the point where we’ll have to pull out our old DOS PCs, connect them via phone lines to one another like the good ol’ days, and run ancient software that at least we know for certain isn’t calling the mother ship at Microsoft, Google, Apple, Meta etc. and letting them know more about us than we know ourselves.

            1. AMH*

              I mean, everyone has to balance their own risk assessments and trust in big companies and act accordingly, I just think it’s important to not claim definitively and without proof (& against company statements to the contrary) that something is being used to train AI.

      3. Lacey*

        Same.

        Also, the fact that it’s built on stolen work is WHY the detectors can’t tell the difference.

        The OP’s previous work has likely been part of the harvested work used to make ChatGPT possible. Certainly the U.S. Constitution was.

        1. A robot, apparently*

          OP here: I do have enough work out there that it’s definitely not impossible. Weird to think that every new piece of work I do can be used to build a case that I didn’t do that work.

      4. Quill*

        Yeah seconding that this is not a cute, fun gimmick to explore. The recent explosion in use of AI generative language models has a lot of water and energy cost, on top of the issues surrounding intellectual property that we have known about since the last generative AI fad for images.

        1. just some guy*

          Also dependence on sweatshop labor to train its “inappropriate content” filters. People get paid $2 an hour to read the worst things on the internet and annotate all the ways in which they’re bad, and then they’re abandoned when they burn out.

      5. Arrietty*

        It’s also incredibly resource-greedy, and its increased use will contribute significantly to climate change.

  3. Princess Peach*

    As someone who has quite a bit of professional experience with large language model AIs, I have some thoughts on why this is happening.

    1. Many people cannot pinpoint what makes “good” writing. It’s why they hire someone to handle it in the first place. (It’s not a personal failing; they just put their energy into building a different skill set.) That means they may not be great at recognizing the hallmarks of generative AI, so they outsource that too.

    2. Some people are frightened of AI, but have little understanding of it. I’ve spoken with multiple people who don’t realize that “AI detectors” are also AIs. They’re trying to protect themselves from a thing they don’t want, but they don’t have the background or experience to do it effectively.

    3. Some people are very eager to replace paid humans with AI. If they can “prove” that the freelancer is using a chatbot, can they cut that budget line and use ChatGPT themselves?

    Going forward, could the LW include a line in their contract about not using generative AI? That might assuage some concern. Getting a couple testimonials from happy clients describing how they’re glad they picked the LW instead of using an AI might be useful too.

    1. A Simple Narwhal*

      I was wondering too about clearly mentioning ahead of time that they don’t use AI in their work. If they can be very up front about it and preemptively offer/ask what they will need to prove it’s not AI (such as version histories, etc) it might save some headaches in the future.

      It sucks though, they’re essentially being accused of cheating.

    2. Pastor Petty Labelle*

      I like this. Put a line in the contract that says you do not use generative AI in any of your work. That way the client knows up front you know AI is being used, but not by you. What they are paying for is you, not some machine.

    3. Nicosloanica*

      Ooh I like the contract line idea. It could be up-front in your bids and also on your invoices. That would at least make your stance clear so people aren’t wondering if you think it’s fine. I wouldn’t say that in every field but some, like translating and copywriting and art, where AI is already in use, it would be good (I have a professional notetaking sidehustle so I feel this).

      1. darsynia*

        I’d be wary of a contractual requirement not to use AI, not because of wanting to use it, but because I’d want to know how the use of AI is being determined. If that’s not spelled out, and you have it in your contract, a simple ‘we’ve run your work through various AI detectors and have determined you’ve violated your contract. We won’t pay you’ might be a very real possibility.

        1. learnedthehardway*

          Good point. I would mention it as part of the “selling points”, but wouldn’t put it in a contract, for exactly the reason you mention.

    4. New Jack Karyn*

      “Some people are frightened of AI, but have little understanding of it.” –It’s me, Hi, I’m the problem, it’s me.

      1. Cardboard Marmalade*

        I’m sure it’s woefully out of date at this point, having been published in ancient times (5 years ago), but for anyone feeling this way, I’d like to recommend the book You Look Like A Thing And I Love You by Janelle Shane. It’s accessible and helpful, but also I thought it was a pretty fun read (I disturbed the other patrons in a cafe while reading it once because I was cackling so loudly). I definitely feel like it gave me a good grounding from which to go on and read more tech-heavy research/journalism about AI.

      2. Hroethvitnir*

        This article is great! I’m a moderately competent end user with no backend experience and only the vaguest understanding of what, exactly, is being called “AI”. This article on LLMs is super interesting and did clarify some things (from someone who isn’t innately anti-machine learning but understands exactly how they don’t work):

        https://matt.si/2024-02/llms-overpromised/

    5. The golden typewriter*

      Once as a test, my brother wrote a poem, and then asked AI to write a poem about the same topic. After some deliberation, we correctly identified it. There’s something about human writing that just has a soul…
      Although to be fair, it’s hard to find soul in a business report. I can see why there are people who feel like they need to check if ChatGPT wrote it. OP musta felt like they got slapped.

      1. Somehow I Manage*

        My friend was reviewing peer-written papers for a professional organization we both belong to and he said that one paper was very obviously generated by using ChatGPT. Why? Because there was no personalization or personal voice in the paper at all. That’s another example of the “soul” of the writing.

        1. MigraineMonth*

          Another way to tell is that they’re often bullshitters.

          You know that really confident guy who always has the answer to every question, but never seems to actually *get* to that answer in all their declarations that they know it? Or who says completely reasonable things and completely ridiculous things with the same level of assurance? It’s not that he’s trying to lie, he just doesn’t care *at all* if what he is saying is true or not.

        2. EditingBadWriters*

          I’ve edited many people who are awful writers and actually do that naturally. They have other tells, though.

    6. 3-Foot Tall Inflatable Rainbow Unicorn*

      As someone with a writing background, I have an even more cynical thought on why this is happening:

      1) They want to reduce the payment to LW

      2) They want to not pay LW at all

      I will bet cash money that the next thing coming from this client is some variant of “I paid for writing labor, I have completely unreliable evidence you did not do the labor I paid for, therefore, I will not pay.” with optional “But I will still use the writing you sent me.”

      1. A robot, apparently*

        OP here: I must say, I did not get the feeling that the client was trying to get out of the contract. I was believed, reassured, and paid in full when I made it clear that AI is in no way part of my process. I think they had either been burned before, or were trying out a new tool without sufficient background.

      2. Wilbur*

        This sounds like a cost-saving idea from someone who has not used AI and does not perform this work. “Why don’t we just use AI? I bet OP is doing that already.” People always throw out ideas whenever something is in that Hollywood MacGuffin territory: mainstream enough for people to know about but ineffable to the general public. Nanobots, gene editing, additive manufacturing, AI, etc.

    7. nnn*

      I feel like they wouldn’t need to prove the freelancer is using a chatbot to cut that budget line and use ChatGPT themselves? They could just…use ChatGPT themselves.

      Even if they have to complete the contract with the freelancer, surely there’s nothing stopping the employer from “writing” another thing themselves?

    8. A robot, apparently*

      OP here: That’s a fantastic idea. It’s clear that I do need to have this conversation with my clients. I do hate that it feels a little “doth protest too much” to bring it up when nobody asked, but it makes sense to get ahead of it. I’m a VERY from-scratch copywriter, so I can make it clear that I don’t even use AI for outlines or research.

      1. allathian*

        Yup. That said, someone on my team took an “AI in comms” training and they pretty much said that nobody should write a 500-word executive summary of a 10-page or longer report anymore because AI does it much faster and catches the main points. A few minor tweaks are sufficient. I thought that was rather interesting.

        1. Aqua*

          It may also capture the exact opposite of the main points, or capture points that weren’t there, or not capture points that are particularly relevant to your context

    9. Thomas*

      “Going forward, could the LW include a line in their contract about not using generative AI?”

      That seems like a terrible idea – you just gave clients an easy way to refuse to pay by putting your work through “AI detectors” until one says it was AI-written.

      If anything I’d go the other way – contract wording that more or less states you are paid for the results and how you got them is immaterial.

  4. quercus*

    Maybe not the direction you want to go with them, but why should they care how you came up with the deliverable? This isn’t a college exam or licensing test, after all. If the text is good, then they should be happy. If it’s bad, they should be unhappy and get a different consultant.

    And if they really can’t tell the difference between your work and spicy autocomplete “AI”, then they should probably stop paying you and just run ChatGPT themselves. (And this is just a variant of the common ‘why should we pay you for something so simple we could do it ourselves’ complaint to consultants/freelancers, so OP may already have some experience dealing with that).

    1. T.N.H*

      For one, you can’t copyright work generated by AI. There are also many companies who state affirmatively that they don’t use it, so this would violate their own policies to readers/customers.

      1. Nicosloanica*

        My boss would also object to paying a human full price for something a software program generated for free (even if the human “looked it over” first).

      2. londonedit*

        Yeah, I work for a publishing company, and our legal position is that we won’t even look at anything that’s been generated by AI, let alone use or publish it. I had an author send me some copy ideas a while back and they just happened to say ‘I ran it through Chat GPT’ and I had to say nope, sorry, cannot accept any of this, you’ll have to go back and write something completely different.

    2. Fierce Jindo*

      I’m not at all defending this company, but there are lots of reasons not to want someone working for you to use AI. These include that the algorithm now owns your proprietary information and the massive ethical and environmental problems with AI.

      Of course, the company’s AI detector probably has all of those same problems, including giving your info to the AI detector company, so they’d be on shaky ground in this case.

    3. Cetetera*

      I’m quite pro-AI in general, but one of the issues with LLMs is that [i] you can’t really be sure it’s not copying some extant text word-for-word (or close enough for legal action) and [ii] it seems like the legal burden of said copying is sitting with the end users right now instead of the firms hosting the LLMs (a massive mistake, IMO), although TBD how that shakes out in the long run. If I were running a firm, I would not want to assume the potential legal liability for using ChatGPT or other generative AI products in their current form.

      AI algorithms definitely aren’t the way to go about trying to prevent this, however.

  5. br_612*

    The AI detectors are causing a HUGE problem for college students. Especially if they use Grammarly, which some of their colleges are telling them to use. It’s a whole entire nightmare for them.

    I’m a writer myself, in an industry where many things are very formulaic and use a lot of symmetry in paragraph construction (so the reviewers know exactly where to find the information they’re looking for). And the other sections of the documents I write that are less formulaic are heavily cited. I’m guessing most of my writing would pop as AI generated, until and unless someone reads closely enough to realize I’m setting up scientific or logical arguments a computer couldn’t (which would depend on them knowing the limits of AI, and I think it’s very clear a lot of people don’t).

    1. Nonsense*

      Oof, yeah, Grammarly basically went in the toilet overnight. I have a couple writer friends and I mentor college students, so I’ve had a front-row seat to just how awful its suggestions have become. I flat out told my mentees to stop using Grammarly and trust their gut, because they’re right about 70% of the time while Grammarly was coming in at less than 15%.

      1. Lacey*

        Yes. One of my friends is a professional writer and one of her clients wants her to use like 90% of Grammarly’s suggestions (I guess it tracks how many you use?), but she can’t because the suggestions are SO bad.

      2. Quill*

        One of the reasons I’m glad I got a free install of Word out of my last job is that Google docs’ autocorrect now looks like it is being run by some sort of letter-association slime mold.

        1. Lizzie (with the deaf cat)*

          Lord Running Clam, the Ganymedean slime mould in Philip K. Dick’s novel Clans of the Alphane Moon, would like a word.

    2. Minimal Pear*

      Yes, I remember a few months ago there was this young woman whose case about this exact thing blew up! I think Grammarly uses some AI when it suggests wording, or the writing style it recommends is a lot like AI writing. So her school told her to use Grammarly, she did, and it pinged as AI. I think she was pursuing legal action?

      1. Hyaline*

        I mean, at this point Grammarly has a generative AI component and will basically write your essay for you (poorly). It’s not just suggestions and improvements anymore.

        1. NervousHoolelya*

          I’m a college writing center director, and I’ve been counseling the students we work with to ONLY accept small, discrete recommendations on a case-by-case basis, and NOT to use the “make this paragraph sound more X” options. For the populations I work with, those small recommendations tend to be fairly accurate (but the more sophisticated the writer is to begin with, the less accurate the suggestions will get!). The full-scale rewriting is straight-up generative AI, and it will trip the detectors.

          There are a LOT of problems with the detectors, but I test them regularly because the faculty I work with refuse to believe me when I explain that they are problematic. Alison’s links are over a year old, and the situation has changed a bit since then. I don’t think any one detector is accurate, but a piece of writing that trips five different detectors does raise some questions when working with college students. It shouldn’t result in an automatic accusation of academic dishonesty, but it SHOULD prompt a conversation asking the student to verbally explain the concepts in the paper.

          None of that helps the LW, though, because professional writers are operating at a much higher level of written sophistication than the vast majority of college students.

          1. Unions Are Good, Actually*

            I used to work as a reference librarian at a university, and mostly provided on-the-spot research help to undergrads. A lot of them would ask about Grammarly, and I would always have to tell them to a) ask the Writing Centre and b) that I couldn’t recommend any particular app or tool I’d never used. I always tried to send them off to the Writing Centre so they could, you know, actually learn how to write… but they always wanted to just use an online tool.

      2. Soup In Arms*

        I was just coming here to mention exactly that! If I remember correctly, she didn’t even use the things Grammarly had suggested, but it still put an “AI footprint” there so she got dinged for it.

    3. Excel Gardener*

What a mess. I think we really need to pivot away from the use of detection tools. Unfortunately, the solution is probably going to have to be something like requiring students (and, in the workplace, professional writers) to write their pieces in cloud software that tracks changes and can verify the drafting process was a human one, rather than a copy-and-paste or even a verbatim manual retyping of an LLM response. But of course, that’s on employers and schools to implement, not on students and freelancers.

      1. Bored at work*

Some colleges already do some version of copy-and-paste detection. It’s not foolproof, as obviously you can type over what you’ve copied and pasted to make it look like it was written organically, but I prefer this to AI detection by far.

      2. Hyaline*

        I can say wholeheartedly that, teaching writing, I do not need AI detection “tools” to tip me off to student use of generative AI. I’ve used them to help colleagues gut-check and learn the patterns that crop up frequently in AI writing, but they’re far from a slam-dunk and they “prove” nothing. Honestly, the answer for colleges is mostly in improved pedagogy and in some cases going back to old-school methods and *gasp* trusting professionals who teach students when they say “this student did not author this piece for XYZ reasons” (after all, we could and did catch plagiarism before TurnItIn software).

      3. Orv*

        Some professors I know have resorted to having students write essays longhand, in class, in blue books, the way it used to be done. It’s a lot of extra work but it does ensure no AI was involved.

        1. NervousHoolelya*

          While it may ensure that no AI is involved, it’s a massive accessibility violation.

          1. Orv*

            Other accommodations have to be made for people who are unable to write longhand, of course. This does raise the issue that those students might cheat using AI, but nearly everyone has accepted that individual disability accommodations come with a heightened risk of cheating and there’s not much to be done about it.

          2. Artemesia*

            You can arrange for students who cannot write longhand to write on prepared computers that don’t have internet access and are sterilized. I did that for qualifying exams for students re-taking exams 25 years ago.

In an age of massive cheating, doing exams in class in blue books is often the only way to assure the student has actually mastered the material, especially in advanced courses. There is a reason high-stakes exams like bar exams are not take-homes.

            1. Liane*

              Or you have the longhand writing done by real, live, human proctors dictated to by the student needing accommodations. This is one of the things I did for blind students in my work-study job for my university’s department that handled accommodations back in the 1980s.

              1. Nightengale*

                As someone who is sighted and cannot handwrite very much – these are not equivalent accommodations for many of us. I type fluently. A manual typewriter would work for me, it doesn’t have to be a computer. I cannot dictate fluent text to a human scribe.

            2. Arrietty*

              Out of interest, why does the book need to be blue? (Sincere question, I suspect this is a divided by a common language situation.)

              1. Bumblebee*

                A “blue book” is a little notebook thing that students buy at the college bookstore. They are invariably blue and seem to be the same exact product sold at every university! So, colloquially, they are “blue books.”

                1. Stuff*

                  Every school I’ve been at (so, three) has had them available in both blue and green, with green being the more popular choice. Interesting that elsewhere blue ones are preferred.

              2. TeaCoziesRUs*

                Think 8-10 sheets of A4 / letter sized lined paper bound in a flimsy paper notebook. It’s cheap, has enough pages for one very hard essay or a few shorter essays, and takes up very little room.

          3. Hyaline*

Yeah, we managed this accessibility concern before and we can do so again. Campuses already have testing facilities for the many students who require additional time or reduced distractions.

          4. Ellis Bell*

            Where written exams are required, yet some students have a computer access arrangement, you simply disable cheating avenues, make it offline and disable spell checks etc. It’s not difficult.

            1. Orv*

              It depends. Ever since COVID, we’ve had students where the accommodation mandated by DSP was that they be allowed to take the exam from home, so they weren’t exposed to potential infections. It’s not really possible to provide a sterile environment in that case.

              1. Orv*

                Err, sterile in the cheating sense, that is. I just realized that sentence was ambiguous. :)

              2. anxiousGrad*

A couple of options even then: you can proctor them over Zoom and/or make the test timed on Canvas, with a window tight enough that they don’t really have time to look through their notes. They could still put a cheat sheet behind the computer, but it reduces the ways they can cheat.

                1. Banana Pyjamas*

                  You could also administer the test on Moodle. It allows you to see if they click out of the test, internet connectivity, how long each question is taking, etc.

                2. Orv*

                  That’s true. There are some serious privacy concerns involved in proctoring over Zoom, though, especially if the student shares a home with someone else. So while it’s done fairly often, it’s officially discouraged.

          5. learnedthehardway*

            Even for students who do not have any accessibility issues, they just don’t have the writing skills or speed. I have seen this with my teenager, who really struggled to get his chemistry tests done this year, because he just does not write quickly. He was incredibly frustrated, because he knows the material, but simply couldn’t get it down on paper fast enough. Thankfully, his teacher knew that he knew the material, and had made allowances for the class (because everyone was having issues), but I was really surprised to realize that it’s an issue for a lot of kids.

          1. Orv*

            These are in-person exams. (We don’t allow remote exams anymore except when there’s a disability accommodation.) So if they want to memorize an entire AI essay so they can write it out longhand later, I guess the more power to them. That sounds like more work than just writing it. ;)

        2. rebelwithmouseyhair*

          The school I got my master’s from has always had students take exams in the same conditions as in the 19th century, using pen and paper, or pencil, rubber and paper when the teacher was lenient. I said that it didn’t test for research skills, and the teachers said that the end results were the same, whatever conditions the exams were taken in, so they opted for the cheat-proof way.
          I don’t like it because then the teacher’s appreciation of your work also depends on your handwriting. Everyone has their own distinctive hand, and so the teacher knows whose copy they are reading from the get-go.

          (Just wondering whether students nowadays have as many handwriting quirks as we all did, given that they don’t write by hand nearly as much…)

      4. Ellis Bell*

        It can be very useful to be able to see previous drafts, I think this is a great idea. My old newsroom’s very creaky software did this years ago because it was quicker for the subs and editors to see previous versions than to ask us questions. As an English teacher, a huge issue I have is students who won’t put the effort into drafting and redrafting, (you can tell from the finished product that they haven’t redrafted, but they don’t realise how obvious it is) so I think it’s a bonus.

    4. Salsa Your Face*

      I know of students at a particular online school who are finding themselves between a rock and a hard place on this one. Evaluators have started cutting corners by using Grammarly to review submissions for “professional language”–if it reports too many corrections, the assignment is rejected and kicked back for additional editing. But if the students run their submission through Grammarly before submitting it and accept too many of its corrections, their submissions get rejected by AI detectors.

        1. MigraineMonth*

          I would be LIVID if I found out that I was paying college prices for my writing to be graded by Grammarly.

      1. Paint N Drip*

Ooooooh this is infuriating!!! I can see where those students are just… out of luck, even when they’re trying. Seems like lazy teaching, TBH.

    5. Ginger Cat Lady*

      In grad school about 18 mos ago, I got called into the dean’s office for a talk about “academic integrity” after one of my papers came up as more than 30% plagiarized on Turnitin or whatever they were using.
It was a literature review. The only parts marked as plagiarized were the citations and the places where I quoted authors appropriately. Because it was a lit review, there were a lot of citations; that part alone was more than 30% of the length of the paper.
      I was so pissed to be called on the carpet and lectured about integrity when neither the professor OR the dean bothered to open it and see what was marked as plagiarized.

      1. Hyaline*

        Wow that is painful–none of those people know how to actually use TurnItIn! It’s color coded with links to the original source material for goodness sake!

        FWIW I use TurnItIn frequently with my (very underprepared, very clueless) college students to help them understand plagiarism vs appropriate use because the color coding is so clear!

        1. Banana Pyjamas*

          That surprises me. We were required to use Turnitin in high school. Body paragraphs were required to follow a format:

          Assertion
          Evidence, being a direct quote from your source material, including citations
          Elaboration
          Second evidence
          Second elaboration
          Conclusion

          Turnitin was constantly flagging the evidence with citations as misuse. Usually at least 30% of any assignment was flagged. Luckily my teachers actually read the assignments rather than opening academic inquiries. As someone with anxiety though, it was horrible.

          1. Freya*

            For one particular uni assignment, I was STRESSED about whether I’d done something wrong… because I got 0% flagged. Not even the bibliography/citations got flagged, and I’d cited the textbook!

      2. Myrin*

        That is infuriating! How did they react when you pointed this out? I hope they were at least contrite about it!

        1. darsynia*

          Our ‘Dean of Students’ decided that since I was crying after losing my father to a heart attack, and someone else saw me crying (alone, in a corner of the courtyard at night), I had to sign a thing to swear I could be expelled if I was ever seen crying in public again.

          This was a religious college.

          1. Bumblebee*

            That is illegal. I’m sorry that happened to you!

Also, I taught a higher ed class last semester and used Turnitin just out of curiosity. Although I told it not to flag any references or direct quotes, it did exactly that! There may be things I can do to make it work better, but that alone made me decide I would rather err on the side of trusting my grad students than subjecting them to robot graders of any kind!

      3. Hawkwind1980*

        This is why I think anyone who has a policy of using a certain percentage as the marker for whether a student gets dinged for plagiarism is ridiculous. I’ve given an annotated bibliography assignment for which I expect the typical SafeAssign plagiarism percentage to be around 50% because the citation for a particular source in a particular format is going to be identical for anyone doing it correctly. Sometimes it turns out as 100% plagiarized if the student did the citations but not the annotations. Guess what? The latter submission still earns a 50 because it was half of what the assignment asked for.
        Essays that are effectively a copy-paste of a mishmash of articles are another story.

      4. Artemesia*

        This is gross incompetence and should have been escalated above the Dean and Professor and perhaps to the school paper.

      5. anon24*

My last English class, everything had to be run through Turnitin and I couldn’t have more than 20% similarity or my papers would be rejected. I also had to have a certain amount of quotes and couldn’t go above a certain word count. I was riding the line of rejection at 19% similarity, I had maxed out my word count, and the only things Turnitin flagged were the required direct quotes, my bibliography, and, for some unknown reason, every usage of the word “the”.

        1. Hawkwind1980*

          Your situation as described is why no one in my department has an “allowed” percentage of similarity. What we have instead are directives about how much of a paper can be quotations, paraphrases, and summaries (usually a total of 20-30%), but that’s not the same as a plagiarism detector’s percentage since a correctly done citation should be identical no matter who uses it.

        2. Sola Lingua Bona Lingua Mortua Est*

          Everyone’s using the same indefinite article. Must be cheating, can’t be a limitation of the English language!

        3. Quill*

The fact that it flags “the” makes me wonder if Turnitin’s methodology was actually constructed by anyone, or if it’s some kind of statistics soup that they hope they can throw a lot of numbers into without human review and get actual analysis out of.

      6. Dana Lynne*

        Retired English instructor here.

        This is so wrong. I am so sorry that happened to you. What lazy admins or profs you had. Yikes.

      7. Fíriel*

        I think I was above 30% on TurnItIn once in undergrad because it was an assignment with several short questions, and so every student in the class had copied all the questions into the document before we started, and so it flagged *saying the questions we were answering* as plagiarism from whichever student answered them first. And then there were citations and quotes on top of that.

      8. Copier*

Yes, something similar happened to me on Turnitin! I’d done my undergrad in a writing-heavy field and postgrad was much more sciency, and they clearly didn’t understand what the plagiarism results really meant.

    6. Red Reader the Adulting Fairy*

      Yes, Jesus wept. I had an instructor inform me that he was positive an entire paper I submitted was produced by AI, and I was like “Well, it wasn’t, but I don’t know how I can prove that as I don’t have an extensive preparation system of notes and outlines for a 3 page paper in a 101 class, so what can we do here?” and he goes “Uh, rewrite it by Friday.” So I rolled my eyes and did, and shocker, got 100% on the second round.

    7. noncommittally anonymous*

      Yes, there was a horrible case of a professor at a small school in Texas(?) who declared that everyone in a particular class had used ChatGPT to write a final paper and failed the entire class, preventing them from graduating, because he used an online AI checker. Amusingly or infuriatingly, he kept calling it “ChatGTP” throughout the article. I’ll see if I can find the original press coverage.

  6. Smallbusiness*

    I guess it’s time to more thoroughly track my steps and versions and probably invest in document control software.

    1. I strive to Excel*

      FWIW both Google Docs and Microsoft Word have track changes built in (I’ve used Google’s before when I accidentally overwrote a huge chunk of info I wouldn’t have been able to get back easily, it was such a relief). You might have to specifically get in and turn it on, but that can at least be a start.

      1. Observer*

        Yes. Pretty much any professional level writing / editing software has that ability. It’s pretty much table stakes at this point.

        You may want something more, but this is a really good starting point.

      2. A robot, apparently*

        OP here: I did use Word, so it’s good to know that these features were there all along! Thanks!

    2. DisneyChannelThis*

Google Docs has it on by default. If you click the clock icon (the one with an arrow circling it) near the top, it will show you the dates and times the document was edited.

In Word, if you have it sync to OneDrive, I know you can click the title of your document at the top, a dropdown appears, and version history should be an option.

      Also if you save v1, v2, v3 as you go in a folder, you can screenshot the date/time column as proof it wasn’t a copy paste out of AI.

      1. 653-CXK*

        Yes…whenever I have a new version of a file, I always have v1, v2, v3 and so forth, in case future updates go kablooey.

  7. Peanut Hamper*

    This is one of the many (many, many) reasons I’ve switched to doing all my rough drafts in plain text files and then using Git to keep track of all the version changes. Also makes it easy to find something I deleted if I want to go back and pull it back in again. It’s nice to have that history, but only have the latest draft in front of me to work on.

    1. a trans person*

      I love using git as a solo user for writing text. No merges or anything complicated, just a linear revision history. I’ve been doing that for my personal projects for years. Wish it were more accessible to non-programmers.

      1. Orv*

        There are GUIs that make it somewhat easier, but git does provide enough complexity that you can get yourself into perplexing problems.

        I always found Mercurial somewhat easier to use, but it has decidedly lost to git at this point.

        For that matter for really basic stuff with no merges, older solutions like SVN or even RCS work just fine.

        1. Peanut Hamper*

It helps if you have an online repository to push to. If you don’t like GitHub (because of Microsoft), there is always Codeberg (codeberg.org), which is non-profit.

          If you are not doing branching, you can keep yourself out of trouble pretty easily. It’s mostly just

`git status
git add -A
git commit -m "Commit message"
git push origin main`
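For anyone who wants to try this, here’s a rough sketch of the whole solo-drafting loop from scratch (the file name, commit messages, and /tmp path are all just placeholders, and the git config lines are only needed in a brand-new setup):

```shell
# throwaway demo: a solo git history for a plain-text draft
mkdir -p /tmp/draft-demo
cd /tmp/draft-demo
git init -q
git config user.email "you@example.com"   # required before the first commit in a fresh repo
git config user.name "Your Name"

echo "First draft of the piece." > draft.txt
git add draft.txt
git commit -q -m "First draft"

echo "Second draft, tightened the opening." > draft.txt
git commit -q -am "Second draft"

# each entry is date-stamped, oldest at the bottom
git log --pretty="%h %ad %s" --date=short
```

That dated log is exactly the kind of receipt you could show a skeptical client: a timestamped trail of the piece evolving, rather than appearing fully formed in one paste.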

    2. Disappointed Australien*

      A friend of mine uses it in their project management job purely to track their numerous text documents. Sadly “track changes” in document editors absolutely sucks, especially merging multiple variations when several people suggest edits. There are tools for this but none work effectively on giant blobs of XML. Copy the actual content into a plain text file and compare.

      Project management, BTW, is one of those “document X, version 27, revision 18 (by Sam), approved by Chris, Bob and Sam (the other Sam), last updated 12/6/24” jobs. Times 800 documents.

      (also, far too many people enjoy going through and manually changing every occurrence of a text style, thus breaking both the document style and any attempt to compare versions or autoupdate the table of contents. If some level 3 titles have this style, and other things that might be level 3 titles have a mixture of similar things, what style should be used?)

  8. T.N.H*

    You could also tweak your writing slightly to ensure you won’t get flagged by AI detectors. There are a few big things they look for that you can deliberately avoid.

    1. Molly*

      Really?? I mean, if the Constitution gets flagged, it seems like anything could.
      Also, just what are these “big things” to avoid? I think it close to cruel to state these flags exist, without naming the flags.

    2. Fluffy Fish*

Suggesting a professional writer change how they write to increase the chance they won’t get flagged by notoriously, demonstrably inaccurate AI detection is not it.

The problem is AI and AI detectors, not OP or writers in general.

      1. T.N.H*

        I am a professional writer! Depending on the type of writing, they could make a few tweaks to lower their AI score without completely changing how they write.

        1. Fluffy Fish*

          I am as well. I would never change my writing to appease crappy software or a client that most likely is trying to get out of paying by claiming AI can do it.

          It’s hard enough to convince people that writing is a skill and not something anyone can do. We should be educating non-writers about the problems with AI and AI detection, not changing what we do. No matter how small or big the tweak may be.

      2. Quill*

        Also many documents produced for work purposes have specific formats and specific legal meanings of words. Changing enough to avoid detection may mean changing enough of the wording to no longer be legally binding or technically accurate.

For example, I’m confident an AI checker would flag any portion of an MSDS (material safety data sheet, the standard chemical hazards document), because there are a billion of those publicly accessible, and they generally present the same information in the same sentence formats with the same small pool of words, such as corrosive or irritant. Every one of those words is EXTREMELY specific in what it means from a legal and regulatory standpoint.

        1. Peanut Hamper*

          Yes, this. MSDS (which covered what you needed to include but not in what order) eventually became SDS, which were very specific about order. AI simply doesn’t know or recognize this.

  9. Justin*

I’m currently hiring and we have a little AI thing that tells us what percentage of the main qualifications each applicant has. But it’s always wrong! So I have to read the applications myself anyway.

    Our HR doesn’t like it either.

    Seems like money is being wasted. Oh well.

    1. Excel Gardener*

This is the problem with LLMs: they seem much smarter and more human-like than they actually are if you only interact with them occasionally as chatbots. So people keep trying to use them to do things humans can do, and then realize they’re not human.

    2. Banana Pyjamas*

      They AREN’T a myth of the internet! So many managers here don’t use a tool like that, I was starting to think they might not be a thing. Since job responsibilities are often listed as present participles I’ve been considering redoing my resume in the past progressive to improve my AI match. So for example if the job responsibilities say “managing interns” my resume would say “was managing interns.” Right now I don’t think I will, since I think it would come across strangely to any humans who read it.

  10. Crencestre*

Any chance that this client is hoping/plotting either to emotionally blackmail you into canceling or refunding your fee (thereby getting your work at no cost to the client) or to get you to drastically lower your usual fee for this work?

    1. Fluffy Fish*

      That’s my assumption.

      Or alternately trying to whip up some internal justification to not pay for professional writing moving forward.

      1. 3-Foot Tall Inflatable Rainbow Unicorn*

        I’m absolutely convinced this is what’s really happening.

    2. A robot, apparently*

      OP here: I gotta say that I really, really didn’t get that feeling. I was believed, reassured, and paid in full when I made it clear that AI is in no way part of my process. I really think they had been burned before and were trying to do some due diligence. There just isn’t enough information out there about what a flop AI detectors are and they got the impression that they had a reliable tool in their hands.

  11. The golden typewriter*

Ooh, this is a touchy subject for me, because I recently read a picture book I suspect was written by AI, and it made me so mad for multiple reasons. Anyways, AI work being passed off as human is a big problem, because there are people who do it. I would tell them that you would never entrust their project to ChatGPT, because it can be wildly inaccurate. Maybe demonstrate for them what would happen if you inputted “write a [teapot marketing] report for [Walt Disney Studios].”

    1. I strive to Excel*

      I recently discovered that a scented wax company used a probably-AI-generated trace of art from a small online game I play. This is a major company too, the wax melts ended up in Walmart for public consumption. Yay :(

  12. Having a Scrummy Week*

Any writer worth their salt knows that ChatGPT doesn’t spit out the best writing.

    1. The golden typewriter*

      Ha! The sad thing is, I see more and more news articles, websites and books that smell like chatGPT.
      Either that, or we all need to retake English class ;)

      1. Paint N Drip*

        No you’re right, especially the ‘churn’ online news articles that get passed around like mono

        Although if everyone took an updated class in middle life I wouldn’t think that was a bad thing! :)

      2. Orv*

I sometimes see web forum comments now that have the obvious structure of ChatGPT writing, which makes me wonder why they bothered.

        1. Zephy*

          They’re bots or plants. Some forums require users to engage with the community at a certain level before they gain full access to the site, as an anti-spam measure. Nice thing about ChatGPT is you can tell it to write a post in English about whatever and you don’t need to actually be able to read or write English yourself to post it. So, suppose you’re a Russian content farm trying to access an audience for whatever purpose…you see where this is going.

      3. learnedthehardway*

        Knitting patterns, too – which probably means any kind of technical manual, come to think of it. You have to be really careful now to not get an AI-generated knitting pattern, if you’re buying on a platform you’re not super familiar with.

  13. Hyaline*

    One missing piece here–was the client otherwise happy with the work? I’m wondering if perhaps not, as they went to the trouble of putting it through an AI detector. Maybe it’s worth starting there–was the client happy with the work, and if not, why was the client not satisfied? If they wanted something “different” from what you gave them, maybe their thought process (though incorrect!) was to consider that it was produced by AI, including following their hunch to an AI detector that they probably didn’t realize was bogus.

    Another angle, too: when it comes to writing, the AI problem is beyond merely frustrating, even beyond ethically problematic, and into legally dicey–you can’t copyright wholly-AI-produced work. So if your work is going to be copyrighted, your clients may have a sticky wicket of a problem–they might want assurance that the work is not AI-generated, but the only “tools” to do so suck. I know you’re taking this as a personal affront, and I would, too, but the ugly thing is, and I say this as a writer…I think we all need to get used to answering the question.

    1. A robot, apparently*

      OP here: I’m hesitant to think that they weren’t satisfied. Apart from edits to fine-tune some language around details that were unknown while I was writing the project, I got nothing but positive feedback and an invitation to work on future projects. As I said in my letter, my clients have been universally happy with my work.

  14. Somehow I Manage*

    Oh this makes me so angry on your behalf, OP. While sometimes I’d advise someone to just let something go because the reaction can make things worse, in this case I’d absolutely go back with any type of receipts you have showing document changes. Also the resources shared above related to the unreliability of AI detectors. And if I were in the situation, I’d probably lead with “I take my work and my professional reputation very seriously, and any allegation that my work is not actually my work is something I want to refute.”

    I’d love to request an update to this, please. I’d love to know the company’s approach if you do defend your work.

    1. A robot, apparently*

      OP here: I should have been clearer in my letter. I did defend my work, and was believed and paid in full. I did appreciate that they were willing to hear me out and honor the contract, as I had no way to definitively prove that AI had come nowhere near my work. I just think they had possibly been burned before, were trying to do their due diligence, and simply didn’t realize what a flop these detectors are.

      I really like your script for future interactions where I might not be so lucky with clients. Thanks!

    1. Angstrom*

      “If she weighs the same as a duck, she’s made of wood?”
      “And therefore…?”
      “A witch!”

  15. Butter and Lollipops*

    There are plenty of situations where imo it’s fine to use AI for an initial draft or outline for something, in the same way that you might use Google.

This doesn’t sound like one of them, though; I would be bothered by the accusation also. I wonder if this client is trying to cut expenses and hoping AI can somewhat adequately replace you.

    1. A robot, apparently*

      OP here: I honestly did not get the feeling that they were trying to get out of paying. They believed and paid me when I categorically refuted any use of AI. I think they were just trying to cover all their bases and just didn’t know how unreliable these tools are.

      I actually don’t even use AI for outlines. I feel like if I’m being paid for my experience, I’m also being paid for my full attention and personal approach.

  16. ragazza*

    I’m a freelance writer, and while this hasn’t happened to me, too many other writers have reported similar experiences. Some have suggested allowing clients to see the process on Google Docs or even screen-recording themselves as they write (!). I would never allow a client to see my writing process; I’m not an employee, so it’s none of their business, and it suggests I’d take their input into how I write, which I absolutely will not. I would take such an accusation very badly and probably drop that client. (I’m also older, so I could point out I’ve been a successful writer for about 30 years, wayyyyy before AI.)

    I think the people making these accusations aren’t professional editors or don’t have much knowledge of what makes good writing or what goes into it, so they use those tools. But if you’re using them to definitively tell you if someone used AI, you’re clearly not qualified to do your job, IMO.

    1. nnn*

      My writing process is dumping everything I can think of on the page in disjointed point form (sentences, words, links, a clever analogy I want to work in there somewhere), then annotating it with a bunch of notes like “make this less stupid”, then figuring out the optimal order to put the elements in, and then making an outline (by which point I’m also 85% done writing the thing).

      Proves beyond a shadow of a doubt that it’s not AI, but I’m not sure if I’d want anyone to see it…

      1. ragazza*

        Exactly! The process can be messy, and inviting clients in to see that is just asking for trouble.

        1. A robot, apparently*

          OP here: The idea of giving a client access to a beat-by-beat look at my writing process gives me hives! Why not let them get a good look at my dishes before I scrub them, while we’re at it, LOL.

    2. Spacewoman Spiff*

      Out of curiosity, how do you price your projects? (Asking because the idea of recording yourself working, or sharing a document with the entire history of your versions and edits, feels off to me. I don’t do freelance work, but I *do* write a lot, and every once in a while a piece just comes out correct, or near-correct, the first time, and I can imagine a client being annoyed if that were the case with a project I did for them. Or even becoming suspicious! When the fact is that when you’ve trained and worked for years at a skill, you’re going to get good at it, and sometimes that’ll happen. I’m reminded of people annoyed by how much a plumber charges for a fix that takes 20 minutes, not realizing that they’re paying for all the experience that makes the plumber so quick at the repair.)

      1. ragazza*

        I do it per project, based on an intuitive sense of how much effort it will take and the value. I don’t do hourly anymore because clients aren’t paying me for my time; they’re paying for my experience and expertise. It might only take me 15-20 hours to write a white paper I’m charging them $4500 for, but if they get even one or two clients from it, that’ll pay for my fee. Pricing yourself hourly means you make less as you get faster and more efficient. When I divide my earnings by my hours worked, it comes out to about $200-230 an hour, and most clients would choke at that rate. (If that sounds high, remember I have to pay for everything from health insurance to office supplies to taxes and Social Security, and I’m also not actively writing for 40 hours a week; more like 20-25. The rest is marketing myself, emails/admin, etc.)

        1. Spacewoman Spiff*

          Thanks so much for responding, this was really helpful! I was thinking an hourly charge would make no sense–like you wrote, the better you get the less you’d make–but didn’t have a clear sense of how pricing worked.

        2. Despachito*

          This.

          I recently had a gig in an area I specialize in, and I was able to complete it in 20 minutes. I’ll toot my own horn now, but I think most people would not be able to complete it at all, and for those who could, it might take several hours or even days (some pretty creative thinking was involved). If I charged for just those 20 minutes, I would be an extreme AH to myself, and it would devalue the expertise I had to put in.

          It is like the joke about the car mechanic opening the hood of a broken car and hitting the engine once with a wrench. The engine immediately starts purring, and the mechanic asks the client for 100 bucks. The client says “so much money just for one hit with the wrench”? The mechanic answers “the hit itself was 1 buck, the remaining 99 bucks are for knowing the exact spot where to hit.”

    3. Ellis Bell*

      Nah, you would never let an external person see the actual drafting process in all its messy glory. What you do is deliberately write and title a “first draft,” then a second draft, and so on, all of which are appropriate for external eyes to see, and charge them for the extra work if they want to be part of the drafting process. You could also polish up the actual drafts after the fact for requests like this, again charging them for the time it takes.

    4. Hyaline*

      Yeah, my professional editor seeing my track-changed document makes me feel vulnerable enough. I’m not interested in letting everyone and their brother (twitch-grimace-twitch) watch me write live or even see much of my process.

  17. The Coolest Clown Around*

    Maybe this is a controversial take, but does it even matter? Either they’re happy with the work product, or they aren’t. If there aren’t privacy concerns related to running the specific information through an AI generator, then why does this affect how much value they get out of the end product? The issue with AI for copywriting is, generally, that it produces inferior work, as far as I know… so who cares where it comes from if it’s of sufficient quality? If someone I hired used AI as a starting point and edited it into compliance with my standards, then why is that worse or better than writing the exact same text from scratch?

    1. Ginger Cat Lady*

      It matters because their reputation matters, and accusations like this can destroy a writer’s reputation.
      It matters because they need the income from writing, and false accusations can hurt them financially.
      It matters because the writer is human and their skills matter.
      Should it matter to the client? Maybe not. But to this letter writer, it absolutely matters and that is the perspective we should be addressing.

      1. The Coolest Clown Around*

        Sorry, I was unclear in my question – I understand why it matters to the author if the client is accusing them of lying about their work and/or refusing to pay their invoices. I’m more confused about why it matters to the client, provided the work product is satisfactory.

    2. Hyaline*

      One potential concern: copyright. Judges have ruled that AI-generated work cannot be copyrighted. Exactly how much AI involvement still allows a copyright is kind of a moving target, so “using AI as a starting point and editing into compliance” might fall short, since AI generated the basic framework.

  18. Jeanine*

    Good lord, I didn’t even think of this possibility. Being accused of cheating when you aren’t!! AI is going to be the death of us. We got along without it up until a year or so ago and now it’s in EVERYTHING.

  19. nnn*

    One of the more frustrating things in the world today (in general, not specific to OP’s situation) is there are people who want to use AI and people who don’t want to use AI, and there are employers who want their employees to use AI and employers who don’t want their employees to use AI, and people who think AI-generated content is good and people who don’t want to see AI-generated content, but they aren’t all matched up with each other!

    We need some way for everyone to rearrange themselves so AI-focused employers can hire AI users to produce AI content that is clearly marked so people who want AI content can find it, and employers who don’t want AI can hire creators who don’t want to use AI to produce human content for audiences who want human content.

  20. MeginMarketing*

    That’s incredibly frustrating! As a freelance writer myself, I can see how you feel threatened by the accusation.

    For what it’s worth, I’ve found myself on both sides of the ChatGPT conversation. In the past week I’ve had two conversations with clients where they were asking me to use some AI writing tool. I (hopefully calmly) told them that I flat out wouldn’t use AI to write things. Brainstorm? Maybe. Fact check? Also maybe. But it does not “write” with the same tone and quality as an experienced writer. In the same breath, they were telling me that they’re really underwhelmed by their previous content. How did they generate it? You guessed it: AI.

    I know that AI is just part of the landscape these days, and I actually think there are some legit use cases for it in the writing process (brainstorming, finding references, even fact checking), but I am NOT looking forward to having more of these types of conversations. Writing is already an underappreciated skill that most people think anyone with a keyboard can do, and now to have ChatGPT (and, I suspect, bad LinkedIn advice) fueling that fire is frustrating to say the least.

    1. nnn*

      I’d seriously reconsider using it to fact check, because it literally doesn’t have access to facts, but it’s very good at producing things that sound credible to people with only superficial to intermediate knowledge of the subject matter.

      A recent example: someone asked something like “Which year had the coldest July on record?” and it gave an answer with details (something along the lines of “1966 had the coldest July on record, with an average overnight low of 11.3 degrees,” though I made up those numbers). Then they asked “How far back do your weather records go?” and it said “I’m sorry, I don’t have access to weather records.”

        1. Orv*

          It can be really enlightening to ask it about yourself, if you’re known to any degree. My wife is a writer and it hallucinated several awards she’s never won, including some I’m pretty sure don’t exist.

          1. Nightengale*

            huh yeah I just tried this for the first time (posted below)

            It has me in a different field (both licensed fields) practicing in 2 states where I have never held a license and didn’t pull up any of the stuff I am actually known for in my field.

    2. Hastily Blessed Fritos*

      Fact check is probably the worst possible use of generative AI which rather famously has no notion of truth or falsehood and fails to answer basic factual questions. Finding references isn’t much better since it will just make up nonexistent but plausible sounding and correctly formatted options. Summarizing, or allowing natural language queries of a fixed and controlled data set, are better use cases.

      1. JustaTech*

        There’s a YouTuber/Podcaster I watch who does this sometimes in the middle of an episode and it drives me wild!
        Simon Whistler: he does like a dozen shows where he’s the reader but has writers (who are properly credited and thanked).
        He’ll have a question about something tangential to the script; he used to ask Siri, but now he asks ChatGPT, and while the response sounds a lot more like human speech, it is often just plain wrong. Siri was wrong often too, but it was much easier for him (and the audience) to spot it, often because Siri would say “I don’t know.”

    3. Quill*

      Yeah, I would push back on ever using it to fact check. It doesn’t work like Google, pulling whole chunks of text from websites; it works like your phone’s text autocomplete, going “let me guess which word is next.”

      Its plot summaries of blockbuster movies, for example, often include not just things that never happened, but switch to the name of a different famous movie halfway through.

      1. Nightengale*

        wow I just tried chat GPTing myself for the first time

        I’m a medical doctor in a niche specialty related to mental health. It thinks I’m a psychologist in a state where I haven’t lived since I was 2 weeks old and where I have never held a professional license.

        I have written/co-written several journal articles and am considered pretty knowledgeable within my field in certain areas. These did not come up on the search.

        I asked it where I held a medical license and it mentioned two states I have never practiced in, before trying to explain that a medical license allows me to practice psychology.

        I guess the good news is, if it can’t find me, it can’t come looking for me??!

  21. Ellen Ripley*

    Similar to the version history, you can save drafts at various points during your writing process as proof you did the work. My advisor recommended we do this even before ChatGPT existed in order to have proof of our work.

    Sorry you’re going through this, this sucks :(
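
    For instance, something like this gives you a dated trail of snapshots (a minimal sketch assuming you keep drafts under git; the filenames and commit messages here are made up):

    ```shell
    # Snapshot each draft as a git commit so every save gets a timestamp.
    git init -q article
    git -C article config user.name "Writer"
    git -C article config user.email "writer@example.com"

    # Save the outline as the first snapshot.
    echo "Rough outline: intro, three supporting examples, close." > article/draft.md
    git -C article add draft.md
    git -C article commit -q -m "outline"

    # Overwrite with the next stage and snapshot again.
    echo "First full draft, expanded from the outline." > article/draft.md
    git -C article add draft.md
    git -C article commit -q -m "first draft"

    # Print the dated trail of revisions.
    git -C article log --format="%ad %s" --date=short
    ```

    Even a plain folder of files named by date works; the point is having dated intermediate versions you could produce if pressed.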

    1. Fíriel*

      Yes! I am frequently grateful for the professor who made everyone learn to save versions so they would have evidence in case of a plagiarism accusation (look, see: you can tell I wrote this good essay because I wrote an earlier, crappier version!)

  22. Morgan Proctor*

    Hi, I’m a professional writer. My day job is writing, and my side hustle is also writing. Much like the LW, I pick up random freelance copywriting jobs for extra cash.

    LW, you do NOT have to prove your writing is not AI. Please, please don’t offer to show version histories. You also don’t have to educate this client on the inaccurate nature of AI “detectors.” That information is freely floating around out there, and your client can google it themselves.

    (Also, showing a version history wouldn’t prove anything. Anyone could paste AI-generated text into a doc, and that would show up in the “version history.” Please don’t do this, LW, you don’t have to prove anything.)

    You can simply fire them as a client. But they still have to pay. Don’t let them get away with not paying. I suspect this client probably knows you didn’t use AI, and is using the detector to try to get out of paying. Don’t let them! If you have a larger network of freelance writers, let them know this client does this, so others can avoid them.

    1. A robot, apparently*

      OP here: Thank you for validating the indignity of sharing a beat-by-beat of the writing process with a client. The idea gives me hives and feels like a time suck.

      Honestly, apart from this moment, they’ve been an excellent client, who believed and paid me when I refuted the suggestion. I think they had been burned before and just didn’t know how much of a flop these detectors are. I just worry about this coming up in the future with less reasonable people.

      1. Morgan Proctor*

        Yeah, sorry, you got some bad advice here. I think it illustrates the general disrespect writers get, even from well-intentioned people.

  23. Yes And*

    In grad school, a paper I wrote was flagged by automated plagiarism detecting software, for its too-close resemblance to other papers in its system. The assignment was a case study with the structure of the response prescribed by the assignment, so of course all papers submitted in response to it were going to have substantial similarities.

    Fortunately the professor knew what they had assigned and ignored the plagiarism software, but it was nerve-wracking at first and infuriating afterwards.

    1. Skytext*

      I had this happen to me! I was doing an online master’s program, and we had to write papers that they ran through anti-plagiarism software. If you scored, I think, 24%, you failed? So I ran my second paper through and got a percentage; not enough to fail, but still. So I checked, and it was due to MY OWN NAME AT THE TOP OF THE PAPER! That plus a couple of phrases that are common in accounting (the field I was studying). So of course my third paper got an even higher percentage, because now I had TWO previous papers with, you know, my own name on them. This, AI, it all sucks!

      1. LJ*

        That seems like a very misconfigured plagiarism checker. I remember Turnitin offered (at one point; maybe they still do) a service where you could pay to “pre-check” your papers against Turnitin, but it was explicitly stated that using this service wouldn’t add your assignment to their database.

        1. Skytext*

          It WAS Turnitin! I didn’t remember the name until I saw it mentioned in other comments here.

  24. Box of Kittens*

    I would also add this to your onboarding process for new clients, where in an initial meeting, you mention that you do not use AI to write articles. Another commenter mentioned putting this in your contract, which may be a good idea (although you may want to be careful with the wording if you use Grammarly or something like that).

  25. veebee*

    I’m also a copywriter and fully sympathize with your situation!
    I think besides pointing out how inaccurate AI checkers are, the other thing you can do is ask for half of the payment upfront and half upon completion. That, and/or charging a proposal fee. I’ve found this helps weed out the clients who may already want to undervalue or undermine the work I’ve done.
    I’ve had clients give me feedback that something “sounds like AI wrote it,” and I have to ask them what that actually means to them: is the tone robotic? Does it feel too long? Does it feel formulaic or predictable? Because that’s the advantage of working with a human: I can make edits based on how the client is feeling, even if they can’t exactly describe it.

    Good writing is so devalued right now, and it’s a struggle!!!

    1. Ellis Bell*

      Winner, winner chicken dinner. Weeding out the cheapos is definitely going to reduce these issues for OP.

  26. PotsPansTeapots*

    Ugh, no advice to add, just commiseration from someone who works in content marketing. I did some editing at my last agency and we flirted with using AI checkers, but stopped after it became clear it was producing way too many false positives. It didn’t stop us from getting in trouble once or twice with a client using their own.

  27. Deuce of Gears*

    Oh man, sympathies.

    I never thought my habit of writing rough drafts (or partial drafts) either LONGHAND or using a MANUAL TYPEWRITER would ever…be…hypothetically…useful?!

  28. Mockingjay*

    I’m a tech writer, and my team isn’t allowed to use AI tools, mostly because we work on government contracts with controlled info, usually manufacturers’ proprietary info on the widgets we procure and install.

    But my team has discussed AI usage and done some cursory research. It’s coming; soon there will be a “clean” AI tool sold to a government agency. There are some benefits; one tool my coworker looked at did a great job at organizing an outline and sections for a large document (which could free up time for content development and revision). But AI can’t replace context – the minutiae we pick up in meetings and in emails, and in lab and site work – the why we do things the way we do.

    Our goal is to have “business rules” for AI use before it is implemented, to limit copyright issues, ensure content is protected and vetted, and above all to protect our jobs.

    1. allathian*

      I also work for a government agency, even if not in the US. We have an AI policy where using AI with confidential or proprietary data is prohibited.

  29. Coverage Associate*

    A couple of weeks ago, I really thought that opposing counsel had provided one of those hallucinated legal citations you read about in the papers. There were too many errors in the citation for it to be a copy-and-paste or other common human error. We searched for it various ways in two databases and through Google and couldn’t find it.
    But all I said in my response to opposing counsel was that we were unable to find the cited authority. In the law, that is enough to point out the mistake while only accusing the other side of a bit worse than typical human error. An intern found the citation in a third legal database through their school account before the response to opposing counsel went out. Considering how obscure the source turned out to be, I don’t think it would have been embarrassing for us even if the original version went out. Opposing counsel would have had to provide a copy of the authority in response, which would also have revealed how obscure it is.
    In 2024, legal citations can be pretty messed up and a lawyer can usually still find what was intended if they’re willing to run a few searches. It’s not like the old days where the other side can be totally lost if the citation is off by one numeral or something else totally within typical human mistakes, which, yes, even lawyers make.
    Anyway, I guess my point is that I would always begin by asking questions before accusing someone of dishonesty. (Saying we couldn’t find it was an implied question.)

    1. Donn*

      In the old days, as a beginning legal staffer, I went to our local courthouse library to look up the cases in a brief. Online research was in its infancy, and my small firm didn’t have it.

      One citation was a completely different case. I guessed there was a digit missing, say volume 12 instead of volume 121. So I checked some other volume numbers, and fortunately found the right one on my second or third guess.

  30. Brain the Brian*

    I have a somewhat unique situation where it’s extremely obvious to me that some of my coworkers are using AI in some capacity to handle their writing, but it doesn’t bother me in the slightest. These coworkers are non-native English speakers who struggled for years (decades, in some cases) to write in fluent English, and then suddenly in the last year, it’s like someone flipped a switch and their grammar became near-perfect overnight. One by one, several people with whom I work quite closely went through this shift, and the timing vis-a-vis when AI language models were in the news makes it pretty easy to connect the dots. It saves me a ton of time editing their reports (instead, we can focus on substantive / content issues), so I’m really not bothered at all. I hope they don’t stop using it, honestly!

  31. Agile Phalanges*

    I once got accused of plagiarism by a college professor BECAUSE of the version history info. Damned if you do, damned if you don’t.

    I had created templates way back when of MLA and APA formats (I’ve gone to a few different schools over the years), so I could just open them up and have the header, title, bibliography, etc., all formatted, and just had to fill in the appropriate text in each place, as well as write the paper, of course. It wasn’t an official Word Template (.dot), just a regular old Word document that I would Save As and then begin editing.

    That prof looked at the history, saw that the file had been created a couple of years prior to my class with him, and accused me of plagiarism. This was before AI was ubiquitous, and I begged him to run it through a plagiarism checker, because I knew I had written it. He eventually let it go without reporting me to anyone higher up, but I was freaking out about it, and rather pissed at the accusation.

    1. anon24*

      Wow, this is wild (and also something I need to be aware of). The instructor in one of my first ever college classes provided an MLA template for us to use and I’ve used it as the starter for every MLA formatted paper I’ve written in college since, for the same reason as you. I often write my rough draft in a separate document, paste and merge formatting, and then revise from there. I’m always careful to keep the document with the rough draft and I usually have a third document with random notes on how I want to write the paper.

  32. Jam on Toast*

    As someone who works in higher ed, where inappropriate use of generative AI is a HUGE issue right now, consider encouraging your company or department to develop an AI use policy or standard. This not only creates transparency, it also ensures that everyone at the company is using AI ethically and responsibly. If a client challenges you, it helps to be able to say, as a company: this is our policy; we use GenAI in situations X and Y, but not Z (i.e., for ideation, summaries, and grammar, but not for original content creation or with any proprietary or confidential information).
    The policy may also cover how you want people at work to acknowledge if and how AI was used during a document’s creation, e.g., footnotes with the question prompt or (as is becoming the norm in higher ed) in the citations.

  33. Despachito*

    I think that basically what you wrote can serve your purpose very well.

    I’d bet that many of your clients blindly believe the results of the AI detector and are not even aware of the false positives it can give. It could be a kindness to explain it to them, and to let a bit of your indignation show that they believed you were cheating.

    (When I had an angry client, it paid off to calmly walk through the process with him and let him show me exactly what he had a problem with; usually it turned out that he had misunderstood part of the process and had no reason to be angry.)

  34. sp*

    I have a friend, a PhD student in AI research, who has studied these “detectors.” His results showed they’re about as good as, or worse than, a coin flip. Total garbage. I understand the frustration, LW; this stinks. If anyone’s interested in the paper, search for “RAID: A Shared Benchmark for Robust Evaluation of Machine-Generated Text Detectors” (published at ACL 2024).

  35. porridge fan*

    Surely if you can use AI to produce “an article about subject X in the style of periodical Y,” you can also use it to produce “an article about subject X in the style of a rough first draft.”

    1. Katie Impact*

      You technically could, but making small iterative changes to an existing document is one of the things that current generative AI is particularly bad at, so it would likely be pretty obvious upon inspection that the “draft” and the “finished article” are entirely different documents.

  36. Deejay*

    One “detector” even claimed the U.S. Constitution was written by AI.

    “Dear God! A Terminator wrote the Constitution?”

  37. Despachito*

    I also ask myself: what is the point of AI if its results cannot be used at all?

    I mean, it is the final result that matters, and if AI’s output is good enough to be used, why is it a problem? If I, as a client, am getting an excellently written piece of work I am satisfied with, isn’t THAT what should matter, rather than the process by which the author created it?

  38. Last tiger of Tasmania*

    Unfortunately, AI detectors seem to think that any grammatically correct writing is generated by AI, particularly formal and technical writing. What do “writing humanizers” do? Add spelling and grammatical errors, add disfluencies; in other words, make your writing worse.

    Even more unfortunately, this is the approach I’ve begun to take too. I find myself skimming through Amazon reviews looking for ones with obvious grammatical errors so I know they were written by humans. It’s frustrating!

    I am genuinely a fan and advocate of AI, and I use it in my professional writing work: not to write content for me, but to brainstorm ideas, come up with outlines, get synonyms, rephrase things when I’m stuck in a certain structure, etc. But it is hard to watch it compound the general public’s misunderstanding of the value of good writing.

    As writers, maybe the answer is to take ownership of AI as a tool and try to educate others on what it can and can’t do.

  39. Anonymous For Now*

    I would gather all of the documentation suggested by Alison and show it to the client to prove my work wasn’t generated by AI, and then I would fire the client, but only after I was paid.

    Anyone who is so ignorant as to trust these AI “detectors” (or so shifty that they may be using that as an excuse not to pay me) and then accuses me of cheating is not the sort of person with whom I would want to have an ongoing professional relationship.

  40. Ashley*

    In the future, as an addition to what’s recommended here once someone brings it up, I’d also incorporate some version of “check-ins” during your process where possible, to head this off in the first place (like showing an outline, a portion of a first draft, etc.). This may not always be viable, but where it is, combining it with keeping version histories of your files provides pretty strong protection, both against accusations arising in the first place and if they do.

Comments are closed.