my coworker is using AI to do her work (badly)

by Alison Green on January 6, 2026

A reader writes:

I am a senior project manager at a nonprofit. Over the summer, I was working on a series of focus groups and was assigned Lola as an intern. Our office aims to provide meaningful training to our interns, as nearly 80% of them are hired after their internships, so I assigned Lola to write one of the focus group reports. She was present at the focus group itself and was given the audio recording and transcript, plus a report template with guiding questions, in order to complete the assignment.

I'd previously had an intern produce reports with AI, which required tons of rework. So when I first assigned this to Lola, I explained that AI is not a reliable tool for this type of work, as it often provides incorrect answers, fabricates quotations, and can hallucinate during data analysis. Well, Lola turned in the report and it was clear she had used AI: the report included multiple "quotations" that were not things that had been said at all, and the analysis was incorrect. I told my manager we were going to miss the deadline since we would have to redo the work. In my feedback to Lola, I didn't call out the AI use specifically, but I pointed out every quote that was made up and highlighted where the analysis was wrong. This was easy for me to do because I had attended the focus group too, but I pointed out to my boss that if that hadn't been the case, I might not have recognized the problems with her work.

At the end of her internship, Lola ended up being hired as a junior project manager. I hadn't been the only one working with her, so I assumed she had improved. But she was recently assigned to work with me again and was responsible for a series of interviews and reports. She received a detailed protocol for the work, including a template with guiding questions (again).
When she sent me the reports, I could immediately see that she must have uploaded the interview transcripts into AI, entered the guiding questions one by one, and then copied and pasted the answers. The report was confusing and lacked key information. I didn't feel I should mention the use of AI in my feedback, since I was only 90% sure that's what she had done, so once again I just flagged the incorrect info and the places that needed further elaboration. She then sent me a revised version, and I'm again 90% sure that she just used my comments as prompts for AI and then adjusted only the specific parts of the text I flagged, without checking their consistency with information included elsewhere.

I was really pissed off at this point, as we had already missed a deadline because of this, but I just told her she had to be more careful about storytelling, reader-friendliness, and consistency, and declared her task closed. I felt that asking her to revise the text again was a waste of my time; she would have continued working with AI anyway, and I was fed up with providing encouraging feedback to someone whose work shows no real skill beyond copying and pasting.

I need to talk with my manager about Lola at this point, and I believe it should also be reported to higher-ups. This is a serious quality matter, as we could have our project budget cut if an analysis is poor quality and AI use is identified. I truly believe Lola's employment should be reconsidered, or at least that she needs better training. But I'm not her direct manager; she's just been assigned to assist me in my work, and I'm unsure how much standing I have to take this on directly, either with her or with anyone else. How should I deal with this massive and inappropriate use of AI by a colleague?

You can read my answer to this letter at New York Magazine today. Head over there to read it.
Alton Brown's Evil Twin* January 6, 2026 at 12:57 pm Exactly! “Lola, we’ve told you several times not to use AI on this kind of work because of (reasons). But lots of evidence suggests you’ve done just that. Did you use AI? Why else would there be fabricated quotations in this report? This is a serious breach of our policy and you can’t do this in the future. Do you understand?”
Anonym* January 6, 2026 at 1:01 pm Yeah, the only other explanation is that she *deliberately* fabricated information, which is arguably worse. Both scenarios are a serious problem, on top of the actual quality issue, which itself demands action. Good luck, OP.
Green Great Dragon* January 6, 2026 at 2:21 pm Yes! Have a list of questions ready in case she denies using AI, all starting ‘Then why did you…’.
KateM* January 6, 2026 at 2:48 pm Maybe you can start with that? “Why did you write in your report that X said Y when there is nothing like that on the recording?”
Puggles* January 6, 2026 at 1:17 pm But the LW did not specifically bring up using AI. It’s time she said something about the AI itself, not just the errors it caused.
fhqwhgads* January 6, 2026 at 1:24 pm When Lola was an intern, OP told her explicitly not to use AI because it doesn’t work for this sort of thing. Lola appeared to have done so anyway but OP chose to point out the wrongness, rather than the “this is obviously AI”. Now that Lola’s hired, she’s taken the second tack again. So, yes OP did tell Lola not to use AI for this sort of thing, up front, during her internship. It sounds like it just hasn’t been repeated since.
Pastor Petty LaBelle* January 6, 2026 at 1:40 pm She told her it was not reliable. This is not the same as saying don’t do it. Sure, most people would get the hint. But if you have a clear “do not use” rule, don’t make people guess at it. Say it outright.
JD* January 6, 2026 at 3:39 pm Exactly. I have learned that with many of this generation you must be explicit. Don’t tap-dance around it: “The problems with this report are X, Y, and Z. AI has exactly these issues, which is why it is not allowed to be used for these reports. If you are using AI for these reports, stop. The entire report has to be done again.” Then stop redoing her work; have her do it.
Kt* January 6, 2026 at 5:09 pm Even one step more explicit: everything you said plus, “If AI were good at this, we would not need a human in this position.”
General von Klinkerhoffen* January 6, 2026 at 6:49 pm This! If all Lola does is feed LW’s prompts (and corrections) into an LLM, why should they pay her a human’s salary? The employer could get a better work product faster by eliminating the middleman altogether. Don’t make yourself redundant!
Ms. Elaneous* January 6, 2026 at 9:30 pm If AI were good at this, we would not need a human in this position. Excellent!
Freya* January 6, 2026 at 10:10 pm I read a thingy which said that LLM AI is great at “what would an answer to this question look like?” and awful at actually answering the question.
linger* January 6, 2026 at 9:18 pm So far, you’ve been hinting in a passive voice, But the errors continue, so you’ve got no choice: Just tell Lola, “NO A.I., NO, Lola!”
linger* January 7, 2026 at 2:14 am (or if you prefer a different earworm) Her name was Lola, she was an intern, Submitting work without a care With fake quotes sourced from who knows where. So when you hired her, to do the same task, There needed to be closer monitoring of her work. And now you’re shocked to see / such mediocrity It’s evident she’s been using A.I. habitually. It’s a low bar that she’s slid under, Making obviously nonhuman blunders. It’s a low bar, Low bar she’s setting: You need to make clear she can’t simply fake here, At this low bar, she’ll lose her job.
Beth* January 6, 2026 at 1:43 pm Yeah, OP seems to have been really passive on this. Why close the task when the work wasn’t done to satisfaction? Why revise the text for her, instead of either telling her it’s confusing/inaccurate/doesn’t fit the protocol and asking her to redo it, or sitting down with her and walking her through how she’s expected to do it? Why not name that this looks like AI, with all the problems that come with AI? If this was a mid-level colleague, I’d have more sympathy with OP. But Lola was an intern and is now an entry-level employee, and it sounds like OP is the senior employee supervising them. OP needs to be more direct in their feedback and more hands-on in their training. If Lola is really set on using AI instead of doing the work, that will give OP evidence to bring to management that this isn’t working out; if she just doesn’t know how to do the work without AI, it might actually address the problem.
Tio* January 6, 2026 at 1:51 pm Yeah, it sounds like OP went light on Lola the first time it happened. What I would have done is sit down with Lola and go through the report. “You have this quote here, but it’s not real. Why did you put this in there?” “Oh…. it was AI.” “You were told that AI can’t be used for this sort of thing. It also appears you didn’t bother checking over the report after you generated it with a tool you were told was unreliable. Why was that done?” “(Reasons/stammering/excuses)” “Because of this, now this entire report needs to be revised and we’re going to miss the deadline and (other consequences.) This is a serious performance issue. If this happens again, (insert consequences for Lola).”
sal* January 6, 2026 at 3:16 pm I wonder if anyone who has had one of these convos can back up that people are willing to own up to hallucinations having been from AI as quickly as the hypothetical Lola does here? (Lawyers own up pretty quickly in court but that’s because the case the LLM cites literally does not exist and there is no other reasonable way that that mistake gets made in legal practice–it was not a thing until LLMs.) I would think Lola would go with a more generic “I guess I made a mistake,” but maybe I’m just thinking like a lawyer.
daffodil* January 6, 2026 at 3:20 pm The thing about people who ask an LLM to think for them is they are not usually very practiced at thinking on their feet when questioned.
Tio* January 6, 2026 at 3:39 pm Often, yes. Especially since it sounds like OP wasn’t setting a hard line about AI. But that’s part of the way I structured that conversation – the way it is asked, the person either has to act like they wrote it themselves, meaning THEY were the ones writing outright untrue things, or admit that they used a tool they weren’t supposed to. Either they lied about using AI or they lied about what people said in a meeting; AI usually feels like the lesser of two evils to admit to. And that’s specifically why you should ask them WHY things were put in or written a certain way – it leaves little room for someone who didn’t write it to come up with a plausible explanation, and if by chance they DID write it, you can discover just how the heck they got so far off track. There are, occasionally, people who will double down into full gaslighting territory, where they say something bizarre like “I didn’t write that” or “I don’t know how that got in there,” but that would just be grounds for firing to me.
Beth* January 6, 2026 at 4:15 pm Even if they don’t own up to it, having to answer questions like “Why did you include this made-up quote?” or “Why did you cite this nonexistent evidence?” or “Why did you structure this in this non-template-compliant way?” is awkward enough to discourage most people from making the same mistakes again. OP doesn’t need Lola to admit she’s using AI. She just needs Lola to stop submitting subpar, unusable work–whether that means not using AI or editing the AI work enough to bring it up to standard.
FuzzBunny* January 6, 2026 at 4:25 pm College prof here with way too many students who try to use AI for absolutely everything. Some will admit it very quickly, but I’m always amazed at how many will absolutely double down and refuse to acknowledge it. Like, they turned in something with hallucinated references, but rather than admit that the references don’t exist, they will come up with all sorts of convoluted claims about how the sources existed at the time, and obviously got deleted from the internet in the last two days in such a way that nobody can find any record of them.
NotLikeAHam* January 6, 2026 at 10:03 pm If there were hallucinated facts in the document, the two answers to “how did that get there” are either “AI made up the quotes” or “I made up the quotes.” There aren’t really any other realistic scenarios, considering the reports are written off of transcribed conversations. Both answers are bad, but I personally think AI is less unflattering than “I literally do not know how to pull quotes.”
Distracted Librarian* January 6, 2026 at 3:17 pm This. And this conversation should have happened when she was an intern. And OP should have reported their experience with her then so she wouldn’t be hired. But better late than never.
HojiBerry* January 7, 2026 at 12:15 am Because the other person can say “it’s not AI!” whether or not it’s true, but the work being bad and inaccurate is plain to see.
Eldritch Office Worker* January 7, 2026 at 10:05 am And then you follow up. “Well if it’s not AI then I have serious concerns about you inventing quotes”. Naming that it’s identifiable as AI is the best first step. Even if they deny it, they should know they aren’t fooling anyone.
Clorinda* January 6, 2026 at 2:55 pm As far as Lola knows, this is okay. She used AI during her internship and she got hired. She used AI on a project, had to do some revisions, and then it was accepted. That’s two big pieces of positive reinforcement that AI is a useful and effective tool for her, so why should she stop now?
sal* January 6, 2026 at 3:18 pm as far as Lola knows, *this is get-away-with-able*. She in fact was told during her internship that AI does not produce reliable results here, so I wouldn’t say that as far as she knows it’s okay. And she also f***ed up both times with missed deadlines and lots of critical feedback, so you have to be some kind of willfully ignorant to call those work experiences “two big pieces of positive reinforcement” IMO.
An Australian In London* January 7, 2026 at 8:30 am Lola is an intern. This is probably Lola’s first job. What Lola is learning is that anything goes, there are no consequences, all workplace direction is like the pirate code (more like a set of guidelines), and AI is just fine. I question the judgment of everyone involved in this mess. Lola at least has the dubious excuse of never having been properly managed, including here. LW and Lola’s manager do not have that excuse. Just imagine if everyone started using their words in the workplace. 90% of questions here would evaporate.
Just Thinkin' Here* January 6, 2026 at 2:56 pm Seriously. Could have nipped this in the bud 6 months earlier. This person shouldn’t be managing new or young employees.
Dawn, higher ed* January 6, 2026 at 3:20 pm Maybe, but as a college professor, I can say that the brazen commitment to AI use that some students have is so outrageous that it can be hard to realize the scope of the problem. They are maliciously compliant about it and deny AI use when asked. OP is probably dealing with this for the first time and as they noted, Lola had worked with others during her internship, so improvement was a possibility.
Manager in CA* January 6, 2026 at 7:30 pm I think people hesitate to accuse people of using AI without proof, thinking it’s like accusing someone of cheating or lying. But it doesn’t have to be. Just say “I think you’re using AI but it’s not working for this task and you need to stop trying it”. I hate AI work so much – it’s nearly always wrong for the situation.
NothingIsLittle* January 7, 2026 at 10:38 am I think a big part of the problem is that many companies internally push AI use, so how can you criticize someone’s use of AI except by pointing out that the outcome isn’t what you need? “You cannot use AI for this project” either gets someone lying that they didn’t use it or claiming the employer endorses it; it’s often a frustrating conversation that doesn’t actually reduce their use of AI. My university explicitly encourages the use of AI without clearly outlining what that means, so a lot of people have run into trouble discovering what the limits are. It’s obvious to me that the AI output shouldn’t be the final version without a careful review, but it was not obvious to many students. Who then claimed they obviously weren’t using AI, and don’t we know that plagiarism checkers are notoriously bad at accurately identifying AI, etc., etc. It’s necessary to define the acceptable use of AI before blanket-banning it, because people will try to use it anyway, and having a guide for its use is easier to discuss than whether someone used it. At least in the experience of my colleagues who teach, giving students guardrails on AI use and requiring that they document how they used it results in much better end products and overall less reliance on AI.
Too Old to Say Rizz* January 8, 2026 at 6:35 am LW, you need to be a whole lot more engaged when you’re managing people, especially interns. Yes, she is exhibiting poor judgment and her manager and yours need to know. But also: 1. You “explained that AI is not a reliable tool for this type of work” once and literally never mentioned it again. 2. You built a schedule for client work for which you’re responsible that didn’t allow enough time for corrective coaching, leaving a critical path item to a brand new intern the first time, and to a brand new employee with a history of underperformance the second time. Both missed deadlines are on you, sorry. 3. You did not directly raise what’s going on to your management in a clear way. I’m surprised your own management isn’t more concerned about the missed deadlines, maybe you’ve picked up some overly passive management habits from them. No judgment, but this is a great time to correct them if so.
Madame Desmortes* January 6, 2026 at 12:36 pm Whatever Lola does… Lola owns… and right now little Lola wants FIRED. All filk aside, maybe she gets one last warning that is explicit about “no chatbots, no way, no day, not for anything ever, not even Grammarly, just don’t” but that’s it — any more AI use and she’s out.
Aetheling* January 6, 2026 at 2:18 pm Anyone with a tech background who’s reading this will recognize at once that I do NOT have one, but I’ve a question for everyone who’s more computer-savvy than I am: Is it possible to tell, by going through the computer that Lola used, whether she did access AI while she was writing those botched reports? Whether or not she did (and yes, it sounds as if she did!), there’s clearly a problem with her work; she’s either doing it badly herself or she’s outsourcing her own assignments to AI with predictably poor results. But it would be interesting to know just how bad her judgment really is, and whether she’s disobeying clear instructions in order to do less work than she’s being paid for. (In any case, it sounds as if she’s due for a PIP, some strict supervision, and termination if these blunders continue.)
Tulp Bloem* January 6, 2026 at 2:43 pm Easiest way might be for IT to check her browser history and downloaded programs. But that feels unnecessary when there already is strong evidence of poor-quality work.
Madame Desmortes* January 6, 2026 at 3:55 pm For those with command-line savvy, the cross-platform tool Plaso is quite impressive for “what can be found out about my behavior from my computer?” If you try it, do the CSV output and use a spreadsheet program to poke around. I haven’t used Plaso in donkeys’ years, but I believe the CSV output has a column for data source, so you can filter to browser-related rows. https://github.com/log2timeline/plaso
Not That Kind of French Broad* January 6, 2026 at 2:52 pm Yes, it’s fairly easy to check her computer for that kind of thing, especially if she was using a company computer and she’s not good at covering her tracks, which I bet she isn’t. The document history can also have tells in it. Probably the evidence won’t add up to 100% certainty, but it’d be pretty close.
MigraineMonth* January 6, 2026 at 9:36 pm I was thinking of document history. If the entire report was written in less than a minute, and there weren’t any revisions until the feedback when it was entirely rewritten, that’s pretty obviously not written by Lola.
lost in pair a Dice* January 7, 2026 at 11:35 am It’s all logged; everything is. Depending on the IT org, all outgoing traffic could be monitored/tracked, etc. I’ve managed those systems: if someone or a site needed to be blocked/unblocked, you had to go check on it, and the history logs would tell you. Even duckai (DuckDuckGo’s AI bot), which doesn’t store your queries on the server, stores the last few things you did locally in a cookie.
been there* January 6, 2026 at 5:21 pm Our rigorous company firewalls prevent chatbots from being used on corporate devices. That didn’t stop one enterprising colleague from emailing their data to their personal email and using a chatbot on a personal device before sending it all back to their work account. That came with the added bonus that data breaches are well covered in the employee handbook, even if the use of chatbots isn’t mentioned yet.
Kiriana* January 6, 2026 at 7:39 pm Oh BOY. In the only experience I have working somewhere with strong filters on the computers, sending anything to your personal email was a huuuuuuuuge no-no. (We also weren’t allowed devices with us, or pens or paper – we could have smart watches in airplane mode and that was all. It was very locked down. I guess you could have written a really short note on the watch, but at that point most people would be able to remember it anyway.) I’m not sure it was even possible; I think email addresses had to be white-listed.
Australien Meat Puppet* January 6, 2026 at 7:35 pm There’s also the question of whether Lola has permission to send the transcripts etc. off site (and then who she sent them to, etc.). I wonder whether LW is in an organisation where LLMs are favoured for some tasks, or management are keen on them. It might just be bad management, but it may well be another case of “AI is wonderful, LLMs are kind of like AI if you squint, therefore no criticism of LLMs is allowed”.
Farewell bear facts* January 6, 2026 at 12:36 pm I am surprised that anyone would give a junior employee or intern a brief and template, then wait until after they turn in the finished work to give any kind of feedback on it. No, the AI use is not OK. But I’m surprised by the lack of guidance and input being given to Lola, especially when important deadlines are involved. Your template may seem clear to you, but I’d never just send someone junior away to work alone like this without any oversight.
sgpb* January 6, 2026 at 12:38 pm Yes, I agree with this. How can you be upset that you missed a deadline when you didn’t build in any time to review the intern’s work before the deadline?
Guacamole Bob* January 6, 2026 at 12:57 pm With most junior employees, I’d plan for appropriate review time to revise the work, but not to completely re-do it from scratch. And I’d only set aside enough of my own time for review and maybe a final edit, not to have to re-do the work myself. I assume OP knows how this kind of work goes with junior employees, typically, and has reasonable timelines in place if the person is competent.
MigraineMonth* January 6, 2026 at 1:40 pm OP has probably already figured this out, but with brand-new employees (and especially interns), try to check in early enough that you could redo the work entirely if necessary. You have no idea yet if they’re competent, so it’s best to hedge your bets. An early check-in has the additional benefits of keeping the intern/junior employee from wasting time if they’ve misunderstood the assignment *and*, I believe, will increase the odds of catching AI use. AI is usually used to generate a finished-looking product, not the intermediate stages of analysis, outlines and drafts; it can be illuminating to ask for the in-progress work and see if it lines up with the final draft.
I am a translator* January 6, 2026 at 3:55 pm One scenario I can imagine is that reviewing human-produced work takes significantly less time than reviewing AI-generated work, so it’s possible they built in enough time based on past experience, not having planned for all the additional labour the AI would require. I know that when my employer first had me beta-testing AI translation, I didn’t anticipate that every single abbreviation would be translated incorrectly, in a different way every single time, and as a result it took far longer than I thought possible.
MCMonkeyBean* January 7, 2026 at 8:49 am Yes, it could be that the timeline is just a bit condensed in the retelling of this story, but I would expect much more time for review to be built into a deadline–especially for work coming from an intern! The AI use is a huge problem, but I would imagine AI isn’t the only reason you may sometimes find an intern’s work is nearly unusable.
I don't work in this van* January 6, 2026 at 12:42 pm Yeah, I also think feedback like “be more careful about storytelling, reader-friendliness, and consistency” may not be tactical enough, especially for someone who is just moving up from intern-level.
Nowwhat465* January 6, 2026 at 1:02 pm Agreed! Interns and new, relatively young employees need a LOT of feedback when starting out. My DR was hired straight out of college. Now that he’s in his second year, I can give him broader feedback about finding his voice, keeping messages concise yet still detailed, etc. His first year? “Your messages need to include X, Y and Z, otherwise we can’t send them out. Never use ABC for research, always go with DEF. Send me a draft when you get to portion QRS so I can make sure you’re on the right track.”
Ama* January 6, 2026 at 4:00 pm I am an editor with formal training as a writing tutor and I wouldn’t have known what to do if given that feedback. I’m hoping OP was just summarizing and gave Lola actual concrete examples.
March* January 7, 2026 at 6:02 am Not tactical or clear enough. “Be more careful about” anything, no matter how well up on the subject you are, is meaningless unless the parameters and criteria are also made explicit. Speaking of which, OP, does your company have *explicit* (and I mean Captain Obvious level explicit) rules in place when it comes to the use of LLMs and other regurgitation apps? Because if not, those should be a priority – if only because you’ll then have a clear course of action in place to get Lola to start doing her own work instead of feeding it into the algorithm vomitorium.
londonedit* January 7, 2026 at 9:01 am Totally agree…so far it seems like the OP has suggested to Lola that AI is ‘not reliable’, and she’s said ‘be careful about’ various things, but in both cases the outcome was that as far as Lola was concerned, her work was accepted in the end and she didn’t have to think any more about it. It’s possible Lola doesn’t care, but it’s also possible that she now thinks the process is simply that she does the work however she likes, and then the OP/someone else accepts that work and then tidies it up for her. I’m an editor, and I have to say I’m not sure I’d fully understand what was meant if someone told me to ‘be careful about reader friendliness’ etc. It would definitely sound like a gentle suggestion for the future, too, rather than pointing out something I’d done that wasn’t acceptable. It’s possible that in the first instance Lola, someone very very early in their career with not a lot of experience, thought the OP meant ‘yeah we try not to use AI for this because sometimes it’s unreliable’ and decided ‘OK but I know how to use AI so I’m sure it’ll be fine’. And then in the second instance Lola thought the OP meant ‘this is fine, just try to watch out for [vague suggestions] next time’. All of which is to say that the OP needs to be very clear with Lola that they don’t want her to use AI in any capacity for this work. And the OP needs to explain why, rather than ending up going ‘OK, fine, I’ll just sort this out’ – which leaves Lola thinking she’s handed in her work and that’s the end of it.
Sunny D Bop* January 6, 2026 at 12:55 pm This! I work with a ton of project engineers. I’m one of the people who approves things. They have a template they use. Corporate recently updated it, so I had to coach the project engineers on some of the sections. Some of them are close to retirement. I can’t imagine leaving a new person by themselves. Even if she was an intern, interns often don’t do a whole role by themselves. In my experience, we create projects for them to do. We don’t train them fully. I’m also bothered by waiting till the deadline and giving very vague feedback.
Insufficiently Festive Cheap-ass Rolls* January 6, 2026 at 1:15 pm I don’t agree – if Lola has all the materials that others have used to do the task plus previous experience in where she specifically went wrong, then there’s little else LW can do. Lola seems determined to use AI even after having been warned that it can’t do the task and her love of it caused the department to miss a deadline. How do you coach someone out of a refusal to learn from their own experience?
Farewell bear facts* January 6, 2026 at 1:25 pm “then there’s little else LW can do” I disagree. They could discuss the process of how to put the report together – for example this might involve reading through the transcript, tagging or highlighting or coding responses and grouping them into themes and noting what to summarise, then sketching out a list of what points to include under each section. They could ask Lola for her reflections on the focus group findings, what was interesting or surprising, and what she thinks will be important to include. They could ask to see and feedback on her in-progress drafts.
sal* January 6, 2026 at 3:21 pm For me? No. People who go to college (which I, perhaps unwarrantedly, assume Lola did) learn how to do this. It is not your first boss’s job.
MCMonkeyBean* January 7, 2026 at 8:53 am No, people do not go to college and learn how to do this specific report and teaching new employees things is a very normal part of many jobs.
Molly Bloom* January 6, 2026 at 1:25 pm Little else LW can do? That is not the case. The commenters just above gave excellent advice on more specific feedback that LW can give to Lola.
GrooveBat* January 6, 2026 at 1:30 pm There was plenty LW could have done. In the first instance, the department had a deadline to meet, assigned an inexperienced intern to produce a report, and LW didn’t build any review time into the schedule. In the second instance, the department assigned someone with a clear history of using AI to produce sub-par work, and LW didn’t think it was necessary to remind Lola about the AI policy. It’s not a question of coaching; it’s a question of poor supervision and bad planning on LW’s part. Why coach Lola on the output when it was clear it was AI generated? Why not just say, “It’s clear you used AI for this and I want to remind you our policy prohibits that.” either after the first submission or before the second assignment? This is not to let Lola off the hook; she was warned once about using AI and did it anyway. But I feel like, because LW spent so much time providing comments and coaching on the output, rather than just reiterating the policy, it was almost an implicit condoning of the practice.
MCMonkeyBean* January 7, 2026 at 8:56 am I think it’s too far to say it was an implicit condoning of AI use by addressing only the output–in fact I believe there have been other letters where Alison gave the advice to do exactly that! But it is certainly time to directly address the likelihood of AI being a big part of the issue now.
Sunny D Bop* January 6, 2026 at 2:55 pm This is out of touch with what is realistic in the workforce. I work with interns, freshly graduated coworkers, and project managers. This is beyond AI use. LW should have followed up multiple times, or followed up with Lola’s boss, and given very specific feedback. “Storytelling” is very vague. Something like: “You need to stick to these metrics listed here. Remember, they are trying to accomplish XYZ. What you wrote won’t help them. They need actionable steps. For example…” If LW is to the point of thinking Lola should be fired or cannot give her the proper coaching, he needs to have a conversation with his boss so that they can address it with her boss.
Guacamole Bob* January 6, 2026 at 1:06 pm Writing up a summary of the discussion from a meeting you attended, with access to the full transcript and a template, seems like a very normal task for a junior employee to me. It’s more complicated than just taking meeting notes for an internal meeting, but still a pretty clear-cut and reasonable task. I’d expect to have to give feedback and make some edits, but I wouldn’t really expect to review partial work on something like this. If this was supposed to be a glossy client deliverable full of findings from the series of focus group meetings with graphics and such, then I agree about not waiting to see a full draft before doing a review, but if “report” means summarizing the focus group discussion in a structured way, I don’t see that OP was too hands-off here.
Farewell bear facts* January 6, 2026 at 1:26 pm I think it’s wise to review partial work on something like this because you need to see if the person writing is showing a good sense of what to include, how to interpret the findings and so on. Lola doesn’t have the track record to be left to just get on with it.
Nobby Nobbs* January 6, 2026 at 2:09 pm “Read this thing and summarize it” is a school assignment. I’d expect her to maybe need some feedback on what parts are the most important and what parts can be glossed over, since that could be workplace-specific, but the concept of “write a summary” is not over-challenging.
Boba Feta* January 6, 2026 at 9:09 pm “ but the concept of “write a summary” is not over-challenging.” You’d be surprised. (Cries in higher-Ed)
linger* January 6, 2026 at 9:30 pm And the concept of “write a summary” is exactly what is marketed as a task suitable for GenAI. (Albeit with wildly variable actual results.)
Kiriana* January 6, 2026 at 7:55 pm Yes, and I think the claims that OP didn’t build in any review time are likely very far off reality. Long before LLMs, I had a similar situation where one other person and I were on the financial team for a department of a nonprofit (basically we would provide the information required to process financial transactions to the overall payments team at head office), and it was built into our workflow to double and triple check that information was correct. Most of the time this work took was spent double and triple checking! What we found, though, was that there was one specific employee who made *so* many mistakes that it significantly increased the time it took to finish our work. When we alerted the manager to the ongoing problem, she took home a binder of files that employee had worked on, and that evening it took her *five hours* to go through and correct everything that had been done that day. There can be an absolutely huge difference between the amount of time you schedule for fixing mistakes made by 99% of employees and the amount of time you need to fix mistakes made by one employee who’s somehow really determined to screw up as much as possible. I don’t think it would have occurred to me that it would take five hours just to handle that binder of files from scratch, let alone double check that someone had done it correctly – and this wasn’t a complicated task either, literally just data entry.
Glitsy Gus* January 6, 2026 at 1:09 pm Yeah, I don’t really think the AI use is the problem, per se, unless there are privacy issues, which OP doesn’t mention. I think you are closer with the lack of guidance with the interns. AI is clearly not giving her good information and is wasting time, but the actual problem is that she isn’t taking the information she is ending up with and editing it to make sure it’s correct or usable. This would be a problem no matter how she is ending up with the crap reports. Had she been given more guidance and oversight during the intern portion, whoever was seeing the draft of the work and giving her guidance on a more day-to-day level should have stepped in much earlier (it isn’t clear if OP was the one that should have done this, or if there was another intern manager that should be providing this oversight). “Hey, where did you get this quote? It isn’t in your source material. Let’s go over your process.” Then, if in going over it with her Manager discovered that she did get it from ChatGPT or whatever, that is when Manager had the opportunity to point out, “Yeah, this is why we don’t use this for these reports. This information is just wrong, and you can’t turn it in like this.” If she still has bad information – either because she still tried to use AI, or because she is not bothering to fully review the source information, or is randomly citing information without checking relevance – the why it’s wrong doesn’t matter so much as the fact that it is just wrong and unusable. OP, this didn’t happen while she was an intern, so you need to do it now. Don’t soft-pedal your reviews. Don’t close it and call it done. Talk to her about it. If you think she used AI, ask her if she did. You can even start with, “So, did you use AI to write some of this? I ask because a lot of these errors here are ones that have come up before when people tried that.” It doesn’t need to be a major accusation. You are flagging a problem, not accusing her of murder.
If you’re wrong, and Lola is just really the worst data aggregator ever, well, that is a different problem. Either way, you can’t fix a problem until you name it. Also, talk to her manager; this needs to be something they keep an eye on, and hopefully they can help her learn the best way to get good information into these reports.
Ms. Elaneous* January 6, 2026 at 1:19 pm Don’t blame the OP! Yeah time to review, but sounds like it was a train wreck. Accurate quotes are not above intern level.
Molly Bloom* January 6, 2026 at 1:27 pm Glitsy Gus did not blame the OP for anything. They gave advice on what the OP can do differently so they get a better outcome. Being able to hear suggestions without interpreting them as blame is an important skill set.
Guacamole Bob* January 6, 2026 at 1:29 pm I agree with this. If the issue were that Lola picked three quotes about a minor and irrelevant point in the discussion and only one on the main theme, or that her writing was kind of stilted, or the summary at the top was twice as long as it needed to be, or she used names instead of “focus group participant 1” as it said in the template, those would all be very normal, coachable things from a junior employee. Making up quotes and delivering a nonsensical report is not a problem of how OP assigned the work. I do agree that OP should have been more definitive in giving feedback on the intern project, either to Lola or her manager, that the work didn’t pass muster.
Sarah With an H* January 6, 2026 at 2:40 pm When someone has specifically been told not to use AI and then uses AI anyway, it *is* the problem. I suppose the larger issue is that Intern is untrustworthy/has questionable critical thinking, but the behavioral issue is that she is using AI as a shortcut for a task that can’t and shouldn’t be done with AI. Maybe without AI she would look for other shortcuts, but this one is SO accessible and seemingly easy to use that I think it is worth naming it as the main problem
Davey* January 6, 2026 at 4:42 pm Intern was explicitly told to not use AI and did so anyway, more than once. How is that not the problem, when it’s the root cause of all else that has gone wrong?
GrooveBat* January 6, 2026 at 8:50 pm Intern was not “explicitly told” not to use AI. The intern was told that AI was not reliable for this type of report. That’s not “explicit.”
Curious* January 6, 2026 at 9:24 pm If a professional employee — even at the intern level — is told that a tool gives unreliable results, but uses that tool anyway, without checking the reliability of the results, then that is a professional whose judgement can’t be relied upon, even at the intern level.
Anony Mouse* January 6, 2026 at 4:53 pm As someone who wants to put a massive padlock on AI to keep people from getting to it… I also agree that the intern using the AI isn’t the problem. However, a discussion of the AI use *is* critical in the response. One of two things has happened: 1) Lola dumped everything into AI and never reviewed it (or did so briefly for general clarity) before turning it in. 2) Lola dumped everything into AI, fully reviewed it, and did not catch the errors before turning it in. The problem is still the shoddy work, but I’m weirdly more concerned if Lola is taking the second approach. If she’s reviewing the work and not catching/finding errors, then she may lack the critical and analytical skills to do the work at all. Therefore even if you could guarantee she’d never use AI again, that doesn’t mean she’d improve. The first approach feels worse because it indicates a degree of laziness, but there’s a small chance she thought she was working “smarter, not harder.” In that case, I would want to know if she used it because she thought it would save her time, or if she genuinely thought that AI would produce a better/more accurate result. Then again, you would still have the problem of whether or not she’s right that AI *does* actually produce a better/more accurate result than her best efforts.
MigraineMonth* January 6, 2026 at 9:59 pm I think a lot of young people genuinely believe that AI can do their jobs (or hobbies, or schoolwork) better than they can. Indeed, at a superficial level, AI work often looks a lot more polished than the work produced by employees. The issue is that AI frequently doesn’t have the fundamentals right. That image looks airbrushed and photo-realistic, but the model has nine fingers on their left hand. The article summary is grammatically correct, smooth, and uses advanced vocabulary, but it doesn’t accurately summarize the article. The AI-generated code automatically downloads an external library and then sorts your data, but it’s the slowest possible sorting method and it turns out that external library is actually a computer virus. A senior employee can take the output from a junior artist/writer/developer and refine it to a polished level, but AI slop is generally a fully polished turd. A junior artist/writer/developer can learn to produce better work than an AI. A junior who just learns to write AI prompts (or is fired and replaced by an AI agent) will never produce senior-level work.
Bird names* January 7, 2026 at 1:59 am All of this, especially your last paragraph. I suspect many people who rely heavily on it are either overloaded with work, not confident in their work output or both. If you don’t develop the skills to assess your work, it’s understandable that the AI output might seem appealing since it has the right look. Nevermind that the content is bogus.
Anony Mouse* January 7, 2026 at 9:22 am I initially had a longer post which I cut short, but I did actually wonder if she genuinely thought AI could do her job better so I’m glad to know I’m not the only one wondering that.
fhqwhgads* January 6, 2026 at 1:53 pm I’m surprised they offered her the job without asking for feedback from everyone she worked with during the internship…
NothingIsLittle* January 7, 2026 at 10:46 am As a Jr Admin I was told to generate meeting minutes from transcripts and recordings, and it was by far one of the worst tasks I’ve ever been responsible for. I didn’t have the necessary context to understand a lot of what they were discussing, so it was a complete crapshoot whether I was actually catching the important bits or not. I’m glad I didn’t have today’s AI when I was assigned to take minutes, because I almost certainly would have plugged the transcript in! I was totally overwhelmed and lost with no support (org problems) and was directed to unuseful “guides” when I had questions. All that being said, I would have checked that the output was accurate and cannot fathom why Lola didn’t, other than that she knows it won’t actually result in consequences (ie OP has explained what’s wrong, but Lola was still hired after her internship and I doubt she’s impacted by the deadline).
Pastor Petty LaBelle* January 6, 2026 at 12:40 pm Why are you not flat-out telling her not to use AI? Saying AI is not a good idea to use is not the same as saying do not use it. Without a specific prohibition, anyone who wants to use it will just think you don’t like it because you don’t know how to use it. Even just pointing out the errors doesn’t convey the same message. You are hoping she gets the hint. She won’t; as you said, she will just keep using AI. Because she doesn’t know you don’t want her to use AI at all. If you are managing her on projects, it is your job to manage her – as in, make clear your expectations. Including saying you are not to use AI. Then if she still does, it’s a conversation, not just sending back a draft for rewrites. You ask her: did you use AI? Then you point out all the errors that you are pretty sure are AI (this isn’t a court of law, you don’t have to know beyond a reasonable doubt) and ask her to explain. The thing with using AI is people cannot explain their work. Because they didn’t do the work. Once it is clear she can’t explain, then you can say this is why we don’t use AI. Now redo it without AI. And you need to make it clear that if she continues to use AI after you told her explicitly not to use it, you will be reporting it to her supervisor. Which is part of your job in managing someone on projects – to let their supervisor know where someone is failing to follow directives/meet expectations. Because the supervisor can’t address it either if they don’t know about it.
Verunica* January 6, 2026 at 1:09 pm Yes but by not explicitly addressing it again, OP is signaling that continuing to use AI comes with little consequence for the offender.
fhqwhgads* January 6, 2026 at 1:55 pm Well it’s interesting because we’ve seen SO MANY letters that were about “I think this employee is probably using AI, the work is crap” and the response was “focus on the work being crap, not the AI, the bad work is the point”. And this OP did that. But Lola apparently didn’t make the connection that “this tool you’re using as a shortcut is not working well for you, so you should stop”. So someone needs to actually tell her to stop.
catkins* January 6, 2026 at 5:16 pm Yeah, it’s getting to a point where I don’t think this is the best approach anymore because people aren’t connecting the dots. But then there are problems with accusing people of using AI too, like they can just deny it if there’s no hard proof.
Pastor Petty LaBelle* January 6, 2026 at 1:20 pm She was told it’s not reliable. That is not the same as do not use. Someone who wants to use it will dismiss any concerns about reliability. Insert the “But It Might Work for Us” meme.
sal* January 6, 2026 at 3:24 pm Maybe I come from a “Guess” culture more than I thought I did, but if my boss tells me “this tool is unreliable,” and does not ever follow up with an affirmative directive to use it anyway, I take that as “do not use,” because when does my boss want work produced by means of an unreliable tool? I would assume never.
Kiriana* January 6, 2026 at 8:06 pm I’ve seen occasional discussions over the last year or two that younger generations in English-speaking Western countries (most people in the discussions were American but also applies to UK, Canada, Australia, NZ, etc) do seem to be much more Tell than we older generations expect – for example, I’ve heard people say that they tell their kids “I wouldn’t clean the kitchen using those green cloths, they don’t absorb well” and then get surprised when their kid cleans the kitchen using the green cloths, because they think they were actually telling their kid not to and the kid thinks they were being given a suggestion. (I can’t recall the specific example that was used, so I made one up that fit the general vibe of the conversation they were relaying.)
lanfy* January 8, 2026 at 5:13 am My concern is that that’s not so much Guess vs Tell, as straight-up not taking on board new information. Sure, you didn’t explicitly tell the person not to use the green cloths (or the AI), but you still gave them information that should have got them to that conclusion themselves. Why didn’t it?
Davey* January 6, 2026 at 4:48 pm LW writes, “I’m not her direct manager; she’s just been assigned to assist me in my work, and I’m unsure how much standing I have to take this on directly…” That’s LW’s dilemma, i.e. the standing to say something, and to whom. Frankly, LW should let Intern’s direct manager know what’s going on, and the direct manager should take it from there.
Ms. Elaneous* January 8, 2026 at 12:40 am Or maybe OP can just tell Lola’s manager not to assign Lola to her ever again… Please send me someone else! It’s not unheard of.
Three Flowers* January 6, 2026 at 12:41 pm Occasional college professor here. I get the hesitation to call out the AI use directly because it’s really hard to prove. But I think you can take an approach of saying to Lola (and your boss), “either there are significant gaps in your/her understanding of how to do this work and what the important takeaways from these meetings are, or you are/she is using AI with exactly the consequences I warned you would happen. Either way, this work doesn’t meet standards and needs to improve,” plus whatever specifics apply. Also, if she’s working from AI-driven transcripts of the meetings, you may have a garbage-in-garbage-out problem, because those things make stuff up regularly.
Another One* January 6, 2026 at 1:20 pm I wonder if maybe the feedback needs to be/include: I’m seeing errors that look like AI, or like you are fabricating data for some reason. (The reason doesn’t matter.) And provide other explicit feedback, including: this is what you did well, but this is what is problematic. And plan, the next time she’s assigned to handle something for you, that the entire timeline has to shift. If she has a manager/senior who should be reviewing her work before it gets to you, have a talk with Manager and Lola that the timeline will be: you need it Day 5, Lola will give it to Manager on Day 1, you will have it with revisions by end of Day 2, maybe Lola gets a chance at more revisions on Day 3, but if by the end of Day 3 it isn’t client-ready, it gets passed off to someone else to complete. Because it feels like AI is a red herring for the real issue, which is that Lola isn’t/can’t do the job she was hired for and it’s impacting you and the business. The use of AI is a symptom of the problem. But the problem needs to be acknowledged now or it’ll only get worse.
Lily Rowan* January 6, 2026 at 1:28 pm Your first point is right on — if she didn’t use AI, she made up the quotes herself and that is also a very serious problem!
MigraineMonth* January 6, 2026 at 1:53 pm Yeah, this is an interesting one because AI use both is and isn’t the crux of the problem. On the one hand, the work would be problematic and below standards whether or not Lola used AI to create it. On the other, if Lola used AI after specifically being told not to, that’s a huge issue, and if she’s just feeding the criticisms into the AI instead of doing any revisions on the report herself, providing further critical feedback is a complete waste of time. Same with giving her additional training on data analysis, writing, etc. I think you’re right that focusing on the fabricated quotes will be illuminating. She’s either lying about using AI or she’s lying about what people said, and neither is a good sign in an employee.
Insufficiently Festive Cheap-ass Rolls* January 6, 2026 at 1:22 pm I like this script, because it puts the onus on Lola to understand the job and the meeting takeaways instead of just focusing on AI.
jsv* January 6, 2026 at 2:18 pm I disagree with this approach. Avoiding calling out obvious AI use because you don’t know for sure just teaches them that no one will be able to tell if they use AI.
A Significant Tree* January 6, 2026 at 2:45 pm With the type of feedback the junior colleague has gotten, and the end result (task marked completed), they have definitely learned this lesson: “If I use AI to write and then fix my reports per feedback, I’ll get it done.” If the junior colleague keeps relying on AI generated junk and it keeps being accepted, however grudgingly, they aren’t building the necessary skills to understand what makes a GOOD work product and they will continue to be terrible at the job. Junior colleague needs both types of feedback: Don’t use AI for this, it generates too much garbage. Do figure out the point of the report and what it should/shouldn’t contain.
yeah, also a librarian* January 6, 2026 at 5:28 pm One alternative is: you’re turning in work that isn’t usable, and the feedback I’ve given you doesn’t seem to be helping; walk me through your thinking in this section and we can figure out where the problems are happening. I see writing tutors use this with some success.
clever nick name TBD* January 6, 2026 at 11:14 pm I’m seeing AI transcripts from work meetings but I’ve not quality checked them yet. OP said there were recordings too, but like you say if Lola was using AI to process an AI transcript there could be multiple levels of hallucination!
Workerbee* January 6, 2026 at 12:42 pm “How should I deal with this massive and inappropriate use of AI from a colleague?” Which is it, OP? Throughout the entire letter they are only 90% sure, but apparently repeated examples aren’t enough for OP to say anything to anyone directly or explicitly. Then at the end they’re coming in all self-righteous. OP had better prepare themselves to answer why they waited so long to call out such massive and inappropriate use, and why they failed to provide proper instruction in their feedback.
Rusty Shackelford* January 6, 2026 at 12:44 pm If you’re concerned that you can’t prove she’s using AI, phrase it this way: “These are the errors I’m finding. They’re the type of errors that are common hallmarks of AI. But even if AI isn’t being used, these are significant errors that indicate an inability to read or comprehend the data.”
whistle* January 6, 2026 at 2:46 pm Yes – this is how I address it with students. I had one break down in tears last year because they felt so bad that their writing sounded like AI. They turned in a beautifully written essay that said absolutely nothing of substance. They are capable of writing that well, but I had never seen such empty content. I still don’t know if they used AI or just had an off day, and I graded the essay per the rubric, which did not result in high marks.
sal* January 6, 2026 at 3:28 pm right before LLMs really got going, my HS-English-teacher spouse had a student who was deeply in love with the sound of their own voice and was parented that way as well, and I remember my spouse coming home with an essay and asking me to read it for a second opinion because they had an impression but needed a gut-check. The impression was “this is, notwithstanding technically fluent writing, word salad of the highest order that says effectively nothing” and I provided the gut-check. I believe there were several talks with the parent/s; poor spouse.
Distracted Librarian* January 6, 2026 at 3:21 pm Or point out the fabricated data and ask how it happened. The employee has only 2 real options at that point: admit the AI use or admit to making the errors herself. Either way, there’s a problem she must correct.
Nightengale* January 6, 2026 at 2:19 pm OK I did not need the mental image of a state trying to use AI to automate and evaluate CPS referrals
Madame Desmortes* January 6, 2026 at 4:24 pm Not generative AI yet as far as I know (and I may be wrong), but “predictive analytics” in CPS? Absolutely. Went about as well as you’d expect, where it’s been tried. https://arstechnica.com/tech-policy/2023/01/doj-probes-ai-tool-thats-allegedly-biased-against-families-with-disabilities/ https://web.archive.org/web/20180805043326/http://www.chicagotribune.com/news/watchdog/ct-dcfs-eckerd-met-20171206-story,amp.html
MigraineMonth* January 6, 2026 at 1:57 pm Elon promises us that Tesla’s Optimus robots will raise and educate our children (in addition to providing free medical care, ending poverty and walking the dog). Considering that Optimus robots run Grok AI, I just hate to think what it would be *teaching* our children…
Elizabeth West* January 6, 2026 at 3:14 pm Seeing as Grok has allegedly been generating CS@M, I think it’s time to unplug it.
Kiriana* January 6, 2026 at 8:12 pm Is that the one that costs $20k and currently can open doors but if you need something more complicated some guy can remote in from San Francisco and control the robot for you? Speaking of teaching children, they announced last month that El Salvador has partnered with xAI to introduce Grok-based AI tutoring for every student in the country, across 5000 schools.
MigraineMonth* January 6, 2026 at 10:07 pm Oh good, I see no possible issues with AI teaching children, especially Grok, which Musk keeps tweaking to be more in line with his white supremacist views. /sarcasm
Cmdrshprd* January 6, 2026 at 5:02 pm The AI can watch the kids; heck, my cat will watch my kid for me. The hard part is getting them to intervene when the kid does something they shouldn’t that will likely injure them. The cat won’t do anything other than sit on the kid.
MigraineMonth* January 6, 2026 at 10:09 pm Depending on the respective sizes of the child and the cat, that could be effective for preventing the child from doing anything dangerous. I once had an absolute unit of a cat that could effectively babysit a child up to the age of 3.
snoopythedog* January 6, 2026 at 12:48 pm As someone who works in a similar field – conducting focus groups, creating reports, and working as a company to incorporate AI into our processes – I get it. This is a huge problem that *must* be addressed head on. 1. As an intern she was told specifically not to use AI and then obviously did. The first failing here is that you didn’t address your suspicions head on with her then. Calling her out – giving examples of how AI is messing up the work AND reminding her she specifically went against what you told her to do – would have been beneficial to her learning and growth. Obviously you can’t go back in time, but keep it in mind for the future. It’s ok to say (kindly), “Here’s all of the things wrong with what you just produced, AND I suspect you used AI to do it because of x, y, z. Do not use AI, and fix these mistakes.” And escalate the situation to the higher-ups. 2. Now – address it head on. Escalate it. Lay out the problems with her work and why you suspect AI has been used. If it goes against company policy, bring that up. Even if it doesn’t go against policy, she’s producing shitty work, jeopardizing timelines, budgets, and clients’ trust in your company – those are all bad things that should be flagged early in her career. If she’s not yours to manage, that’s fine; lay it out so the person managing her can deal with it. Now you know any work she does for you requires extra oversight and careful fact checking. Track her impacts on your projects (timelines and budgets) and clearly document and communicate it to your superiors. We’ve been experimenting with AI for qualitative work and it sucks. You have to constantly fact check it. So far, it works best as a basic summary tool to guide your initial analysis and as a fact-finding tool to find specifics that you know exist in the data. It hallucinates, gets sentiment wrong, and summarizes large volumes of information inaccurately.
So far, it hasn’t saved us time, and it has wrecked our budgets by getting things very wrong that we then have to go back and manually fix. Additionally, with qualitative information, if she’s feeding the data into an open source AI, you could be violating confidentiality and data security requirements and promises.
Mockingjay* January 6, 2026 at 1:30 pm Agree. As technical writers, my team and I have discussed AI tools and usage in depth. We have only a few tools we are permitted to use (we work with proprietary and restricted info). We experimented with these tools and our conclusion is that it takes an experienced person with a lot of project background and nuances to get anything useful out of it, and even then it needs a lot of rework. Someone new to their career will produce the same bad product as Lola. (So get an AI usage policy in place – for everyone, not just Lola.) What may be the bigger/real issue: lack of training. Feedback on a draft is not training. Filling in a template is rote, not training. OP, have you briefed Lola on the project background? She needs that for context when interviewing and writing up the results. Sit with Lola during an interview and ask her afterwards what key points or concepts she picked up, and identify anything missed. Train her in the entire process. It takes time to develop an employee, which can be frustrating in the short term, but worth it in the long run.
NothingIsLittle* January 7, 2026 at 11:20 am Given my professor colleague’s experiences, it would be better to just let Lola make the first draft with AI, but make her document that she used it and edit the outcome for accuracy before submitting it (and build time in after the deadline given to her to check it to confirm she’s done so). You’re absolutely right that it almost always takes more time, but if it eliminates the mental roadblock of starting for some people, it can still be worthwhile. Again, you absolutely must check for accuracy, but it’s often easier to enforce guidelines on how to use AI than a ban on using it at all. I have ethical reasons for avoiding AI myself, but with how many companies are outright encouraging its use, it’s becoming difficult to police without explicit policies for different types of processes. And given that Lola just plugged OP’s corrections back into AI without checking that the outcome was internally consistent… I think she needs explicit rules for when and how it can be used, as opposed to just when it can’t.
I should really pick a name* January 6, 2026 at 12:50 pm I’m curious what her explanation for the incorrect quotes was.
Terrie* January 6, 2026 at 2:53 pm Right? AI or not, my immediate question would be “Where did these quotes come from?”
Distracted Librarian* January 6, 2026 at 3:23 pm Exactly. Because there are really only 2 options: AI or she made them up.
sal* January 6, 2026 at 3:33 pm I would imagine a comparatively-unembellished “Oh, I guess I made a mistake/wrote it down wrong”, no?
Mx Burnout* January 6, 2026 at 12:54 pm I am dying to know what else is going on at this workplace such that this person seems so deeply afraid of naming AI as the problem, or even a potential problem. It’s okay to name it! It’s not a court of law!
DramaQ* January 6, 2026 at 1:12 pm AI can hear you! You don’t want to anger our future robot overlords do you? lol
Tea Monk* January 6, 2026 at 3:59 pm Yup. Remember if you don’t create machine Jesus, machine Jesus will remember and be mad at you ( apparently a real belief)
Another One* January 6, 2026 at 1:37 pm One of the things that came up several times during update season was how much communication- or rather people’s difficulty communicating- creates issues/problems in the workplace.
I should really pick a name* January 6, 2026 at 1:41 pm Sometimes the suggestion is to address the quality of the work, not the method used to create it. It sounds like the LW may have taken that as a hard and fast rule.
Arcadia_Commons* January 6, 2026 at 1:57 pm The weird thing is the LW did name AI as a problem BEFORE THE INTERN STARTED THE ASSIGNMENT, but didn’t bring up AI again when the intern turned in the assignment. The even weirder thing is that the intern apparently used AI for assignments (at least) twice after the OP warned her about AI. Something is weird about that internship. Either the intern is not a good fit for the role or the intern is not being provided with proper oversight. But, based on this snippet, it doesn’t seem like this intern is learning much from this work experience.
Paulina* January 6, 2026 at 3:19 pm Both really, but fundamentally a lack of proper oversight (since proper oversight would help them determine she isn’t a good fit). I notice that LW is just one of multiple people who Lola was doing work for, and that LW wasn’t consulted on hiring her post-internship. This to me suggests that Lola has an official manager who doesn’t actually manage the projects she’s on, and then there are multiple others that she’s supposedly doing work for, and this structure has produced a gap in responsibility. How they log things may have something to do with this gap as well, if the tracking process limits how they record issues. Meanwhile LW did extra work to get the project closed, and without actual communication to Lola’s official manager this may look like Lola’s work is just fine. There doesn’t seem to be any set way for LW to report the problems with Lola’s work.
Parakeet* January 6, 2026 at 2:45 pm There have been a few letters where a supervisor isn’t sure whether bad or mediocre work is AI-generated, and Alison has suggested focusing on the problems with the work product rather than the AI use per se. It’s not all that surprising to me that a supervisor is focusing on outputs (and given those past letters and responses, I’m surprised that it’s surprising to so many people!). A few commenters, like Rusty Shackelford and Three Flowers, have given good example scripts for threading this needle.
Bluefox* January 7, 2026 at 8:54 am C-suite tripping over themselves in singing the praises of AI and how everyone should learn to use it? I know that’s what my company heads keep doing…
Old Lady* January 6, 2026 at 1:40 pm We are at least in danger of having a population of new workers that never learned to think for themselves.
Big Hair No Heart* January 6, 2026 at 3:21 pm I mean, OP is in no position to fire the AI user (since they aren’t this person’s manager), but yeah, I’m really surprised the intern was brought back and is still there.
Distracted Librarian* January 6, 2026 at 3:24 pm I suspect that happened in part because OP didn’t report her AI use and poor performance to the appropriate person.
sal* January 6, 2026 at 3:35 pm I’m inclined to not judge OP too harshly for that, because I feel like the first response would be, “Do you have proof?” And OP does not. And maybe doesn’t have the juice at their workplace to get it. I could see deciding to forego this conversation being the right course, workplace-social-capital-wise.
Red Shirts Alert* January 6, 2026 at 6:39 pm Well, I did report two different newbies for critical errors & poor work product despite substantial training by me of that specific part of their work. In both cases the manager chose to ignore the issues despite concrete proof & granted permanent status. I was frustrated but had no power, so eventually I refused to cover or fix bad work product. Poor coworkers + bad managers are a horrible combo.
Cranky* January 6, 2026 at 1:03 pm The feedback is much too soft. Regardless of whether AI is being used, she is engaging in data fabrication. That is a huge deal. It needs a swift response making clear that it is utterly irresponsible and unacceptable at any legitimate organization. It doesn’t matter if the fabrication is because she thinks ChatGPT can do her job for her or if she is doing it herself. She is turning in reports with fabricated data and claiming it is her work.
Ms. Elaneous* January 6, 2026 at 1:30 pm Cranky ++++ Regardless of whether AI is being used, she is engaging in data fabrication. That is a huge deal.
MassMatt* January 6, 2026 at 1:50 pm This. I get that LW is not Lola’s direct manager, but the feedback (“storytelling”? Really?) was entirely too mild. Whether it’s AI or not, Lola is doing terrible work, and not taking the (admittedly mild) feedback to improve. Why was she hired? Is the hiring manager an AI bot? I know talented and hard-working people coming out of school who are struggling to find decent jobs in their fields, and meanwhile it seems no one dares hurt Lola’s feelings. Major side-eye also that an intern/new employee was given evidently critical assignments without adequate feedback or time for review before hard deadlines. And LW says their company hires 80% of these interns?! Processes seem very out of whack.
Properlike* January 6, 2026 at 3:04 pm Yes. I know plenty of young people who don’t fabricate data and who can write without AI, or who use AI as a tool for very limited, specific tasks and know to double-check everything it gives them. Someone who would be excellent at this job missed out because no one wants to hurt Lola’s feelings. Unless Lola is the CEO’s daughter, I’ll go back to something Alison reminds us about working with interns: it is a kindness to establish very clear expectations and boundaries when people are new. Also, if you don’t, you become “the manager who’s not managing” and that never goes well. I’m afraid it’s on OP to start naming things, because it looks like Lola is not suited for this job at all, AI or not.
Resume Please* January 6, 2026 at 1:06 pm AI aside, that seemed like a big assignment for an intern, and without adequate review time if it went sideways (which it sometimes does with people new to the workforce who aren’t even full employees). Beyond that, honesty is needed: “These (examples) look similar to AI hallucinations and other hallmarks of AI usage, which we should never use for these assignments.”
Sneaky Squirrel* January 6, 2026 at 1:07 pm There are times where it’s reasonable to not name AI use as the problem if you don’t have the proof. In student writing, accusing someone of plagiarizing from AI would be a serious offense with serious repercussions. Here, you don’t have to have proof about Lola’s use of AI to talk to her about it. You can let Lola know that you suspect her work was produced by AI, explain to her the errors that led you to believe this, let her know that AI use is unacceptable, and tell her that her quality of work needs to improve.
Velawciraptor* January 6, 2026 at 1:09 pm The most common trope in Alison’s advice is to communicate directly. And you’ve failed to do that here. You have to tell Lola directly that it’s obvious that she’s either using AI or has no idea how to do the work and is choosing to make things up. You have to tell that to your manager and hers as well. Stop tiptoeing around the issue–this is not a situation that requires proof beyond a reasonable doubt. You’re using common sense to make reasonable inferences from the evidence in front of you–now address the problem directly.
Ms. Elaneous* January 6, 2026 at 1:12 pm Lola the AI sneak – I don’t think there are words strong enough for reprimanding Lola. Lola, if you used AI, this is in violation of company policy and you’re fired. If this is your own work, you should be fired for incompetence. I would print out her work and take a red sharpie to it (document, document, document), marking “misquoted,” “does not address the question” – whatever she’s doing, it’s incompetent. Some people can improve, but only after they understand that what they’re doing is wrong.
Thane of Caldor* January 6, 2026 at 1:34 pm “I don’t think there are words strong enough for reprimanding Lola.” Really? That sounds like an overreaction. OP needs to speak up first.
AnonAgain* January 6, 2026 at 1:13 pm This just makes me angry for the other kids who would have worked so hard and learned so much at that internship, and the other potential employees who would have done so well using their actual brains and skills in this job. And they don’t get the chance because Lola is… Beautiful? Someone important’s daughter? Insert other unfair reason here? Why is everyone afraid to introduce this young person to the concept of consequences?
Student* January 6, 2026 at 1:35 pm I don’t think the OP is afraid to make the intern face consequences. I think the OP recognizes they lack jurisdiction to make the intern face consequences. The problem with a lot of jobs nowadays (in my part of the working world) is that your manager with hiring/firing power is not actually managing, assigning, or otherwise supervising work tasks. You get matrix-managed by a co-worker who has no actual authority over your work hours, pay, project load, workflow, or any disciplinary actions. The matrix-manager also doesn’t usually have resources or clout to provide training or obtain other basic resources that should normally be part of a manager’s job. Maybe they have a budget over a specific project, if you are lucky. So, the manager that provides you with a performance evaluation, determines whether you get promoted vs fired, validates your time card, makes lay-off decisions has no idea what you do on a day-to-day, week-to-week, or in extreme cases year-to-year basis. They have no need to know that info about your tasks to do their own job, so they don’t bother – and you have little ability to push back. Your only real recourse in these situations is to attempt to befriend or otherwise ingratiate yourself with the hiring/firing manager so as to secure yourself a spot in the lay-off queue that is more favorable than people the hiring manager has never met. It’s part of minimizing management costs by basically trying to do without any real managers – just some bonus pay to a regular individual contributor who is willing to shoulder the blame if there is actual time card fraud, and redirect blame if there’s anything else going on.
MassMatt* January 6, 2026 at 1:55 pm Everything you describe is a textbook example of a dysfunctional organization that will eventually be filled with Lolas and overworked supervisors redoing their terrible work, because the supervisors are held responsible and will be thrown under the bus by middle/upper managers who seek authority with no accountability.
Distracted Librarian* January 6, 2026 at 3:28 pm You’ve just described the structure of my role – the only manager in a unit with almost 60 people. However, I seek out feedback and talk to lead workers and colleagues to try to get accurate information on employee performance. The other side of that coin is that people need to be willing to speak up when someone is not doing their job. This is where OP is messing up. They need to be communicating these experiences to Lola’s manager as well as their own manager – in detail, with examples.
clever nick name TBD* January 6, 2026 at 11:22 pm Are you giving performance reviews to 60 people? That sounds like a full time job all by itself!
Student* January 7, 2026 at 10:33 am Good for you, genuinely. However, I can guarantee you that the quieter people or more junior people on your 60-person team still need management stuff from you that they have no hope of getting. Heck, probably some “lead workers” and “colleagues” that you pay more attention to feel pretty short-changed too. It’s not a matter of whether you are trying hard enough or good enough at your job. It’s just math. Your company didn’t set you up to functionally manage and there’s nothing you can individually do to fix that, right? There are about 260 work days in a year; spread across 60 people, that’s about 34 hours – just over 4 work days – to manage each individual person on your team annually. And I’m sure your company doesn’t actually allow you that 4 work days of time per person annually, because you have paperwork to do, management meetings to attend, and at least one squeaky wheel on that team who takes up way more than their “fair share” of your time. So, in your situation, you have four work days or less to find and fix a Lola, or go through firing processes for her. Easier to punt her to a colleague you aren’t close to (…like OP…) and hope the issue doesn’t reappear for the rest of the year.
Parakeet* January 6, 2026 at 2:53 pm Bad hires, including for jobs where other applicants might have been good hires, aren’t a new thing and don’t require the bad hire to be beautiful or a nepo baby. There’s no indication that “everyone [is] afraid to introduce this young person to the concept of consequences.” The LW is looking for advice on how to introduce her to consequences while not actually being her manager. We don’t know anything about her relationship with her actual manager or others who have worked with her.
Emmie* January 6, 2026 at 1:21 pm This is deeply concerning, and you were right to raise it. There are two areas I would look into. First, review your company’s AI policy and determine what information may have been entered into an AI tool. If proprietary or confidential material was shared, that could expose the company to risk. You would have a higher obligation to share this with your manager in this case. Second, consider how you address this with her directly. I would be candid and say something along the lines of: “I’m concerned enough about this work that I’m questioning whether it was completed independently or with the assistance of AI. If AI was used, did you review and validate the output?” Starting the conversation this way may prompt her to disclose any AI usage.
GrooveBat* January 6, 2026 at 1:33 pm This was totally running through my head as I was reading the letter.
H.Regalis* January 6, 2026 at 1:46 pm Thank you for validating me, GrooveBat ٩(^◡^)۶ I thought I was going to be the only one.
linger* January 8, 2026 at 3:45 am Already wrote a snippet of Kinks and a full verse+chorus of Manilow filk before getting this far down the comments. With the same starting line for the latter, because obviously. Feeling slightly foolish about that now; you beat me to it, and not by a small margin. But definitely proves you weren’t the only one thinking it :-) (For completeness: Someone upthread has also referenced “Whatever Lola Wants” from “Damn Yankees”.)
Mel* January 6, 2026 at 1:40 pm with lots of meeting notes to share and a deadline up to there she would miss details in each revision but when the AI went too far LW sailed across the bar…
MsM* January 6, 2026 at 2:08 pm I met her at a desk at my nonprofit Where AI is not allowed, but she still did it Oh, oh, Lola No, no, no, no, no, Lola
Thin Mints didn't make me thin* January 6, 2026 at 3:13 pm Girls will use bots and bots will use girls It’s a mixed-up, muddled-up, ****ed-up world If you’re a Lola
clever nick name TBD* January 6, 2026 at 11:24 pm I started to mutter “doot-de-doot-de-doot-doot-doot” then I realized that’s a different song…
Lacey* January 6, 2026 at 1:23 pm I have this problem with coworkers all the time. They use AI, they don’t review it to see if it makes sense, and then I do a lot of extra work to try and beat it into something resembling a usable project brief. I can, of course, kick it back to them. But I will never get anything better than what I got to start with. Just more confusion and an increasing sense of panic from them. Unfortunately, they weren’t much better before AI. But they did at least know why they made their requests, even if the requests themselves were absurd.
Harriet J.* January 6, 2026 at 1:31 pm As a teacher, I don’t bother going through the hoops of proving AI misuse – it is a waste of time. They are graded on the quality of their work and if it is bad, so is their grade. Sometimes I write “this reads like AI slop” as a comment. I have said to students as I return crappy papers – “I hope this is AI because I hate to think that you write this poorly.”
Ms. Elaneous* January 6, 2026 at 9:40 pm Harriet I have said to students as I return crappy papers – “I hope this is AI because I hate to think that you write this poorly.” Yes! This!
CubeFarmer* January 6, 2026 at 1:34 pm “Lola, you fabricated quotes and the data analysis is flawed. Take me through your report preparation process here.” Then make her show you her work. You are 90 percent sure she used AI, so it’s likely that she cannot take you through her process because she doesn’t have one. Meanwhile, you absolutely need to loop in her manager because, AI or not, her work quality is causing you to miss deadlines and is endangering your program budget.
knitted feet* January 6, 2026 at 1:52 pm Just raise it! You’re seeing a problem, you’re seething about the problem and yet you’re still tiptoeing around the problem. If you don’t want to accuse her outright, you can say you’re almost certain it’s AI and that if it’s not, Lola is still showing a really concerning lack of understanding of what the task requires. But say SOMETHING.
Ellis Bell* January 6, 2026 at 1:54 pm I don’t understand the coyness in asking whether AI was used; it’s not like you’re convicting her of murder without proof. But even if you don’t want to assume anything about her process, where’s the harm in asking how it all came together? Use phrases like “Walk me through your process step by step”, “How did these incorrect quotations get in here?”, “Did you take notes?” “How did you go about the analysis? How did you check your facts?” If you’re getting a lot of blank looks and vague-waffle answers then the coast is definitely clear to go ahead and call it what it is. If she’s already had every chance to explain how she could have dropped the ball, point out that these errors are more commonly made by AI than by people and you need to be clear that she absolutely must not use it.
Immaterial* January 6, 2026 at 2:22 pm The benefit here is that you can also use this to teach her how she should be approaching the project.
Lacey* January 6, 2026 at 3:03 pm There has been a lot of advice (not necessarily here, but in general) telling people not to focus on the AI bit, but to instead focus on the problem the AI has caused. Because if someone fabricated quotes, that’s a problem whether or not they used AI to do it. And I understand that viewpoint, except that people are EXTREMELY unlikely to do that if they’re not using AI. So the problem really is the AI use, but everyone’s afraid to look like a regressive luddite who’s afraid of the inevitable future.
Engineery* January 7, 2026 at 12:19 pm To add to your point: I feel like very few LLM users understand what’s actually happening under the hood. When you ask ChatGPT “Summarize the attached document” what you’re really asking is “Provide a text string that a human cannot easily distinguish from a summary of the attached document.” The LLM has no concept of accuracy; the model merely produces content that is statistically likely to satisfy the human who generated the prompt. If the content is short, or simple, or the prompt indicates the human making the request is capable of understanding the output, then the output will tend to be more accurate. If the content is long and complex, and/or the prompt suggests a lack of understanding or interest in the original document, then the output will tend to predict and reinforce the biases of the prompter, pad the response with fake content that the prompter is unlikely to check, and so forth. So in this case (in most cases!), the LLM wasn’t even doing anything wrong. The LLM was asked to provide something that the prompter would accept as an accurate summary. The LLM generated a bunch of nonsense that, according to its training, was statistically likely to satisfy the prompter. The prompter accepted the output as accurate. That is the one and only measure of success for the LLM! The LLM doesn’t know or care that a third party, who did not generate the prompt, would find the content unsatisfactory. So the problem with LLM output isn’t just that it can be “wrong”; it’s that it’s designed to make the sort of errors that the prompter isn’t likely to identify as errors. As LLMs improve, the ability of humans to properly proofread LLM output is further and further reduced.
Richard Hershberger* January 6, 2026 at 1:54 pm It seems to me that the use-by date on not understanding the limitations of LLMs is long past. Three years ago the discourse was about plagiarism. This seems to have faded into the background. In the meantime the bigger issue, for those unconcerned with the ethics, is the quality of the product. By now, this is well known by all but the most avowedly credulous. It simply is not up to the task for anything where quality matters. Yet we still see stories about lawyers filing briefs full of AI slop. At this point it is no longer credulity in the face of tech hype. It is a choice to be lazy, submit slop, and hope nobody notices. I suspect that Lola is incapable of producing quality written work product. She slid through college using ChatGPT and has never actually written a college level term paper. If this is the job, she will need to be willing to acquire the skill set to succeed.
MassMatt* January 6, 2026 at 1:58 pm Or she can just continue on as she is and get promoted. After all, it was good enough to get her hired!
Retired Vulcan Raises 1 Grey Eyebrow* January 6, 2026 at 2:16 pm Lola will continue to use AI because she faces no significant penalties for doing so. Indeed, the AI gloss may fool some of TPTB that she is doing good work – after all, how did she get this job after her internship? The OP sounds like she has insufficient authority to really manage Lola, so she should report to her manager, with cc to higher management, that Lola is delivering crap that is probably AI crap and just produces new AI crap when told to correct her work. Someone with real authority over Lola needs to give her a final warning about AI and make it clear that the next delivery of crap means she will be fired.
Immaterial* January 6, 2026 at 2:21 pm yeah, I’m not sure why OP doesn’t want to call out the AI issue. This isn’t a school context where you are trying to prove someone is cheating or even trying to build a case to fire them. Just ask about AI use. If she denies it, you have more info about her and you can still emphasize that it shouldn’t be used. And explicitly call out the inconsistencies if the whole document wasn’t updated, same with the made-up quotations. It sounds like there needs to be more of a conversation or back and forth in terms of review, and not just notes.
pamela voorhees* January 8, 2026 at 2:40 pm I think the problem is not wanting to say something unprovable. It’s not a school or a law case, true, but OP says “Did you use AI?” or even “I know you used AI,” Lola says “No, I didn’t”, and… now what? It’s hard to address head on because there’s no good rebuttal to “I didn’t use it.” Better to just focus on the problems (and pointedly say that these are common AI errors — don’t bother with confirming it.)
Coverage Associate* January 6, 2026 at 2:24 pm The part of this that raises my eyebrows that hasn’t been discussed is LW saying LW may not have caught the factual errors if the writing hadn’t been so bad. The first few times a new employee is assigned to write a summary, especially one that includes quotes, the manager needs to allow time to fact check. Most of the time, I would imagine that a record would be kept of where in the transcript the quotes came from, even if that record is in metadata or another document. But the first few times, even summarized information should be compared to the source data. Sometimes new employees don’t understand the source data. Sometimes they miss things. Sometimes they don’t understand the assignment, all while following instructions and acting in good faith.
Sarah Hood* January 6, 2026 at 2:46 pm A big question I’d have is: does the company have a policy about using AI – specifically: 1. uploading potentially sensitive documentation/reports into it, and 2. if and how AI can/should be used to perform that kind of work. If they don’t, they need those policies, and those policies need to be as clear and thorough as possible. Having said that, IMO if a company allows its employees to use AI to perform that kind of work AND they allow employees to upload that kind of documentation (with sensitive information stripped), then I don’t think there’s anything wrong with using an AI tool to help with putting together/drafting these reports. It absolutely can be a helpful tool. But one (of two) caveats here – and it’s a BIG caveat, one that’s already been mentioned – these tools can and will get things wrong and hallucinate information. People need to not only know and understand this – they need to formally acknowledge that they know it, and be held accountable accordingly. Which brings me to big caveat #2: Today’s workforce needs compulsory AI literacy training. And when someone has completed that training, they should be held to account if they produce AI-assisted work that their training does not support.
Thin Mints didn't make me thin* January 6, 2026 at 3:14 pm Good suggestion. My organization has a set of guidelines that includes “You are still responsible for the quality of your work” and “You are smarter than the AI.”
Terrie* January 6, 2026 at 2:51 pm If they’re unwilling to call out AI usage on the grounds that they are not sure it’s AI, there’s a simple solution. “I’m still seeing the same issues. I need you to walk me through your process so I can understand where they’re coming from and how we can ensure they don’t reoccur.”
Adereterial* January 6, 2026 at 2:57 pm ‘AI isn’t reliable for this sort of work’ is not the same as ‘don’t use AI for this work’. It’s ‘use with caution and check thoroughly’, not ‘don’t use at all, ever.’ Stop beating around the bush and be direct with her – she can’t use AI, or if she does, she HAS to check it first, or only use it for very limited, defined purposes and she still needs to check it first. Hold her to account for that. But the vague feedback is setting her up to fail and causing no end of problems for everyone else.
Retired Vulcan Raises 1 Grey Eyebrow* January 6, 2026 at 3:06 pm The OP may be wary about ordering Lola not to use AI because many companies allow it, or even (mistakenly imo) recommend it. If so, she could ask her boss if Lola can be ordered not to use it, on the grounds that she has demonstrated she cannot use it competently.
nnn* January 6, 2026 at 3:51 pm What others have mentioned about having her walk you through the process is good! A couple of additional thoughts: 1. Since a repeated problem is fake quotes from transcripts, it might be useful to introduce the practice of providing a timestamp in the transcript that the quote was taken from. (These timestamps could be deleted from the final report if they’d be out of place there, but keep them in during the working phase.) That would likely be trivial for someone working with real data to do, and would make verifying easier. In some contexts, it would be useful to explicitly state that you’re introducing this requirement because in the past fake AI-generated quotes have slipped through, and the time involved in verifying and correcting them has resulted in missed deadlines. 2. Sometimes, in some conversations, stating that the problem is using AI derails the conversation, because some people feel the need to swoop in with “AI is good, actually” or “AI is a tool”. Sometimes, a useful script to circumvent this can be to list the problems, and then add “These issues look exactly like the kinds of issues that have historically arisen when people use AI for this kind of work.” Sometimes, the “look exactly like” script can better highlight the problem that needs to be solved. (OP is probably better positioned than any of us commenters to determine whether this kind of script would help in their situation.)
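For anyone inclined to automate point 1, the check itself is simple to script. Here’s a minimal illustrative sketch (the function names and the sample report/transcript are invented for this example, and real workflows would work from timestamped transcript files): it flags any quoted passage in a report that never appears in the transcript, which is exactly the failure mode that fabricated AI quotes produce.

```python
import re

def normalize(text: str) -> str:
    """Collapse whitespace and lowercase so minor formatting differences don't matter."""
    return re.sub(r"\s+", " ", text).lower().strip()

def find_unverified_quotes(report: str, transcript: str) -> list[str]:
    """Return every double-quoted passage in the report that never appears in the transcript."""
    haystack = normalize(transcript)
    # Pull out text between straight or curly double quotes.
    quotes = re.findall(r'[“"]([^”"]+)[”"]', report)
    return [q for q in quotes if normalize(q) not in haystack]

# Invented sample data for illustration:
report = 'One participant said “the program changed my life” and another said “I felt unheard.”'
transcript = "Participant 3: Honestly, the program changed my life. Participant 5: The sessions ran long."
print(find_unverified_quotes(report, transcript))  # ['I felt unheard.']
```

A substring check like this obviously can’t verify paraphrased analysis, only direct quotes, but it makes the “where did this quote come from?” conversation a five-minute check instead of a full re-read.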
Bada Bing* January 6, 2026 at 3:59 pm What’s confusing to me is that I’ve had the opposite experience. AI has been a super useful tool when doing things like combing through lengthy transcripts. You have to set the correct parameters, though, and be prepared to check its work. Ironically, in a series of AI trainings I helped conduct, we advised thinking of using AI as assigning work to an intern. It doesn’t have the instincts and grasp of nuance an experienced employee does, so you often can’t just take its work and run. As a tool, it’s not going away, and there’s no chance Lola stops using it. I would instead advise her to do some significant training on how to actually use AI in the workplace (including having a checklist to verify the output quality/accuracy).
Lenora Rose* January 8, 2026 at 10:40 am The only reason it’s “not going away” is because companies keep forcing it on us to try to recoup some of the billions in debt they took on developing it. And because people mindlessly parrot “it’s not going away” as their excuse for using it. So far, we can all refuse to use a tool that was shoved in front of us if we find it’s not doing the job we want it to do.
SB2* January 6, 2026 at 4:19 pm Is the OP actually Lola’s manager? A lot of the comments assume she is, but that’s not how I read it. I work on a collaborative team where I am a senior technical lead. I have some authority, but nothing formal. This means that I can mentor and coach and provide all the feedback in the world…..but when it comes to their performance and accountability, that’s on their manager. In fact, I have told junior colleagues before that I don’t recommend they use AI when they’re learning how to write technical documents. I have explained to them that in the long run, they are better off developing the skill themselves and gave them reasons why (it deepens their own technical skills, and it’s harder to spot AI mistakes without that experience). I feel fortunate that they’ve listened to me about it, but I can’t actually ban them from using it.
Nom* January 6, 2026 at 5:15 pm Back in my day (2 years ago) my colleagues messed up work this badly all on their own
Dancing Otter* January 6, 2026 at 7:42 pm I’m a big proponent of “Do it over until you get it right.” Mistakes happen (not that I think this was an innocent mistake); fixing them oneself is how to learn better. The deadline’s been and gone. If it takes five iterations before she manages to produce acceptable results, maybe she might, possibly, perhaps, learn something: first, how to do the report/analysis; second, that shortcuts can end up being more work than doing things the right way the first time. As my favorite engineer says, “If you don’t have time to do it right, when will you have time to do it over?”
cncxch* January 6, 2026 at 11:21 pm I had a former coworker who said they could write in French. This was important because in our role we needed to write documentation in French. They had decent spoken French but had never studied French grammar formally outside of beginner classes. They would feed stuff into what I presume was DeepL, then use AI prompts from some rando competitor of ChatGPT for the documentation specifics. Not all AI use is perceptible, but when I tell y’all this was WORD SALAD. My favorites were the overly formal tone mixed in with a few (but not all) tu-form instead of vous-form, or the lack of any consistent voice at all. Because they had never studied grammar, they weren’t able to clean it up. Then when they got called out on their AI use, they had a French native speaker who didn’t know our field proof the word salad, so it was grammatically more consistent but still factually wrong. Of course, we were both from the same place, so casual third parties at work thought my writing skills surely were on par with theirs, which could not have been further from the truth, as I had an undergraduate degree in French. Until the third parties saw me write in French in real time. I don’t miss that period of my life. Writing is a skill set that can be honed, and it is possible to DeepL/LLM something in the interest of speed (I do it a lot in German), but you need the grammar, writing, and editorial skills to clean it up. There is so much noise about people not needing to know how to write any more and, like, we do.
SheFromLux* January 7, 2026 at 1:57 am LW not speaking up is a red flag as a professional, and I would argue it is already late, given that this first came up when Lola was an intern. It should have been raised with Lola’s manager as feedback then, since the blatant use really led to extremely poor work (the manager would be expected to have a chat reminding her of the AI policy, but in general, as an intern, it was worth teaching her about ethical use in a professional setting, human agency…), especially when the company tends to hire 80% of interns. LW not having control over Lola’s performance means that I would really focus only on the poor work and, even if the task needed to be completed, again, send feedback to Lola’s manager and call out the AI concerns. It is the manager’s responsibility to set clear expectations about AI use, because it’s not about not using it; it’s about when, how, and what responsibility Lola still has for the outcome. I assume Lola is still young and learning about work, and her generation is being taught that AI is the default resource for getting through things; they need to learn that it is a tool, and that using it is a skill like any other that needs to be taught. It would be different if it was a more senior person, but I am biased, because the people who tend to not disclose they used AI are the type who have a habit of relying too much on existing things or things others did, so they won’t admit they used it and make excuses, to the point that “I got it from Google” sounds like the better answer. I feel it is like dating, bars, and Tinder: people used to lie about meeting their partners in bars and said “through mutual friends” because it was somehow “shameful,” then Tinder came along and people saw that as the new “shameful” option, and now everybody was meeting their partners in bars.
Ted Dibiase* January 7, 2026 at 3:31 pm This is everywhere!!! The quality of output from coworkers is bottoming out. You never can be sure if you’re working with a complete moron, or they are mailing it in with AI. Either way, they have become useless. Unfortunately, there is currently no way to speak up about it without some AI zealot barking back saying that you just need to “adjust your prompts” or some other pointless take. We have moved into an every man for himself environment, as the AI output does look good from a glance, but is not at all helpful to the consumer (generally not the manager).