I used ChatGPT to replace a team’s input when they weren’t responding … and now I’m panicking

A reader writes:

I messed up royally. I’m two years in my first full-time role. My job is like in-house consulting. My team is trying to improve our internal processes. We interview people in the process about what they’re struggling with and look for ways to improve it.

I’ve done a lot of these interviews by now, but one of them was like pulling teeth from a lion trying to bite you. The answers they gave were vague and unusable like, “It’s abstract.” When I asked for more details, they’d repeat the same vague answers or say things like, “I could explain, but you wouldn’t understand it.” As we talked in circles, the team became increasingly gruff and dismissive. They’d probably call me pushy or argumentative. I asked for a list of things they would need to give me feedback that wasn’t “it’s abstract” and the list they gave me was wildly out of scope for what my team was doing. At the end, I felt like neither party could accurately describe what the other was talking about. I didn’t get the information I needed, and the tone of the interaction left a bad taste in everyone’s mouth.

I went back to my team with the little I had, but my team had the same questions that I asked and didn’t get real answers to.

I tried to message the two people on the other team who got closest to the information I needed. One of them, the lead, said she’d rather discuss it again with the whole team and to make a meeting, so I made a meeting. Then the lead emailed everyone on the meeting and said this could be an email, not a meeting, so I sent them an email. I spent a lot of time trying to word the email clearly to fix the communication issues we had. After a couple days, I sent them a reminder. The next week, I sent another reminder. They never responded. We’re remote, so I couldn’t stop by their desks.

Here’s where I really really messed up. I had to give a presentation that needed the information I had to get from them. So, in a last-minute panic, I put the email into ChatGPT. Its answer sounded plausible, so I used it in the presentation. The presentation went great.

Now I’m terrified about when my bosses find out I never got the answers from the team, and then I have to tell them I got them from ChatGPT. I know I did the wrong thing. What was I supposed to do? What can I do now? Could I have fixed that awful meeting somehow?

Oh no.

Okay, here’s the thing: this wasn’t a situation where ChatGPT could have helped. Your team doesn’t need to know what ChatGPT thinks could be improved in a particular process; it needs to know what a specific team in your specific workplace thinks needs to be improved — what their pain points and challenges are — and that’s not something ChatGPT could possibly know. At best you’ll have gotten broad, vague suggestions that may or may not apply to their context. At worst you’ll have gotten things that don’t make sense at all and don’t reflect anything the other team ever would have cited. For all we know, ChatGPT offered up suggestions to fix things the other team likes about the process, or things that don’t apply to them. And it’s highly, highly likely that it didn’t identify their actual problems, because how could it know? (They don’t even seem to be able to articulate those problems themselves.)

I get that you were frustrated with the roadblocks the other team was putting up. The right thing to do at that point would have been to go to your manager, explain what was happening, and ask for guidance. Your manager might have been able to suggest another way to approach it, or might have talked with that team’s manager herself, or who knows what — but the important thing is that then she’d be looped into what was happening and could help you decide how to proceed.

I think it’s really important that you figure out (a) why you didn’t do that and (b) why ChatGPT seemed like a reasonable solution — because otherwise I think you’re likely to have significant lapses in judgment again. I want to be clear that I’m not saying that to berate you! What’s done is done. But if you don’t figure those things out, you’re at high risk of stepping in a similar landmine again.

As for what to do now … well, you provided key information that was just made up. Is your team planning to act on that info in some way? If so, you need to do something about that. You can’t let people put time and resources into solving problems that don’t actually exist (or ignore big problems ChatGPT didn’t tell them about). If they moved forward based on that info, presumably at some point the other team is going to hear that your team has solved “their” problems, and it’s likely to come to the surface that they never said those things to you.

I don’t know exactly what you said in your presentation, but is there any way to spin it as having been your best assessment based on limited info, and make it clear that the specifics did not come from the other team? Of course, if you said anything like “I spoke in detail with two program analysts, who identify XYZ as their biggest challenges,” then that’s not going to work. So it really depends on exactly how you framed things.

And a lot of what happens from here depends on how your team will use the contents of your presentation. In a best case scenario, your office is one that collects input from people and then lets it sit unaddressed, and so they won’t use it at all! But given the nature of your team’s work, I doubt that’s the case and you may need to come clean to your boss.

{ 444 comments… read them below }

      1. Megan*

        I believe it can be used ethically and with limits, but it shouldn’t replace humans at any juncture in a process and should be nowhere near a finished product.

        1. Call me Javert*

          At the risk of derailing this, these models are rooted in wholesale theft of other people’s work. There is no ethical amount of theft.

              1. Crooked Bird*

                You win my “I just explained this to my 10-yr-old b/c I LOLed” award for today…

            1. CowWhisperer*

              Don’t use it for ASL instructions or stretches either.

              For the English question of “What’s your name?” in ASL, ChatGPT recommended bowing at the waist then finger spelling the other person’s name. There’s a few issues here – namely that the signer is cutting off communication by staring at the floor and presumably doesn’t know the other person’s name to fingerspell.

              I’d have accepted “Your name what?” with eyebrows down as an answer or the 30 YouTube links that used to appear.

              For “piriformis stretch”, I just got 6 transition words like “Then” “Next” and “For example”.

              1. k.*

                I wish people would cite which version they’re using — GPT-4o handled the ASL question unproblematically. I did have to prompt it about the eyebrows (“How should your eyebrows be oriented?”) but it answered accurately (“When asking a “WH” question like “What’s your name?” in ASL, your eyebrows should be furrowed to indicate that you are asking a question that requires more information. So, keep your eyebrows down and slightly furrowed while signing “What’s your name?”)

                These models don’t produce outputs that are accurate all the time — this is also true of search engines! — and any outputs should be verified by an external source, but people seem really invested in using gotchas to dismiss the tool as a whole.

                In the case of this letter, it’s not that ChatGPT produced inaccurate into, it’s that the letter writer deliberately used it for deceptive purposes.

                1. Dina*

                  But if you have to verify whatever ChatGPT says, because we don’t know if it’s lying this time, why not just start with the search engine where you can verify the sources yourself? Instead of using a glorified text generator to do it?

                2. Calamity Janine*

                  sadly… chat gpt is not a model that creates accuracy. it creates blarney that sounds good.

                  there are ways to use AI repsonsibly but chat gpt and these other “black box” models are pure garbage in, garbage out, made by people who think that the computer must be right (a dangerous and foolish notion anyone who can program a “hello world” will realize). i have been listening to my father tell me how this is the literal worst practices in his field for at least the past twenty-five years. among actual professionals who know what they’re doing, chat gpt is a laughingstock and an embarrassment. it is as sound an investment as dogecoin and NFTs of procedurally generated cleavage.

            2. Lenora Rose*

              In that case the small ethical issue of theft is outweighed by the larger issue of a social system that doesn’t provide for people otherwise starving and go ahead. But we were talking about the wholesale theft of massive quantities of intellectual property (which also, incidentally, increases, not decreases, the chance the original creators will make less money and need to steal bread to eat). There is NO similar countering ethical issue, certainly not one which justifies ChatGPT or other plagiarism machines existing.

              1. kicking-k*

                _Exactement_. More like the Thenardiers, seeing what they can get out of the situation.

          1. Tio*

            So, our company built its own, in-house model due to them freaking out that we were gonna put our data into external sources, and they’ve only trained it on publicly available things like stock photos and previous documentation. It’s pretty decent, too. I know why we hate AI in general but there can be responsible models built still, I think. (Most aren’t though.)

            1. JSPA*

              Interesting! I wonder if there’s a reasonable model for adding to this by having writers and graphic artists and even musicians on staff to provide a range of (non-computer aided, style- and content- and mood- annotated) writing and graphics for the AI to draw on? It’s a bit feudal… but seems an overall ethical way to develop a robust brand style?

              1. Chuffing along like Mr. Pancks (new name for Reasons)*

                I lack the knowledge to speculate as to whether this could work, but it sounds like a healthier direction!

              2. KP*

                Almost like you could just hire those people to do the work in the first place! Crazy idea, using people for creative nuanced work!

              3. Lenora Rose*

                Couldn’t the people just… write, provide graphics, and make music to fit the company’s style?

            2. Lenora Rose*

              This solves the plagiarism and theft issues, and I like that; how does it do on environmental impact?

          2. TheBunny*

            Yes and no.

            AI scrubs what is already out on the internet, it just does it quickly. As long as you are not relying on it for answers as a final answer, or claiming it as your original work (as OP did) them using AI for info is not really all that different than saying you “Googled” something.

            The danger is when people take what it found and use it word for word and not as a reference point.

            1. hello*

              “AI scrubs what is already out on the internet”

              The fact that it is available to view on the internet does not mean it is free to reuse however you see fit, especially for profit.

              1. ceiswyn*

                All so-called AIs do that. It’s not a bug, it’s an intrinsic function of the way they work. There’s an excellent journal article on the subject on the Springer website, under the heading ‘ChatGPT is bull****’

            2. Chuffing along like Mr. Pancks (new name for Reasons)*

              But “what is already out on the internet” is such a jumbled mass of accurate and inaccurate stuff, sense and nonsense, lies and mistakes and satire and straight news and pure product of delusional paranoia. There’s a quip I think sums up AI search tools perfectly, coined by Mark Crislip, a medical doctor who was writing about the unproven/disproven/nonsensical treatments that are collectively referred to “integrative medicine”:

              “Integrating cow pie with apple pie does not make the cow pie better. It simply makes the apple pie worse.”

              When you Google it, the components are spread out below for you to examine. Search AI tosses them all into the blender.

              1. DJ Abbott*

                This also brings to the discussion the state of medical practice. Ask anyone with a chronic health condition – like me – why alternative medical practitioners exist. It’s because the regular medical practitioners are not stepping up to help. We have to get information and treatment somewhere and when the MDs blow us off we don’t just lay down and suffer, we go out and get the information and treatment we need wherever we find it.
                I’m offended by his cow pie example, as it implies that all alternative medicine and treatments are crap. This is not the case. Some of them work better than establishment medicine. And all medical treatments work for some people but not all people. So he needs to get off his high horse, and look around.

                1. Chuffing along like Mr. Pancks (new name for Reasons)*

                  This is probably not the place for an argument, so I’ll agree with you that the state of healthcare is not good in the US and stop there!

            3. kicking-k*

              Apart from the ethical aspects of taking it without permission or redirect to the source, though, there’s the practical aspect of the information often being misinterpreted or useless because the context has been stripped out.

            4. Lenora Rose*

              What’s already on the internet includes a lot of false information, though, and while a google search that calls up individual web pages will include, Say, pages called “Facts about X”, “Common myths about X”, “A new study confirms X does not do Y”, and “X totally does Y!!!” A reader with any serious literacy can sift the info there and figure out that the only place saying X does Y is a site known for crackpottery, and read the study and the myths to confirm it almost certainly doesn’t.

              AI will smush together all four of those sources and produce “X is claimed to do Y” as a fact, even though most of the sources are actively debunking that claim and only referenced it to refute it.

              1. k.*

                I mean, there are a lot of confidently incorrect claims being made in this comment section from people who appear to have no actual recent experience with building or using AI tools. I think I’m less optimistic than you are about the literacy skills of the general public. (Besides, anyone with decent literary skills should also be able to figure out how to interrogate or verify a factual claim being presented by an LLM using some pretty simple strategies — an LLM doesn’t remove your critical thinking or research skills.)

        2. AnnieB*

          I used to use with caution as it really could be great if you cast a critical eye over the results. Now, I think with the impact it’s having on the environment and the issues with using material other humans created without consent, I’m a 100% no and am expressing the same to my students.

        3. Presea*

          I’ve seen it have practical and ethical use as basically a search engine for a very niche problem. It saved a human probably dozens of hours of compiling information manually and allowed them to skip to the part where they processed that information instead. But

          I think the important points here are that 1. its output was not considered a finished product, 2. its output was fact checked and further processed by a human as an inherent part of the reasons the info was required, and 3. the environmental impacts were offset by the fact that it could do the work orders of magnitude faster than a human would, since its not like a human doing the research would have 0 impact on the environment.

          1. Jackalope*

            Except the human still exists and is still using roughly the same number of resources they would otherwise. The only way that using AI is going to reduce the number of resources used instead of the current catastrophic overuse is if something of the human beings that would otherwise have done that work are terminated so they aren’t doing it instead, and I am hopeful that we will never get to the point where that’s considered an acceptable solution to this problem (although I’m sure that some of the people pushing the new AI systems would be fine with it).

            1. Presea*

              Not in this particular case. The person literally didnt lose any hours or work, she just got to essentially skip a step of that particular project, getting it done in a couple of ChatGPT prompts as opposed to literally dozens of hours of googling and calling human people, very useful on a small team

              I’m not arguing that generative AI isn’t likely to cause an automation crisis or that its always good, just that it’s a tool with potential positive use cases even if those are rare compared to people over abusing it

          2. Rainy*

            But it’s not attached to the internet, and if you were using it as a search engine on its own bank of (stolen) text, you were just getting garbage.

            If you were putting in a bunch of data, congrats, it just took all that data and added it to its own bank. Hope there wasn’t any PII in there!

            1. Chalky Duplicate*

              Not necessarily?

              When you’re using the paid commercial APIs, part of what you’re paying for is a guarantee that none of your requests will be reused as training data.

            2. Presea*

              I can’t go super into detail, but none of those concerns applied to the use case I’m talking about. They just needed some very niche but 100% public data from state governments compiled together in a very specific way. Think related but subtly different laws and regulations that might differ in wording and execution between jurisdictions, that almost nobody would ever need all together in one place.

          3. Mel T*

            I find it helpful for answering weird little questions as well. I had a student ask what peanut butter tastes like. He can’t eat it due to alleriges. ChatGPT did a lot better job of putting that into words that I could. Ha! It is also good at creating fake data in context for assessment questions that I can use to help students interpret and graph said data. This is especially true when it relates to science topics that I can’t easily test and collect data for in class due to limited time and equipment.

            1. Squid*

              …how hard is it to describe the taste of peanut butter OR find someone else’s description of it (which ChatGPT likely looted for the one you got).

        4. Fierce Jindo*

          The environmental cost is not ethical. The energy and water usage are absolutely staggering.

        5. Misquoted*

          This. I’m a technical writer, and believe me, I’m not open to fully embracing it. But I’ve used it to brainstorm lists of things or to phrase statements about non-specific topics. It’s a starting point and always needs work.

          1. Jess*

            Ditto and ditto. I’ve used it for those times where I’m staring at a sentence or paragraph that I *know* I need to rearrange but can’t quite figure out how – it’s unlikely that I’d use the result verbatim, but it can give me a fresh starting point for looking at it.

          2. Ace in the Hole*

            I find it helpful for providing a basic starting point on tone, style, structure, etc. for some types of documents. But it cannot be relied on for actual information.

            Personally, I think of it like cloud-watching: I “find” the shape of my ideas by looking at something that gives a vague impression. With art, I often start my creative process by making ink blots or random marks so I can “find” images in them. Chat-gpt is just the text version of this process. The actual text it produces is meaningless garbage, but looking at it helps me imagine what I actually want to write.

      2. Polaris*

        At least “not without very careful consideration” and “extremely rarely”.

        Even with what I do, doing something like this would be an act of malicious compliance and burn through years of capital. (As in, I’m being asked to put a dollar value on a Llama Grooming Service Contract, but the Llama Grooming department is refusing, in writing, to answer my questions but is still demanding an answer – well, do that to me with emphasis, and I’m going to make you an absolute idiot in front of our boss…)

      3. Caramel & Cheddar*

        Seriously. I urge people to look into the environmental impacts of these tools. We talk about the “ethical” implications of putting people out of jobs, but those pale in comparison to the fact that we’re wasting astounding amounts of resources to produce garbage prose that a human has to fix anyway. It’s helping accelerate the way we’re killing the planet.

        1. Pescadero*

          While AI uses a lot of energy for training – individual queries aren’t that intensive.

          The generation of pictures, text, etc with AI uses quite a small amount – roughly 300 – 3000x less electricity than if you had a person sit in front of a computer for the necessary amount of time to produce something similar.

          1. AmberFox*

            When does an AI stop training, though? From what I can tell, you have to explicitly tell it to stop training… and something big like ChatGPT is never going to allow anyone outside its owner turn the entire training module of the entire AI off. (Maybe you can turn training on your own searches off… but again – that’s one individual. It’s still learning from everyone else – not to mention whatever other data it’s being fed.)

            1. Chalky Duplicate*

              (My prior comment seems to have been eaten — apologies if this is a duplicate)

              Even when a model is trained on prior users’ interactions, that doesn’t mean that training happens _while that interaction is taking place_; instead, it means that the company may choose to use prior input/output pairs to inform future behavior when they train the next version (which is a completely separate process from running the current version and retrieving its output). In general, if you rank results as good or bad through the UI, that’s what you’re _asking_ the team to do, so that the next version they train will be more likely or less likely (as appropriate) to generate that kind of output again; whereas if you’re using a paid commercial account, part of what you’re paying for is privacy — not having your interactions used for training (unless you ask that they be used that way, as by providing feedback on a given interaction’s quality).

              And in general, folks training models have a “training budget” — they decide they’re going to put X billion or trillion tokens of training data in, and keep going until they hit that budget — so giving them more data isn’t really changing what the overall budget for a given next release is going to be.

        2. Come on, man*

          If you’re worried about the ethical AND environmental impact of ChatGPT, I’d like you to research which site gets over 100 million a day (roughly double what ChatGPT does), and some of the things they’ve gotten in trouble for facilitating…

          That particular “hub” uses far more resources and contributes to far more problems, both socially and environmentally.

          1. Caramel & Cheddar*

            I am confused why you think people can’t be upset about more than one tech giant at a time, but I assure you many of us are capable of multitasking.

            1. Come on, man*

              Because there’s not much point pearl clutching over a useful tool when there’s an even bigger polluter doing much more to “acclerate the way we’re killing the planet.”

              Kind of an analogy to the actual idea of climate change and realities of reducing our carbon emissions when it comes to other nations, and the fact that we still have politicians using jets to get to a conference that could be done over Zoom (or email).

              1. Lenora Rose*

                Save this energy for the articles telling people to minimize use of AC in a heat dome while tech companies guzzle our water and drive environmental destruction, not for trying to split hairs between the different tech companies who are doing the environmental destruction.

      4. MaskedMarvel*

        It’s great for assisting coding.

        I know how to code already, but it gets rid of the repetitive donkey work.

        I’m 4 times more productive now.

        I also use it as a sounding board for business solutions problems. it sparks ideas.

        1. C-Suite Diva*

          I think folks with a technical background who use it for tech work are at a distinct advantage. I keep running into communications pros who don’t want to think or problem solve or write an original thought and expect ChatGPT to do that work for them and the results are decidedly meh to downright alarming.

          1. Le Sigh*

            If I have to hear one more person suggest I get “AI” to write my next donor pitch ….

            1. Donkey Hotey*

              Related: can’t link to it here because LOTS of swearing, but there is a lovely article out recently called “i will (redacted) piledrive you if you mention ai again.”

            2. House On The Rock*

              As an experiment, I put an email I’d written describing a relatively complex data problem through my institution’s in house ChatGPT 4. I wanted to see if it could make it a bit cleaner and clearer for a less technical audience.

              It took several paragraphs of explanation around all the reasons why two similar metrics were different (data sources, timing, different methods of aggregation and attribution, etc.) and came back with something along the lines of “Metric A is different from Metric B because they are different for a number of reasons”. Yeah….

              1. Le Sigh*

                This is part of why I’m just uninterested. Anything I’ve seen ChatGPT spit out needs so much rework it would be a lot faster (and more accurate) to write it myself from scratch. In writing-heavy fields, I just don’t see any efficiencies (in addition to the many other documented issues).

                1. Snoodence Pruter*

                  Yep. This is happening at my workplace right now and it’s maddening – they’ve laid off half the editorial staff and they’re getting AI to write everything, trusting in literally one guy (who is trained in the subject matter, but IMO not sufficiently specialist) to pick up on any errors it makes. It’s a horrible idea and it’s going to screw the product. I’m so angry.

              2. N C Kiddle*

                What, it didn’t put them into a numbered list? Every time I’ve tried it, it’s given me a numbered list of something.

          2. NoTurnOnReddit*

            My (c-suite in communications) boss wanted me to use AI to create a video we were using for donors (instead of hiring a voice actor & paying royalties to a musician). I told him I would quit if he made me do this.

            We hired the voice actor & paid the musician.

          3. Audiophile*

            As a comms pro, I can’t fathom this. Anyone who’s even played around with ChatGPT knows it cannot write well or solve problems without a lot of work.

          4. Skoobles*

            Using AI for technical work is deeply stupid unless you’re in a field where the kind of questions you might ask are routinely posted and answered in online forums or somewhere else that can be scraped.

            Using it for mechanical or chemical engineering work, for instance, will get you very confidently wrong and negative-helpful answers.

        2. Moose*

          This is also how I use it on occasion; I have enough training to muddle through working in XML, and know how to run an XSLT transform on it, and know what they should look like….but don’t know how to write said transforms myself (which I did try doing!) Asking our IT people for help wasn’t an option, so I ended up using ChatGPT, rather than trying to teach myself transforms in a rather limited time frame. I think the key is using it judiciously, to assist yourself, NOT to try and do the entire job for you. You need a baseline of knowledge to know that the program is giving you accurate code/information.

          1. MaskedMarvel*

            I break the problem into manageable chunks where I know what good looks like.

            I also tended to score highly on tests for critical reasoning, and tend to recognize malarkey.

          2. Polyhymnia O’Keefe*

            Recently, I was trying to get a specific, reasonably complicated excel formula to work, and I just couldn’t figure out the last 10%. I was almost there, but I was missing something. I plugged the formula into ChatGPT and it corrected it for me, but what it came back with was still not quite right, so I described in words what I was trying to do, and the next result was right. To me, that’s the benefit of ChatGPT. I work in the arts, and would never use it to replace things like creative work or writing — but helping me to troubleshoot an error like that was super useful.

        3. Hamster Manager*

          “I know how to code already”

          Yes, ideally, everyone who uses AI tools will be able to check their work against their own human expertise (and will actually do it), but it’s realistically likely that more people are using AI tools IN LIEU OF building any human expertise/doing their own work (see: AI “artists” defending ‘writing the perfect prompt’ as having artistic merit when they could never create or conceptualize the same work by hand). It’s human nature to take shortcuts, and that’s what LW did.

          I think when you’re new to the workforce, you don’t understand that it’s ok to say “hey I can’t compete this task on my own” when you run into situations like LW was in.

          1. Caramel & Cheddar*

            The “in lieu of” is important here and I think really minimized by people who love this stuff. ChatGPT was apparently down recently and you could see the flood of people on social media saying “How am I supposed to complete my school work / job work now!” I cannot imagine a situation where ChatGPT being down has any impact on my ability to do my job, but that’s because I have actual expertise in it and am not just outsourcing that work to pass off as my own.

            1. libellulebelle*

              This is just wild to me, given that ChatGPT has been a “thing” for, what? About a year-and-a-half now? People really can’t remember how to do anything without it?

              1. Caramel & Cheddar*

                That doesn’t surprise me, tbh. I remember coming back to “normal” work after the pandemic had started and the number of colleagues I had who couldn’t remember very basic tasks they had done pre-pandemic was shocking to me. So many people just forgot how to do their jobs. Our brains really do seem to atrophy really fast.

            2. MaskedMarvel*

              it was down, and I was somewhat hamstrung, not because I cannot do the task, but because it was so much faster than I am at producing code without silly syntactical mistakes.

              I did something else until it came back up.

            3. An Australian in London*

              What I think is missed here is that writing code is the *easy* part.

              Code is very odd and is one of the few (perhaps the only?) human creative endeavor where it is easier to write it than to read it.

              1. Antigone Funn*

                Ha, this is an interesting point. Good prose draws the reader into the writer’s thoughts or feelings; that’s half of what makes it “good,” as opposed to bad. (Having worthwhile thoughts to convey is the other half, which is something LLMs are never going to replicate with math.)

                Good code needs to solve a problem first and foremost, and preferably continue working even if things change (the runtime environment, other parts of the code). Being readable and transferable to another person is considered desirable, but not strictly necessary, and many programmers don’t bother with it.

                1. An Australian in London*

                  Yes to all that, and I think it’s even worse than that.

                  It’s harder to read our own code than to write new code, even when it is superbly documented. Even when the entire design process and all design decisions are documented.

                  I’ve heard it said that it’s a rite of passage for all coders to read some old code and ask, “What idiot wrote this?!” and then realise that we are the idiot who wrote it.

                  This is why I’m not very impressed by using LLM genAI to write code. Literally any coder can write code – junior developers fresh out of college with no experience can (with some onboarding and written guidance) write code good enough to run the business on.

                  It takes immense skill and experience to be any good at glancing at code and understanding what it does, let alone why. And surprise, surprise, genAI is absolute rubbish at doing this. I have given it very simple code and asked, “What does this code do?” and yes, it can walk me through the blocks and explain what each section of code is doing, but it cannot tie it together.

                  Anyway, no offense to those writing here about how much more productive they are at *writing* code using genAI but it is not the flex they perhaps think it is.

        4. Justin*

          Yeah I’ve used it to help me make up quiz questions based on data I’ve given it. Like, “hey here’s the numbers, turn this into ten questions.” I never trust it to actually give me any new info.

      5. Lab Boss*

        I think it can be a great tool. I was asked (well out of the normal scope of my job) to “design a mentoring program.” I asked one of Chat GPT’s cousins to give me a list of 12 topics for a corporate mentoring program- I wouldn’t have even been sure where to start, and it gave me some good (if very generic) ideas. The key is I used that as a starting point of inspiration and created my own outline- discarding, combining, rearranging, and adapting the AI material.

        1. Resident Catholicville, U.S.A.*

          Genuine question: couldn’t you have Googled “how to start to design a mentoring program,” read a few of the results, then pared the info down/adapted it to your own situation? Why was Chat GPT the choice you took?

          1. Caramel & Cheddar*

            Google kinda sucks these days *because* of AI, ironically. Though if you look for results before 2022 or so, usually you an find something decent.

            I think a lot of people like ChatGPT because it talks to them in a conversational tone vs having to actually parse through Google results. That’s one of the many reasons I hate it, because it takes too long for it to actually get to the heart of the matter.

            1. riverofmolecules*

              > That’s one of the many reasons I hate it, because it takes too long for it to actually get to the heart of the matter.

              At my last job, the CEO would use it to write things and I could always tell because it always talked around the issue. It never actually made a strong assertion that one would expect the company and/or the CEO would be able to do. It just would define terms, say something generic about DEI, repeat points.

              To be fair, it was very similar to what actual people who didn’t really know their stuff about racial justice would say, just generic, non-actionable stuff with the right vocab thrown in.

              1. Perihelion*

                Yeah, that’s one of the ways that I recognize when my students are using it. After a discussion on the faculty list serve about detecting AI writing, I put some samples of my own writing in and asked if it was written by AI. Chat GPT said not, and explained it was because the excerpt showed specialized knowledge and included specific examples. Which I thought was telling about the major limitations of text generated by ChatGPT.

                1. just some guy*

                  GPT’s explanation here is bogus. When you ask GPT to show reasoning, it will fill in something shaped like a reason, but this doesn’t mean that’s actually how it reached that conclusion.

                  For instance, ChatGPT’s training set includes enough multiplications that it pretty much has its times tables memorised out to 999 x 999. If you ask it to multiply two 3-digit numbers and show working, it will often give the correct product, but the working along the way will be garbage, full of errors that shouldn’t have led it to a correct answer. It’s operating something like a student who’s copied the answer in the back of the book and is now trying to interpolate the working to make it look like they actually worked through the problem.

              2. amoeba*

                Hah, to be fair, that kind of sounds like our management even before ChatGPT existed!

            2. MaskedMarvel*

              I set a background prompt to assume I had a PhD in the subject, never humor , flatter, use weasel words or make stuff up and to always provide links for citations.

              Hey presto! Dry as dust.

              1. Cthulhu's Librarian*

                And still as untrustworthy as a conman.

                LLMs are just very comprehensive weighted bias tables. Anyone using them for significant work or to gain understanding is being foolish.

                1. MaskedMarvel*

                  I work as an engineer at an AI data company.

                  For certain well defined problems it works very well.
                  additionally, in turns of turnaround time and understanding requirements it typically works better than contractors

              2. Lenora Rose*

                Dry, sure, but do the sources it cites actually A: exist, and B: say what it claims they say?

                You can’t tell it not to make stuff up when its entire process is to predict the next text based on the prompt, and it doesn’t understand making things up, or the concept of truth, because it doesn’t have a brain.

                1. Perihelion*

                  Yeah, in my (limited) experiments it mostly made up citations. The one that was real did not say what it said it said.

                2. k.*

                  I’m with MaskedMarvel. I think a lot of the responses on this post are from people who rarely use tools like ChatGPT and aren’t aware of widely-known strategies to verify the information it provides (asking for links to citations, asking the model to verify a response with code) or are referring to the kinds of responses that are generated by outdated or free models. Which is fine! But a casual user who dabbled with a free LLM six months ago for fun is just not going to have a good sense of how it might be used judiciously and effectively in a professional context.

                3. Evan Þ*

                  @k, but on the flip side, a lot of people are trying to use LLM’s at work without knowing those strategies themselves. For example, look at the multiple lawyers who got found out when they cited nonexistent precedents made up by ChatGPT. They sure aren’t using it judiciously or effectively.

          2. Lab Boss*

            Like Carmel and Cheddar says, Google is already turning into an AI scraper anyway. To be more specific: because Microsoft Copilot had just rolled out onto our desktops and I wanted to see how good of a job it would do. I understand the overall idea of mentoring, and just needed some quick inspiration on topics, so I was OK with “here’s a broad summary of what the internet thinks.” If I needed to know about something totally unfamiliar, or a more detailed guide to processes, then I’d have been hunting for actual content written by a specific expert.

            1. Dina*

              Except you didn’t get a broad summary of what the internet thinks. You got an approximation of what the model thinks a webpage might say. Sometimes those line up… Sometimes not so much.

              1. Lab Boss*

                Sure, but the point I’m trying to make here is that I wasn’t looking for a set of verified facts or an official method. It was the quickest way to get a list of inspiration ideas that gave me a starting point to flesh out into what I needed. There was no way for it to be “wrong,” the worst it could do would be suggest something stupid or irrelevant that I could look at and say “nah, I’m not using that one.”

          3. Resident Catholicville, U.S.A.*

            I’ll caveat: I used one of the image makers recently to create an series of images for me because my art skills are limited to crayons and stick figures. Not only did I use it because I would never have been able to draw/color/design them myself, it was for strictly non-commercial use (ie: personal use, no business use) and they’re highly personal (spiritual images, nothing obscene or anything) that I might not want to have to explain to an artist why I might would want them created. I’m not opposed to the use of them or the creation of them, but it seems like if it’s something I can reasonable accomplish and/or the implications are wider than myself, I’ll do the research and make it work for what I’m going to need it for.

      6. Bear*

        I mean, I used it at work the other day to generate some “catchy” name ideas for a recurring work social activity. (A very different scenario from above…)

        I agree with Megan. It’s already in the world. We can’t force people to not use something that exists, we need to put instruction in place (starting in schools!) so people understand HOW to use it.

        1. Siege*

          There are dozens of name generators out there that aren’t pilfering stolen content, being used to benefit the very few, or using enough energy to be actually detrimental to the environment, though. That is not a niche website concept.

          1. Bear*

            My company provides a ChatGPT tool to employees and encourages us to use it. I wasn’t getting anything close to what I needed from the handful of web name generators I tried, so I tried the AI tool instead. Didn’t use any of the names it gave me as-is, but the results pointed me in a better direction that got me to the name I did end up using.

            I stand by what I said: this isn’t going to go away, so we need to be teaching people how to use it if/when they choose to.

      7. Walk on the Mild Side*

        Recently, I had *pages* of feedback from 16 individuals that I used ChatGPT to summarize into themes and bullet lists, and then I reviewed the output to ensure it captured the information accurately. AI did this in minutes, and only took two hours of my time for the review. All with full support of my manager. The two key points in it’s successful use were: 1) I reviewed the output for accuracy, and b) I had support from my manager.

        AI *can* be a useful tool, if like other tools, you understand how to use it and when it is appropriate to be used.

      8. Justme, The OG*

        I disagree with that. I’ve used Chat GPT to help me craft emails when I’m having a brain fart or don’t know how to say something without sounding mean. It definitely has a time and place.

        1. Asdf*

          For me as a non-native English speaker it is occasionally super useful. I ask it to give me alternative formulations etc. Sometimes it’s hard to get it to use my tone but even then fixing my grammar is helpful.

      9. JFC*

        I strongly disagree. Chat GPT has many wonderful uses when it comes to idea generation, background information, writing boilerplate text, etc. The problem is that you can’t pass it off as human-created ideas and anything it generates needs a human review before being presented or submitted.

      10. Hastily Blessed Fritos*

        “Never” and “don’t” unless you are working on a project specifically to make use of it in a well-defined context. It can be useful for handling natural language questions about a restricted dataset (so that hallucinations aren’t as much of an issue).

      11. Beth*

        I don’t think this is actually useful. Once a tool exists in the world, people will use it–and there are ways to use ChatGPT that are at least pretty harmless. A human who’s not confident in their writing using it to proofread an email they wrote before hitting send? A job hunter using it to generate a resume template that they then edit and update to reflect their own info? I don’t see any issue with those. And I hear from software engineers that it can actually be pretty useful for proofreading code.

        But the ‘when’ and ‘how’ of using this kind of tool is really important. Using it inappropriately can cause big problems, and it’s a new enough tool that I think it’s still not intuitive for people to judge when is the right and wrong time to reach for it. I’d never use ChatGPT for:
        – Anything that would require me to submit sensitive info to it. If I wouldn’t send a text to a random third party to read it over, I probably don’t want ChatGPT storing it and incorporating it into its ‘learning’ database.
        – Research. In cases like OP’s project, where they need specific info from a specific team, there’s no way ChatGPT would know the answer. But even for questions that can be answered by Google, ChatGPT hasn’t shown itself to be very good at differentiating between the right answer and the ‘someone said this on the internet once’ answer. It’s not reliable.
        – Writing something that I plan to submit without edits. It’s one thing to use ChatGPT to get a framework and then beef it out yourself, or to write a solid draft and use ChatGPT to proofread it. But writing that’s fully by ChatGPT tends to be low quality.

        1. Jackalope*

          Part of the argument is that the AI tech in and of itself is not harmless. It’s causing devastation to the environment and as others have said above, it’s using outright theft and plagiarism for training. I can imagine ethical ways to create this tech but they aren’t the ways that its creators have chosen. A lot of companies are excited about it right now as the newest and shiniest toy on the market, but I would honestly encourage people to push back on it as hard as possible. Despite what its creators want us to think, its adoption isn’t inevitable, and I truly hope that it’s just a flash in the pan.

          1. Beth*

            I agree with you about the concerns around it, but am also aware that we live in a world where “is this environmentally sustainable?” and “was this tool created in an ethical way that offers full credit and ownership and to all contributors?” are separate questions from “is this usable in the workplace?” I avoid it myself, and on a personal level, I think less of people who I know use it heavily. But I also wouldn’t feel that I could write someone up someone for using it to proofread an email or find typos in their code, you know?

            But I absolutely could formally reprimand someone for feeding proprietary data into ChatGPT, or presenting ‘research’ from it as actual results to interviews as OP did, or submitting consistently poor work because they’re using ChatGPT to do their work without revising it for quality.

          2. An Australian in London*

            It’s one of the stronger examples of why technology is not neutral.

            Technology was designed by humans, reviewed by humans, subject to human governance, and used by humans. Every one of those stages embeds biases and prejudices.

            I think people who protest that tools are neutral and only their human users can be ethical or unethical and confuse technology with science. Nuclear fission – that it exists and is possible – is neutral. There are natural fission reactors buried underground – some large deposits of uranium. All human-designed nuclear fission technology is not neutral.

          3. Lenora Rose*

            Its adoption has already resulted in filling the internet with even more unuseable crud than all the human scammers and spammers had managed, too. The Enshittification process was already a problem, but AI has made it worse to a genuinely astonishing degree.

        2. CowWhisperer*

          My issue is that all your examples were able to be done in 2000 using a desktop – and really don’t need ChatGPT.

          I really wouldn’t recommend that a non-native or less than fluent English speaker use ChatGPT for proofreading. It struggles getting verb agreement correct in sentences with prepositional phrases.

          It might be useful for finding new verbal phrases or constructions to use – but proofreading is trickier than a person adapting a construction.

      12. Salsa Your Face*

        Not so. My job encourages us to use AI within parameters and has even provided us with our own custom, internal-only chatbot tool. It’s very helpful when used correctly.

      13. Adereterial*

        No. That’s just unrealistic, and scaremongering. It’s perfectly possible to use it sensibly, and effectively, as long as it’s done with care.

        1. Lenora Rose*

          How? What uses does it have that don’t outweigh the combination of destructive environmental effects, active plagiarism, lost jobs, worsened internet sites and searches, and in several cases so far, destructive attempts to replace jobs (like pychologists and therapists) which absolutely cannot be handled that way?

      14. SheLooksFamiliar*

        Exactly, never a chatbot like a vending machine of curated information. Whatever ChatGPT, Gemini, Perplexity, and the rest give you will still need a human to evaluate and make decisions about the…well, okay, let’s call it ‘information.’ And I’m not convinced that’s always an accurate term.

      15. k.*

        I use ChatGPT all the time at work for things like:
        – debugging code
        – generating simple code snippets that are timesucks to do by hand
        – answering questions about niche software instead of poring through a manual or digging through reddit/stack overflow/low-quality forums (ex. “How can I create a shortcut to change text colours in Logseq?”)
        – getting a more efficient result to a search query because Google search results have become increasingly low quality.

        And on a personal level, it’s been an incredible tool for language learning. I can have a voice conversation with GPT-4 in a different language about any topic I want, then get it to provide an annotated list of corrections of things I said incorrectly during the course of the conversation.

        I think a lot of the criticisms are from people who a) are using the free version of GPT, and b) think it’s only utility is a prose-generator. A lot has changed in the past year.

        1. Salsa Verde*

          I have thought about using one of the AI foreign language learning tools, but how would I know it’s correct? What if it’s telling me the wrong words for things, the wrong agreements between subject and verb, etc.? I can tweak AI wording in English, but I’m too concerned that it would teach me a foreign language incorrectly.

          How do you get past that concern? Or how do you confirm it’s telling you the correct things in the other language? Is there a specific tool you are using that you have confidence in? I’d love to know, because if it does work, it seems really helpful.

          1. k.*

            So in my case I’m familiar enough with one target language (French) to understand why something is a mistake once it’s pointed out (like I can’t get out of the habit of putting an indefinite article before a profession in casual conversation even though I’ve known that rule for years) and the correction is typically a rule I “know” already but struggle with implementing in practice.

            My other target language is my partner’s heritage language so it’s easy to get a gut check. (Of course my partner is also a fantastic resource, but unlike ChatGPT does not have an unlimited amount of time and patience.)

      16. Bitte Meddler*

        I have used it to help me get started on writing an audit program / audit report. I never use any identifying information. I find it’s easier to edit something that is already in existence than it is for me to invent something. I have really bad writer’s block when presented with a blank page.

        So my ChatGPT prompts are things like “Write an audit program for an inventory management process”. I get enough back to use ChatGPT’s framework and a lot of the wording.

        But I read every single word and every single sentence, making sure that the end product makes sense and is what I’d hoped to create if I didn’t have writer’s block.

      17. Lea*

        Right I’m so glad I’m too old to even consider doing this at least ‍♀️

        Btw if they are a ‘best case scenario, your office is one that collects input from people and then lets it sit unaddressed’ company this is probably why nobody cared enough to provide info

      18. Ally McBeal*

        This is so off-base. ChatGPT can be used ethically for plenty of tasks. I work in media and frequently use it to jump-start my brainstorming when I’m writing interview questions, or to help me construct an outline for a story. Sometimes I’ll ask it to give me a 101-level rundown on a topic I’m not familiar with, which I’ll use as a starting point for actual research. Recently I had to create a PowerPoint for an internal training (one of those “other duties as assigned” things) and used an AI image generator instead of paying for stock images.

        1. LizB*

          So instead of paying for images that a photographer and model got paid for their labor to produce, you had a computer smush together the stolen art of thousands of human artists? I don’t see how that’s an example of ethical use of AI.

        2. Nodramalama*

          Thats not a great example considering you’re using AI to get around paying a person for work they’ve done that AI has essentially stolen

      19. Ann O'Nemity*

        “Specifically “never” and “don’t” respectively.”

        Same bad advice I heard in the 90s about using the internet.

        The AI horse is out of the barn. The question is not if we use it, but how we use it.

      20. Bruce*

        My company uses AI for very specific technology tools and it is forbidden for anything else. A big reason is we don’t want any leakage of customer data into a some AI data base!

      21. ThatOtherClare*

        I use it for brainstorming a lot. Sometimes the suggestions are a decent starting point for me to fill out further. Sometimes my reaction is “Definitely not that!”, but it gives me inspiration on how not to do something.

        It’s a tool like anything else. Just because some people make PowerPoint presentations with red text on a blue background and each letter flying in one by one from the side of the slide doesn’t mean we should never do presentations at work. AI language generation models are the same. Use AI if it makes your work better. Don’t if it doesn’t. It doesn’t need to be controversial.

      22. ArtK*

        Absolutely. It is not “intelligent” in any way, shape or form. That’s all marketing garbage and it’s going to get people killed. LLMs have no concept of ‘truth’; all they do is make statistical associations between a prompt and some training data. They have no idea whether that data is valid or not. This is why LLMs lie (or, to use the euphemism “hallucinate.”) Unfortunately, in their rush to bring “AI” to market, people like ChatGPT have trained their models on whatever they could find, including Reddit posts. They’ve done *NO* curating on the training data which means there’s probably *more* garbage there than actual facts.

        An LLM can be useful if trained on good data and for a limited domain. Trying to be all things to all people means that the existing ones are useless for everyone.

        That’s not even addressing the ethical/theft issues.

      1. bertha*

        That’s not helpful. I have run into employees at my work who are literally copying and pasting ChatGPT into finished products. They aren’t even removing the lines that come from copying and pasting.

      2. Spencer Hastings*

        Ah yes, I can improve my efficiency so much by using a hallucination machine that runs on plagiarism and copyright infringement — I’ll get right on that, boss…

    1. I should really pick a name*

      While that’s not a bad idea, it doesn’t really apply to this situation.
      I think “don’t fake results” is a given for most people.

      Why it wasn’t for the LW is something they’re really going to have to do some introspection about.

      1. Hannah*

        Ironically, one of the first things I used it for was to generate fake results. I was teaching a class on how to code data and I very much didn’t want to use real data – so I had it come up with about 50 “responses” to a fake question. I then used that to pair down into about 20 pieces that I used for my class.
        Fake is ok so long as you let everybody know it’s fake!

        1. Leaving academia*

          It’s also really good at creating fake student work for “find the mistakes (and good parts)” and practicing peer feedback. It’s really hard to write fake student work, I used to use things from when I was a student, but most of what I still have is from my senior year and already too well written for a sophomore level course. Students are also way more willing to give real feedback when they know it’s ChatGPT (or that I wrote it) than when it’s another student. I did have to tell them to find mistakes in addition to ChatGPT repeatedly saying 8-1=3 (I actually know exactly why it did that, the version of the question with 7 is far more common than the one with 3).

          I was also in a joint math/cs department, so I’m very used to the faculty suggestion for things like generating fake data or work for test questions, writing basic code, commenting code, etc. The bigger discussion was how to introduce “learn to do it yourself first, then learn to use various LLM tools to do it more efficiently later in the semester.”

      2. Antilles*

        Yeah, in this case, ChatGPT is almost a red herring.
        This exact same fake data could have been pulled at any point in the past couple decades using Google to research “common team cultural issues” and using that. Or the even more low tech version of just flat out making up the data and claiming that it’s based on a phone call which definitely happened, not sure why you can’t remember it.

    2. Betty*

      I do research involving LLMs (the technical name for the technology that Chat GPT uses). The biggest thing I wish people knew was that that tech is good at *summarizing* information that it has/you provide, but (1) it does not actually *know things* (2) it is a technology to produce something that looks like what a good answer to the question would be, which is not the same as *being* a good answer. Also, that any interaction you have with it is providing the company with free training data that it will use and store forever (so lets’ hope none of those emails involved anything proprietary…)

      The idea of asking Chat GPT to *summarize* a long email thread isn’t totally ridiculous as a way to help a human get some perspective on the key themes/recurring ideas, and I think the OP might be able to say something like “I thought Chat GPT might be able to help me find some common threads in the responses I was getting, but I got carried away and used its output as if those were the actual responses from the team.”

      1. Czhorat*

        Yes. It is a tool.

        It is NOT a mind.

        One problem comes when some of us mistake it for an intelligence because of its ability to carry on a conversation; this kind of natural language chat is something we’re only accustom to when it comes to people with brains. It’s weird to think of a mindless but complex entity making conversation.

        1. Justin*

          Yes, I’ve used it to rearrange info I’ve given it. “Hey, here’s the balance for a fake company. Turn this into a ten question quiz based on these numbers.”

          Never an answer to anything, I go to actual research places for that.

      2. Arthenonyma*

        I don’t get the impression the OP gave it the whole email thread though? This reads to me like they literally gave it their email full of unanswered questions and used its answers.

      3. Kelly L.*

        Yes!

        I’ve used it exactly once, and it was because a friend basically dared me to put a story idea into one of the AI bots and see what it would come up with. My original idea involved Greek mythology. The AI bot spit out a beautifully written blurb, talking about Poseidon’s daughter, Persephone.

        That is…not correct. But if you didn’t know anything about Greek mythology, you might not know that, and it sure was pretty writing.

      4. Bruce*

        That whole “proprietary” thing is why LLM AI is absolutely forbidden at my company! I don’t work with AI myself, but we do have tools that use it for coding and such. For the LW sake I hope they don’t have to fess up to actually creating the responses from AI… your suggestions may be plausible if they have to!

      1. Siege*

        Clarify this for me real quick, I’m about to add the gasoline to my pasta and the sauce has mushrooms I’ve been assured by an LLM are button mushrooms.

        1. CowWhisperer*

          *eye twitches*

          It’s fun to ask ChatGPT to teach you ASL phrases. It kinda picked up on the ideas of hand shape, palm orientation, location and movement – but doesn’t collect that you need to be aware of all of those at all times.

          At one point, it instructed users to point their fingers upward towards the ceiling while having the palm downward. That’s not how those joints move …

          1. kt*

            Similar — LLM-generated knitting patterns! It understands & can reproduce “the format and language of a knitting pattern”. It does not understand anything about the topology or physics of the knit object constructed according to a pattern written by a human.

      2. Irish Teacher.*

        I actually saw something on facebook yesterday (don’t know if it’s true or not) that said if you ask ChatGPT how many rs are in the word “strawberry,” it will insist there are two.

        OK, I just asked it how many rs are in strawberry and it replied “The word “strawberry” contains two ‘r’s.”

        So…it is true.

        1. Siege*

          I have a screenshot of an LLM insisting there are 4 n’s in mayonnaise. What’s both interesting and completely d*mning about it is that every n it thinks is in mayonnaise is a valid, common placement of n in English words, even though the words it claims it’s counting n’s in are not real words. It identifies the second n in mayonnaise as one, and then it’s counting n’s in mayonnainse, mayonnaine, and mayonnaisne. Or, more accurately, rinse, main, and sneak.

          Pattern-matching, thy name is LLM.

        2. Ann O'Nemity*

          I tested to see if this was true. And oh my, ChatGPT said strawberry had two r’s. I had to teach it that there are in fact three r’s.

          1. Tangochocolate42*

            I just tested this, got the same response, asked it to identify where they were. Well it got the second r correctly, but insisted the 5th letter was r! That’s shocking!

    3. Student*

      I think this is a deeply disingenuous take.

      These large language modeling tools are very good at telling people what they want to hear. That is exactly how they are trained and what they’re designed for. They put words together in a way that makes humans respond positively – that is the objective! They are not repositories of knowledge and they were never meant to be. They are tools for better bullshitting. Why bullshitting? Because good bullshitting helps sell things to people that don’t really need or otherwise want them. It’s cheap and effective.

      The people who are most effective at using tools such as ChatGPT already know this. There were some well-intentioned fools at the start of this trend, who jumped at the opportunity to use state-of-the-art tech without really understanding what it did. But, the articles, stories, anecdotes, and first-hand experience on the drawbacks, limits, and downsides of large language models are copious. Anyone into “cutting edge technology” and early adoption is well aware of what these tools really do by now. The people still using these tools know they “hallucinate”, and we should not pretend this is some innocent mistake they’re making any longer.

      This letter-writer knew the tool was not providing useful, actionable information – it was providing bullshit. We know that precisely from the OP. Look at what the OP says!

      “Here’s where I really really messed up”
      “in a last-minute panic, I put the email into ChatGPT”
      “Now I’m terrified about when my bosses find out”
      “I know I did the wrong thing.”

      This is a person who lied and knew it the whole time. They did it with the shiny new lying-tool that crafted a plausible lie for them. That’s why the ending advice from AAM is that they need to come clean to the boss about the lie – not recommendations on better on-boarding procedures or new training.

      1. kt*

        See Hicks, M.T., Humphries, J. & Slater, J. ChatGPT is bullshit. Ethics Inf Technol 26, 38 (2024).

        1. Chuffing along like Mr. Pancks (new name for Reasons)*

          Extra points if you used ChatGPT to generate that citation.

          1. Siege*

            In a highly amusing twist, it is actually not ChatGPT; this is a real paper. It references Frankfurt’s “On Bullshit”, a slim volume I’ve had on my desk for almost 20 years, that examines bullshit not as specifically lies, but as speech intended to persuade regardless of truth.

            1. Bruce*

              The paper is open access, I’ve posted the link but the comment is still under moderation

            2. Chuffing along like Mr. Pancks (new name for Reasons)*

              Oh, that’s amazing! I read “On Bullshit” during, um, a previous President’s tenure when it seemed apotheotically relevant. I will take a look at this, thanks!

    4. ferrina*

      My workplace just added AI into our onboarding process :) Some of us use AI on a daily basis. The training covers data privacy basics (i.e., only use company-approved AI for company work, including brainstorming) and what AI is. Like Pizza Rat says, AI doesn’t actually know what it’s saying- it’s predicting the next likely word. It’s not a psychic, it’s a summarization tool.

    5. Red Herring*

      I think ChatGPT in this letter is a red herring. ChatGPT was the tool used to fill in gaps in the information from the team that was not being cooperative. However, the OP could just as easily have said, “So, in a last-minute panic, I sent the email to my friends from college. Their answers sounded plausible, so I used it in the presentation.”

      The main issue here is that OP based the presentation off of information they claimed was provided by the other team that wasn’t.

      1. Drowning in Spreadsheets*

        The main issue here is that OP based the presentation off of information they claimed was provided by the other team that wasn’t.

        This is true, but I think using ChatGPT makes it more egregious than asking people other than the designated source of information. ChatGPT has been shown to be drastically wrong in plenty of scenarios, so not only was the OP submitting data from the wrong place, they risked presenting horridly wrong data that could seriously muck up the next phase of the work.

        1. Hiring manager and burnt out mom*

          I agree that ChatGPT can provide horribly incorrect, dangerous, and discriminatory information. (I’ve seen AI say that peanut butter cookies are safe for someone with a peanut allergy.) However, there is no guarantee that random people being asked for input aren’t also ill-informed. I guess that’s why I feel like the misrepresentation is more critical than the source.

        2. amoeba*

          I mean, I assume they read over what ChatGPT gave them and made sure it made sense in the context, not just post it blindly!

    6. lilsheba*

      It needs to not be used at all it’s a form of cheating, as this letter proves. We got along all these years in the office without it, we can continue to do so.

    7. TheBunny*

      It really doesn’t. It should be like Wikipedia… go ahead and use it as a jumping off point, but it’s never a cited source.

    8. Another Hiring Manager*

      +1000

      I was interviewing someone that I’m pretty sure was using Chat GPT or the like in an interview. Almost every question had a pause and some typing and some of the answers didn’t make sense. It’s one thing to take notes in an interview, but the panel was all sure that wasn’t what the candidate was doing.

      “Your resume says you’ve used tools A,B,C, and D for your work. Which one is your favorite?”

      It was a bit scary that someone had to ask an app how they felt about something.

      1. I Have RBF*

        LOL.

        My answer would be: “All of them but none of them. Each tool has its strengths and weaknesses. A does ….”

        No, I’m not a bot. I just seldom have favorites.

    9. RT*

      I’m a teacher and I use it to make test questions :) but I ALWAYS vet them. And I limit it too – just multiple choice ones.

    10. Roland*

      I feel like “don’t lie” doesn’t reeeally need to be part of training materials for adults.

  1. BatManDan*

    Nooooooooooooooooo! (Sorry, OP, that’s all I have for you….can’t imagine what will happen next or give you any meaningful guidance to recover).

    1. Van Wilder*

      I would be doing what Alison suggested and “clarifying” that this is what I inferred or interpreted based on the limited info I got from the team. But you definitely need to come clean that this is not what the team explicitly said. Or things are about to get really confusing for all involved. And agreed, loop in your boss on how to deal with the uncooperative team.

    2. ariel*

      I know, I’m nervous sweating just thinking about this. OP, come clean, you’ll at least sleep better – hopefully you will be able to solve for what needs to happen next and can rebuild trust.

  2. Sasha*

    Oh no. I feel for you. Though I do think you need to dig an little into why you felt telling your manager / asking your manager for help wasn’t an option earlier in the process.

    1. Cyborg Llama Horde*

      Yes. Your manager should be there to support you when the people who need to give you the inputs to do your job aren’t cooperating. Either you have a manager that you have correctly assessed won’t do that for you, which is a problem (especially in one of your first jobs, which is when you’re learning professional norms and often need a lot of support), or you on some level feel like you can’t ask for help, which is a problem — you need to be able to ask for help, especially for things that require manager-level clout, when entry-level clout isn’t working.

    2. Awkwardness*

      This, and

      I think it’s really important that you figure out (… ) why ChatGPT seemed like a reasonable solution

      OP, no processes are the same and there is no standardized answer.
      In order to do process improvement, you have to really understand why processes are what they are instead of cheating your way through with platitudes of process improvement.

      1. ferrina*

        Yeah. Unfortunately, this is the corporate equivalent of a Mr. Bean mistake. It’s going to be tough to come back from this.

        OP really should have been asking their manager for help, and I’m also wondering why they didn’t.

    3. Beth*

      Yes, this is the real crux of the issue. A lot of people are talking about ChatGPT and its limits, and I think that’s a useful conversation in general…but OP, the issue with your specific situation isn’t the tool you used, it’s the underlying goal you had for that tool.

      Why did you feel like you had to make up info for this presentation? It sounds like your team was aware that your first interview with them wasn’t fruitful. You tried some good follow-up strategies, and it just reinforced that this team really doesn’t want to talk to you. I’m sure you’re not the first one on your team who’s encountered this. Do you feel like you can’t go to your teammates or manager when you’re encountering problems? Have they shown that they won’t support you? Do you have trouble asking for help in general?

      I agree with Alison that you’re going to have to come clean about this, and it’s probably going to be a big deal. But showing an understanding of what you did wrong, as well as an understanding of what you should have done and a proactive plan to do it right next time you’re struggling, is a good foundation to at least start rebuilding on.

    4. Snoodence Pruter*

      Yes – this was never about just having *something* to hand in by the deadline. This was about obtaining specific information from a specific source. It’s not LW’s fault she couldn’t do that, but this wasn’t a good solution. And LW clearly does recognise that, but still panicked and lied in the moment rather than communicating the issue to her boss. Is that because this boss is intimidating or unhelpful? Because a previous boss was? Because LW is still stuck in school mode, thinking that in a pinch it’s best to at least hand something in and get what marks you can?

    5. I went to school with only 1 Jennifer*

      > I’m two years in my first full-time role.

      I think that’s a lot of the answer right there. A lot of folks here can attest to not realizing at first that managers are not just work-assigners, but also work-expediters. That managers expect to be told when someone can’t complete an assignment because someone else is being completely unhelpful.

      The stuff in LW’s 2nd paragraph is exactly what they should have taken to their boss. “I’m trying to get answers and they’re either blowing me off or completely ignoring my stated scope of responsibilities; what can I do here?”

      1. Melicious*

        Yes, this is an important work thing to learn. I used to tell my new to the work world employees (who would often apologize for interrupting me to ask questions) that a very important part of my job is to make sure they have what they need to do theirs.

      2. Toxic Workplace Survivor*

        I have an employee a few years into their career with whom I have had to do some coaching about workplace norms and I can say … The idea that a manager should be a part of the process along the way for some kinds of work is very not-intuitive for some people. OP it is Not Great but you are not alone.

        In my field we do a lot of back and forth editing where the staffer has the detailed information and I know what the stakeholders are looking for out of the final product. Matching the output to the ask is a huge part of my own role. If my staffer was struggling to get the information or wasn’t sure if A or B was the right area of focus and then went ahead and produced a report without checking in with me, it would be so much more work for me to gut the entire report than if I could redirect them early in the process.

        All of this is to say: OP I completely see how this happened. Deep breaths. Now you move forward. Some more thoughts:

        If your boss is engaged with you and your team and generally a decent boss, you go to them and say “Look, I made a big mistake and want to talk about it. I get that I messed up and we can work to fix it now but after we deal with it, can we have another conversation to talk about how it happened? I’d like your help getting straight where I should have asked for help and how to avoid something like this way earlier.” If they are willing to do that, really listen to their feedback and find a way to make it a process you can follow next time. Even if it is a situation where she has to keep a closer eye on your work for a while, know that it is in everyone’s best interest. Once you are more used to each step of the process, you will be able to do several steps before checking in about how you are doing. But you will perform better with more frequent check ins that allow you to change course earlier. Nothing frustrates me more than when one of my reports does a whole bunch of work and I needed them to talk to me like 10 steps earlier in the process.

        If your boss is more of a hard ass that made you fear approaching her earlier on … well it will be harder to come back from, though still possible. Make the confession ASAP so she can handle whatever fallout she needs to and think about asking her to check in sooner on other projects anyway as a way to show her you are trying to learn from you mistake. Not asking for help because you think you know better or because you think you are supposed to do it all by yourself with no help is a common issue with some entry level people. The more confident employees are much more inclined to ask for help because they realize other people can be huge resources and help the work be better and faster. Work toward being one of them!

        You can start looking for peers and mentors elsewhere at your company or in your field with whom you can workshop issues like this going forward. Some of the best problems solving advice in my career has come from the peer level.

        And finally, OP, I do worry a little that your other department members didn’t help at all when you said you were having problems. If you can, next time ask more specific things: Have you ever worked with a group who doesn’t give you anything? Do you have any go-to questions when people tell you everything is fine? Have any of you worked with (specific person) before? I’m getting a weird vibe and wanted a gut check if that is just how they are or I need a new approach.

        Good luck and let us know how it goes.

  3. Nonsense*

    You need to ‘fess up, OP. You presented solutions based on completely made up information and that will get found out as they start introducing the solutions. The sooner you go to your boss and admit what you did, the less of an impact this will have – especially if you can clearly show that you recognize there was a breakdown in communication on both sides. Owning a mistake, acknowledging how and why it happened, and showing that you now realize what the better option would have been will reflect much better on you than letting this fester.

    1. Vveat*

      Leaving aside the ChatGPT issue, have you considered why you are not getting that information? Unless the team has a personal issue with you, are you asking the right questions – or rather, are you solving the right problem? Is that a problem for your team or for theirs? I would suggest going back at least once on the question what we are solving for, and do they feel it’s a problem. (I am a consultant myself and sometimes you get that going in with the wrong premise upfront)

      1. Falling Diphthong*

        Yes. It’s entirely possible that this team’s problems are in fact difficult to explain to someone completely on the outside, with little experience in what they do.

      2. juliebulie*

        Sometimes solving a problem or streamlining results in not needing as many employees. The uncooperative people may have been trying to protect someone (or themselves) from improvements.

      3. D*

        I also flagged that OP decided having the background the team said was necessary to understand was wildly out of scope. Maybe that’s true, but it also makes me wonder how narrow the answers they wanted were, and how broad without the team admitting they were broad and general.

        1. eleven plus two*

          My assumption is that OP was there to get justification to implement a predetermined “solution” she’d already come up with. The team seems like they were initially pretty willing to sit down and have a meeting and talk, but things went south when she wasn’t hearing what she expected to hear (“talking in circles”). That’s why the feedback and background the team provided was “wildly out of scope;” her scope was to sell them on a solution they don’t need.

      4. ferrina*

        I’m also a consultant and trained moderator, and my mind went down the same track.

        OP may have been given a challenge that they didn’t have the skills to accomplish. Guiding difficult conversations and soliciting feedback is a skill set. It is not something that everyone magically knows how to do- there are professionals that specialize in it. Asking the right questions in the right way is hard, especially when you are working with a hostile/busy/non-communicative audience.

        OP, you really should have escalated this to your manager or a senior member of your team when the other team refused to communicate with you. Sometimes a team refuses to work with you- that’s when you need back-up. And not in the form of ChatGPT, but a human who understands the nuances of your workplace.

          1. Dasein9 (he/him)*

            Projects like this often surprise the people driving them because they’re a lot more detailed work than they look like. I’ve been part of these initiatives a few times and there’s always a phase when the upper-management person in charge becomes unresponsive, then hands it off to someone in middle management. That person invariably (IME) sees the project as an opportunity to shine and complicates it further for the rank-and-file trying to get the documents and flow charts made.

            Looks like LW’s workplace dispensed with all that and just handed it off to someone who wasn’t ready for it at all yet.

      5. RagingADHD*

        Or has the other team become jaded by a series of shiny-sounding initiatives that don’t amount to anything but a waste of their time?

        They may simply be speaking quite literally that LW, with only 2 years’ professional experience, does not have the industry / technical knowledge to understand their processes or needs. The fact that leadership assigned a very junior person as the primary contact and interviewer suggests to me the possibility that this improvement initiative may not have the priority or resources to accomplish relevant and substantive goals.

        1. Guacamole Bob*

          +1

          I have developed a certain cynicism about “I’m here to help” type initiatives from people who are not extremely well-versed in what we do. It’s not quite what OP was talking about since it’s more about data and less about process, but I had this comic strip on my wall at work for a number of years after a variety of frustrating experiences with smart, well-intentioned people who just didn’t have the background going in: https://xkcd.com/1831/

          1. ferrina*

            I’ve been a part of a lot of these, and the only way to actually get real results is to first ask “What have we already tried, and why didn’t it work?” Then you design a proposal, then get the experts back in a room and ask “I mocked up a design. Tell me everything that’s wrong with it.” Then repeat until the necessary experts sign off (also makes the implementation soooo much easier because you already have stakeholder buy-in).

            So many people take the Timon and Pumba approach to process change- it was in the past, and now we’re here with our catchphrase and showstopping song. Doesn’t work so well in process development and change management.

          2. Chuffing along like Mr. Pancks (new name for Reasons)*

            + all the 1s

            A few years in, my second job had a huge, top-to-bottom restructure, the brainchild of the new Head Teapottist who’d started about the same time I did. We saw a process consultant who’d been brought in to streamline the project visit the teapot assembly and tea production lines–she visited exactly one time. Regular managers received increasingly panicked questions as the project approached: “They do know that by deactivating our electronic tea storage-and-sorting tool and by building walls between the teapot production lines and the final QC-and-glazing check area, that they’re going to slow every part of the process down and we’ll need more staff to support us, right?”

            There WERE physical walls splitting the work area for the multi-month duration of the construction project, and there were NO extra staff. By Week 3, unopened totes of mixed sachets and boxes of ceramics lined the hallways, stacked precariously as staff rushed past in both directions. Routine tea deliveries went unfilled, leading to emergency orders to FIND the lapsang souchong, GET it in the blue japanned 16oz pot (once THAT could be found), and BOIL THAT WATER. They need this tea on the 6th Floor RIGHT NOW. This, of course, prevented the orderly production and distribution of the 7th Floor’s teapot and beverage orders, and then the 5th’s, and the 4th’s, and the 8th’s… and forget taking an inventory of those boxes. Tea Acquisitions will just order more since these didn’t get scanned, and we’ll put those away tomorrow.

            A few weeks more, and so many complaints were tead-up that the Head Teapottist hisself showed up, flanked by other very serious people in business attire. Their burning question to those of us scrambling back and forth in the mayhem? “Have you noticed an increase in the number of ‘tea-out’ notifications in the system?”

            But this was a hospital pharmacy, not a teapot dome, and these were scattered, disorganized, thrice-ordered, miscounted, lost, delayed medications, and I’d be stunned if the mismanaged transition killed zero patients. I will never forget the Teapottist standing in a room that might as well have been on fire and asking if we were noticing extra orders come in (that clearly weren’t getting filled.) It is amazing how drastic the changes are that bad leaders will make while understanding nothing of their organization’s operational details.

        2. talos*

          +1, I often don’t even show up to these meetings with people I don’t already know, because nothing worthwhile has ever happened because of them.

      6. Tiger Snake*

        Mhm.

        This feels like OP has only dealt with square pegs before, didn’t know how to make a hole for a circular peg, and decided they’d use a 3d printer to make square pegs instead.

        The end result, surely, is that a bunch of process improvement does a LITTLE bit of benefit but on the whole leaves their client parties feeling like they weren’t listened to, that their time was wasted – and that when someone tries to fix this again in future, that they’ve no incentive to participate.
        Just like they felt before OP started, as though this is exactly the situation they’ve been in before. Just chucking the problem to the next person down the line.

    2. Dido*

      Acknowledging and apologizing for doing something wrong isn’t a cure-all. This is so egregious, I don’t know how OP can come back from this. There are obviously much deeper problems here, either the workplace is so toxic that she didn’t feel she could rope in her manager earlier (in which case she needs a new job) or she just has a very avoidant and anxious personality (in which case she needs therapy)

      1. Jackalope*

        Or she’s new to the workforce and/or this specific task and didn’t realize that was an option? Or was trained growing up not to bother the people in charge because part of the job is figuring it out on her own? I know that I was generally taught as a kid that I should try to resolve stuff on my own and not “tattle” about someone else causing me problems. There was a lot of usefulness in that ideology, but when starting work at a paid job I didn’t have a good understanding of when I should bring things to a manager vs resolve them myself. (And this is something that managers vary on a lot in terms of what they think should be brought to them.) I know the OP says she’s been at this job for a few years, but if she hasn’t had to deal with another group being recalcitrant like this she may not have realized that, “Just get it done and don’t rattle,” isn’t the right approach here.

        1. Irish Teacher.*

          Yeah. I could be completely off-base, but as I wrote below, this could be partly from looking at things through the mindset of school or college where assignments are a test and well, in primary school, if you are asked to interview your father about his job and a student goes into school the next day and says “my dad is unemployed so I couldn’t interview him” or “my dad was away yesterday” or “my dad hadn’t time to be interviewed,” a lot of teachers will say, “well, then you should have interviewed your mum or one of your grandparents or your babysitter or just made it up based on what you know about your dad’s job,” because the aim is to see if somebody can come up with good questions and for them to learn more about the workplace, not for the teacher to know the details of the dad’s job.

          And yeah, that’s not a brilliant example because I’m pretty sure the LW isn’t thinking in terms of 4th grade assignments but even at college, some assignments can have elements of that. If you are a design student and are asked to create a logo for a business, the lecturer probably doesn’t care if you create one for a real business or just make up one. It’s your design skills that are being tested.

          And often the emphasis is very much on you and if you run into problems like somebody not answering your questions, you are meant to figure out a way around it.

          In the workplace, the aim is far more usually to get the job done than to test how much ingenuity the employee has and therefore getting further clarification is more likely to be seen as a positive.

          But I can see somebody fairly new to the workplace still thinking that finding a way to complete the assignment without asking for help would be a positive thing.

          1. Lenora Rose*

            Any teachers who say that aren’t teaching well. They should be asking, “Okay, what can we do instead?” and not assuming the child will have the answer without guidance.

            This is also true of “Don’t tattle.” IME, all the pressure not to tell grownups or appeal to them to help solve a problem came from peers, not from the actual grownups who usually did want the students to let them know about interpersonal problems that are beyond their skill to solve. And yes, I know some grownups can be jerkish about the weirdest things, including ones like teachers who should know, but not helping a kid you’re supposed to be teaching with their problem solving is outright bad pedagogy.

            1. Your Former Password Resetter*

              Yes, but that doesn’t help the kids who get trained into bad habits by those teachers.

              1. Lenora Rose*

                No, it doesn’t; but Irish Teacher seemed to be talking not only as if this was a common but unfortunate experience but as if it was *correct* to treat children so, and just wrong when applied in an office with adults. I wanted to be clear that this isn’t correct.

  4. ragazza*

    Oof. I was in a similar situation years ago when a VP of marketing expected me, a copywriter, to know detailed answers to very specific and often technical questions for RFPs related to our company’s services. I was told to go to various senior leaders in different departments to get the answers, but my requests were routinely ignored and then guess who got in trouble? This is called “responsibility without authority” and it sucks. I can certainly understand the pressure LW was under, but making stuff up (which is basically what they did by including AI answers) is not the answer. Or at least not usually, heh heh.

    1. Sneaky Squirrel*

      I agree with you that responsibility without authority is a thing that happens frequently within organizations and it sucks. But I’m not getting the impression that this an example of a responsibility without authority situation because I don’t see that LW ever went to higher ups to ask for help when the communication issues broke down. They went to their team (which may or may not have included leadership) who had the same questions they had, but they never say that they had a sit down with their leadership to address their concerns.

    2. NotRealAnonForThis*

      It is an utterly frustrating thing, this. Saw it in OldOldJob. Hated that I had to do function X in order to create a financial thing, yet nobody who held the information I needed felt it at all needed to pass along the information requested…

    3. Dasein9 (he/him)*

      Yeah. Sometimes what I do is lay out what I do understand (or make something up), then ask the SME to tell me what I got wrong. Having a document that exists already to correct is easier for the SME to do than coming up with the process in the first place.

      LW may have been okay if she’d brought the generated material to the SMEs and said, “Here’s what I’ve got. It’s guaranteed to be flawed, but all you need to do is correct it. How about we schedule a working meeting to do that and I’ll share my screen so you can see what I come up with while we talk?”

      1. Project Maniac-ger*

        This is a great way to get around roadblocks, especially with very technical folks who are so deep into their craft that giving a 101 lecture is maddening to them.

        I call it the Reddit method – because if you ask for advice about a specific thing on Reddit, nobody will answer, but if you post something wrong, you’ll have hundreds of comments calling you a flaming idiot and ~ telling you what you did wrong ~ which is what you wanted anyway.

        It requires you to swallow your pride but it keeps projects moving.

  5. Red Wine Mom*

    Perhaps go back to the other team and show them what Chat came up with. Ask them to consider that *type* of a response was what you were looking for. And then hopefully they can understand your goal, and might have input for you.
    Either way, you have to go back to your manager and admit your error.

    1. Sloanicota*

      I agree that when I can’t get people to give me specifics, it can sometimes be effective to show them what I’m thinking and ask them to react to it. In the spirit of crisis management at this point I would probably go to them with the current list and say, “this was what I gleaned from what we talked about” (huge reach but maybe that feels okay to you OP?) “but I didn’t get the level of specificity I was hoping for and kind of filled a lot of this in myself. What here seems good or bad to you?” Maybe you’ll find out that a lot of it was burble anyway (“increase efficiency by streamlining processes!” – sure okay, whatever) or that most of it is fine and just one point is really off. You can then focus on the follow up of correcting the one that’s clearly wrong.

    2. Falling Diphthong*

      OP could do that with the results from other groups who did respond. It’s just a much better example: “Here’s an example of what Spouts said in response to these questions. And one from Handles.” (Possibly with details anonymized if confidentiality is a factor.) A lot of people can then generalize from the specific examples to determine what type of response you need from them.

      “Here’s what someone who doesn’t understand what you do gave as an answer you could give” is not helpful, whether the someone who doesn’t understand is human or AI. The example is likely to be exasperating.

      1. ThatOtherClare*

        You underestimate the human desire to correct others. It’s an old trope that one of the best ways to get an answer to a question out of reddit is not to post the question itself, but to post an incorrect answer to the question as ‘fact’ and watch it get shouted down by 16 different re-wordings of the correct answer.

        1. Falling Diphthong*

          While I believe this is true in many circumstances, this particular department seems to have the self discipline to glance at the bad answer, roll their eyes, and fail to respond some more.

    3. another Anna*

      I was going to give this same advice! As a graphic designer, if someone isn’t giving me enough specifics to really capture what they want, often putting together my best guess will prompt them to actually fix it, so this may actually work. Some people are just way better at responding to suggestions than coming up with them on their own, and if you throw a few wild suggestions out there, it can get them going

    4. Caroline*

      I’d be tempted to fill the slide with “Team X’s processes were to complex and had too much variation for them to propose any meaningful improvement ideas, so I think the real opportunity for improvement lies in going through their functional area and map out how their processes work and implement some standards so they have a foundation for future improvements.” IME people who can’t give me an elevator speech about what’s going on in their area either have no idea what’s going on or are trying to avoid being interfered with (and it’s my job to interfere). I may be projecting, but I see this kind of thing sometimes. Patting me on the head and telling me your big important process is too complicated for my tiny little brain grinds my gears.

    1. amoeba*

      Well, then they might have just taken a few more minutes and made up the answers themselves. Or gotten them from some generic management advice website. Not sure how that would have been better?

  6. Falling Diphthong*

    A detail I appreciated in Mrs. Davis was that the global AI confirmed it didn’t tell people the truth. It told them what they wanted to hear. User engagement was better that way.

    OP, the thing to do was tell your manager that you couldn’t have the meaning to discuss Result X because the person in charge of giving you Result X had not done so. Sometimes that means the meeting is delayed until that information is available; sometimes it means the thread that meeting was to address is dropped; sometimes it means you need to be told a different route to determine Result X.

    Like, right now I have wound up my project for the morning because I need Part Z, which I have not been sent and may not get before this project is due. I will need to put Part Z in in revisions if it doesn’t show up. I know this because the PM preemptively warned me it might happen. If she hadn’t, I would have sent a note that I was done except for incorporating Part Z.

  7. MistOrMister*

    Oh boy. I don’t have any idea how one recovers from this kind of thing. I think Alison is correct that it is really important to reflect on why going to a manager and saying the other team was refusing to respond wasn’t considered a viable option. I would have sent a few reminders/follow ups and then forwarded those emails to my boss sayinh the other team wasn’t responding and asking how they wanted to proceed. Doing a whole presentation based off made up information can be found out so easily that I think it’s a matter of when, not if. Personally, I would likely be looking for another job because I’m not sure this is survivable.

  8. Don't You Call Me Lady*

    FWIW, this doesn’t really seem like a ChatGPT issue at all. It’s falsifying information which could have been done through any number of methods.

    It may not be an issue though if the info is accurate enough, or depending on how this is going to be used, is this a high priority project etc.

    1. Lab Boss*

      Agreed in principle- but AI/ChatGPT is being dangled as the shiny new toy that can pull meaningful content out of murky e-mails, and I can see how people (especially inexperienced people) could convince themselves it was somehow different than a comparable form of falsification (like if OP had just written their best guess of what the team wanted and claimed it was exactly what the team had said). Employers should be ready to either emphasize a total ban on the use of GenAI, or be very specific in training on what it can and cannot do.

      1. Hastily Blessed Fritos*

        Ironically, I’d be more forgiving if OP had panicked and just done a best guess rather than using ChatGPT to just straight-up make stuff up. At least a best guess would be based on *some* domain knowledge and interactions with the team rather than just “these words look like an answer to that question”.

        1. Dark Macadamia*

          Yeah, the best choice would’ve been to talk to their manager and/or postpone the presentation, but if that didn’t feel like an option they should’ve just… presented what they had? “This team didn’t give me much feedback but said X and Y, which is similar to what this other team said. I’m currently following up to get more details but my impression is Z.” Basically do the processing/extrapolating that ChatGPT did but with actual knowledge of the situation.

        2. Antilles*

          As long as OP was honest about it and prefaced it as such, a best guess probably would have even gone over just fine. Like, let’s imagine that OP’s presentation started with this:
          “Hey guys, I tried to get in contact with Jim’s team and we’ve been going through some back and forth. It’s still not completely clear, but I’ve summarized the current status as best I can and made some recommendations based on my understanding.”
          In most cases, nobody would be particularly upset with that kind of update. You’d move forward with that best-guess, continue to push for more information, and adjust on the fly if/when you got the full story.

      2. CRM*

        I agree. I don’t think that OP was intentionally trying to lie. I think this was caused by a general misunderstanding of how ChatGPT actually works. A lot of people use ChatGPT like Google now, which sometimes works when a task is general and broadly applicable.

        My read on the situation is that OP wanted to find out how other teams in a similar field might respond to this question, and figured that there would be resources out there that ChatGPT could scrape from and format into the way that OP wanted. Alternatively, OP thought that ChatGPT is smart enough to generate novel insight from a string of emails. It’s possible OP actually thought that ChatGPT’s responses were somewhat legitimate, and not complete nonsense.

        Workplaces need broad training on generative AI!

        1. amoeba*

          Eh. They didn’t feed ChatGPT any e-mail chain, just the mail asking for feedback (their own mail that never got a reply). So pretty sure they were aware that it had zero information on what the team wanted because they never gave it any input from them.

  9. Sneaky Squirrel*

    Oh no! Perhaps there is a way you can reframe it; make the suggestions to your clients or point out that these are suggestions that you found while trying to do research to their concerns. However, I think ethically you have to confess that these are not the improvements your client suggested.

    While it sounds like you learned a lesson already about how to not use ChatGPT, I would also caution that copy/pasting emails directly into ChatGPT is not good practice especially if your work has anything that could be considered proprietary information or sensitive data.

    1. Dido*

      Good point, I’d be fired immediately for pasting any work product into ChatGPT or other AI, because these models train on it and will use your company’s proprietary data in someone else’s answer. I had to sign an AI policy that I wouldn’t

  10. Brain the Brian*

    I would:
    1. Email that team lead asking for a meeting — today.
    2. Explain that you had to give a presentation using the information you had from their team and that their responses had been so limited that you basically had to wing it, and tell them that — to save face on both your parts — you are going to loop them into a conversation with leadership so they can correct / flesh out as needed.
    3. Send them a copy of what you presented so they have that for reference.
    4. Loop leadership and the team lead into an email thread so the team lead can correct the record about what really needs improvement.
    5. Separately, figure out what that team lead how to improve your communication with their team.
    6. Also separately, it may be worth it to come fully clean about using ChatGPT to your own boss. I don’t think you *have* to do that to the larger group, though, as long as you acknowledge that you were winging a lot of the materials with limited information.

    Standard disclaimers about knowing your situation and so forth apply…

    1. bamcheeks*

      I think this would be good advice for a mid-level employee, but for someone in their first couple of years, I think it would be WAY overstepping and probably just create more mess and confusion. LW shouldn’t do anything else without speaking to their own manager first!

      1. Brain the Brian*

        Fair point. Lower-level employees should absolutely seek guidance more often, and this is definitely of those situations. Make that item 1. :)

        I would argue that the employee should advocate for coming clean themself to other team, which I think would seem more sincere and help preserve a crucial working relationship.

    2. Ginger Cat Lady*

      Nah, if I was in a meeting where a low level employee told me what they’d done, blamed the group for using Chat GPT to make stuff up, and expected everyone to step up and rescue him from the situation he created with his lies, that would NOT go over well with me.

    3. MsM*

      If I’m on the other team, my reaction to that is going to be “that sounds like a you problem.” Especially if – as I suspect – the reason OP wasn’t getting useful responses from them is that they think this whole internal processes improvement initiative is at best irrelevant to their actual problems and at worst is just going to make their lives harder, and this really only serves as supporting evidence from their perspective.

      1. loggerhead*

        Yes, this is an employee engagement problem larger than the specific process analysis can solve. I am not sure LW recognizes that larger issue or had the vocabulary for that. It’s also important to for LW to realize they did have a real finding—the other team was uncooperative and disinvested in the process. That would be a great question for the boss: how to handle that by changing the kinds of data you were collecting or who was collecting the data. The scope of the real problem was outside LW’s view. Going to ChatGPT was such a poor choice, and the message should not even unintentionally be that LW was backed into making it. This is just going to reinforce the idea that what comes out of LW’s team is not worth taking seriously.

  11. CubeFarmer*

    Yeah, I’m not sure why Chat GPT was the go-to here.

    On the other hand, that team seemed so negative about the process, it’s likely they won’t even notice the “solutions” or if they do, they won’t follow them.

    1. Sloanicota*

      Honestly it might have been better to fill in the gaps with your own/your team’s knowledge of the issues versus use literal AI – I would be interested to hear why OP thought ChatGPT was the move. Was it just that it felt like “external info”? Because whoof.

    2. Falling Diphthong*

      It’s making up a lie (an olden work error) without the step of doing it yourself (by outsourcing to recent tech).

      (ChatGPT mystifies me because humans will baselessly bullshit for free. You didn’t need to outsource that.)

      1. bamcheeks*

        I find it really interesting because when you think about it properly, it really forces you to name the benefit that a real person can bring. This time a couple of years ago, we were in a meeting and one of my colleagues was having a bit of a moan about the evaluation report she was writing, so another colleague as a joke prompted ChatGPT to produce the report. It produced something that could *completely* pass as my colleague’s report, including four perfectly plausible recommendations for improvement. My colleague probably spent about ten hours writing a report that was 95% identical, because there are really only so many ways you can say, “we should have involved stakeholders at an earlier stage” and “we underestimated how much marketing and comms resource we would need” and you can guarantee that these will be Learning Points for 99% of the projects we do.

        So then the question becomes: what was the point of writing 95% of the report? Why not just write three bullet points that ACTUALLY differentiate the learning on this project from every other project we’ve run? If 95% of the report is so templatey that GenAI could produce it in 20 seconds, why are we writing it? And if we (and every other organisation that runs similar projects) are making similar recommendations to ourselves in every evaluation report, why is it still getting lost in every planning stage? Some really big questions we should be asking ourselves!

        (this is not to discount all the other ethical issues with GenAI, specifically in how it’s been trained and the lack of informed consent given by all the writers, artists and ordinary social media users that its been trained on. But I do think the “if GenAI CAN do that, then what are we doing?” is a good one.)

    3. Lab Boss*

      I feel like there’s this general zeitgeist understanding that AI is somehow a magic bullet that can produce answers out of nothing, instead of just something that’s pretty good at regurgitating what it read in a plausible-sounding way. Like trusting your friend’s terrible advice because they read a lot and always sound confident in what they tell you.

      1. Daughter of Ada and Grace*

        It’s also got an issue where sometimes you can ask it a question, and it will give you a response that is not actually an answer to your question.

        So, if you ask “Should I zorpendefularize the fleebleboxen?”, and it will answer with why you should use a fleeblebox, or the history of zorpendefularization, but nothing at all about what you actually asked. (It’s particularly bad about niche topics, where there isn’t much data for it to scrape, but I’ve seen it happen when it should have everything it needs based on the current conversation history.)

      2. Kelly L.*

        I agree, and I’m also a little befuddled at the people who use it as google. Like…you already had google?

        1. Annie*

          I think the people using GenAI tools as a replacement for ordinary search are either outsourcing the processing of information that would otherwise be contained in an ordinary search, or are trying to quickly look for a “novel” solution to a problem where the “usual” solutions didn’t work or aren’t available.

          Ordinary search: 9001 results, first few pages chock full of re-hashes of the common solutions that didn’t work

          GenAI/ChatGPT: Robotic summary and synthesis of 9001 results, including both common and novel solutions (real or made-up) to a given problem

    4. Dawn*

      I think mostly because LLMs are being sold by, uh, literally the majority of tech companies now as the solution to all life’s problems.

    5. fhqwhgads*

      I’m unclear on if OP asked ChatGPT the questions it asked the team and used its answers as if that’s what the team said, or if it told ChatGPT “I asked them these questions and they gave these answers and what did they mean by that?” and then used what ChatGPT thought they meant… if the latter – still a very bad idea – but I can kinda see why they thought that might actually help? If the former, then yeah beats me why it even occurred to OP to go there.

  12. Van9*

    The ChatGPT part isn’t the actual problem, the problem is that you presented information that you aren’t sure is true and can’t back up. That’d be true even if you didn’t use ChatGPT and just guessed at the problem and presented that info. I agree that you should have (and need to now) escalated the issue to your manager or the team-you’re-consulting-with’s manager. You said you tried talk to the team lead, but when that didn’t panning out, was there anyone above that lead you could’ve gone to?

    And I’d reframe Allison’s second question to you: not so much “why did you decide to use ChatGPT”, but rather “why did you decide to make up the information?”. Or actually more accurately, “why didn’t you feel comfortable saying that you didn’t have that part of the information?” Again, in your case, ChatGPT isn’t the actual problem, it’s that you presented important info that isn’t actually actionable and that you knew you couldn’t back up.

  13. Tom R*

    I think “Oh No” is the only correct response here. I hope LW (who appears to be young and inexperienced) can learn from this because this is not a good situation.

    AI is not a magic solution to problems

    1. N C Kiddle*

      On the positive side, LW will never be stuck for an answer to “tell us about a mistake you made at work and how you handled it” questions in future interviews

  14. Peach Parfaits Pls*

    All the massive ethical issues with EVER using AI aside, chatgpt is just a red herring to soften “I lied”. There’s no difference whether you made it up yourself or had a computer string words together for you. It seems like you got so focused on the presentation needing to go well in the moment that you lost sight of the point of any of it, or of what would happen immediately afterwards.

    I’d advise to own up to it. You’ll probably have to either way, and it’s a lot easier to get sympathy for “I panicked!” when you come clean right afterwards, vs if you try to get away with it and waste a ton of resources.

    1. HonorBox*

      I panicked is a perfect way to frame this. Give your manager a step by step outline of this process, including all of the communication failures that were outside of your control. Then “I panicked and input information into ChatGPT.” I assume the LW reviewed the information and provided context in the presentation, so saying that they panicked and got help from AI, felt like it was usable and valid, is probably showing both that they’re able to recognize the poor choice AND that they were trying to make the best of a crappy situation.

      And then LW, in the future… just talk to your manager first.

      1. Mo*

        I wonder if there’s a way for OP to say they just put information that sounded plausible to them in the presentation based on the limited input they were able to get from the team – without mentioning ChatGPT. I would feel way more skeptical of an employee who had ChatGPT make things up for them than an employee who went way beyond their scope but tried to think things through themselves. Perhaps others would disagree with this take, but I think saying you resorted to ChatGPT would be a really bad look in a lot of workplaces.

        Be extremely clear in acknowledging what went wrong. You were unable to get the information you needed, you didn’t seek help, and you decided it was better to present false information than admit that you didn’t have what you needed to do the presentation. You will never try to conceal difficulties or failure to complete a task again because you realize how it snowballs and can mess up an entire project.

        1. Alianora*

          I agree – I think bringing ChatGPT into this will look worse. I feel like commenters here tend to advise total transparency on mistakes of this nature, but sometimes total transparency will get you fired.

          To be honest, I think pragmatically the LW’s best option is either to do what you and Alison said (getting ahead of it but without mentioning ChatGPT), or to stay quiet and hope it goes unnoticed (risky but high reward if the office is dysfunctional enough not to notice/follow up). Either way a job search is in order, because lying to higher-ups is a really bad look.

          1. ThatOtherClare*

            I am of a similar opinion. I agree that bringing ChatGPT into it will make it seem worse, because rightly or wrongly it feels like a very planned and premeditated form of deception when you’re going so far as to use tools (whatever tools) to help you.

            Saying “I panicked because of the deadline and so I just made up something plausible-sounding! I’m very sorry and it won’t happen again!” when you’re young and new to the role wouldn’t get you fired at my current workplace. It would lose you some trust and some autonomy for a little while, but you would be able to earn it back again if this was truly a one-off. However I’ve also worked at places where this would get you either fired or slowly ‘managed out’.

  15. Caramel & Cheddar*

    “I think it’s really important that you figure out (a) why you didn’t do that and (b) why ChatGPT seemed like a reasonable solution — because otherwise I think you’re likely to have significant lapses in judgment again. ”

    I think this is super key. Ethics of using ChatGPT aside, you’re in a consulting role, LW, and that requires being able to assess information and make judgment calls, which were not things demonstrated here in relation to this problem. ChatGPT is kind of a red herring here, really, since the outcome is the same as if you had personally made up answers wholesale and put them into your presentation due to the time crunch.

    I do wonder to what degree deadlines/time management are part of this. You asked these folks for their input many times, but I wonder if you thought you had more runway than you did ahead of the presentation deadline but quickly ran out once the other person changed the meeting they agreed to back into an email. What might have felt like enough time to suss out their challenges suddenly became zero time because you were put back squarely into your original problem.

  16. bamcheeks*

    What was I supposed to do?

    LW, take a good look around at your work environment. You made several good-faith attempts to engage with a team that wasn’t engaging, and didn’t get the information you needed. That was the point where you should have gone back to your manager and said, “I’ve tried, X, Y and Z, but the digital llama team isn’t engaging. What should I do next?” Your manager might have said, “Try A, B and C and let me know how it goes”, or “Gah, this is typical of the Digital Llama team– let me talk to my manager.” But when you have been given a task which depends on other people, and the other people aren’t playing ball so you won’t be able to achieve it– you should always be able to escalate that to your manager.

    Take a really good look at why that didn’t seem the right course of action. Had you left it so late that it would be embarrassing to admit you hadn’t done it earlier? Was there something else you know you should have tried and you didn’t want to admit you hadn’t? Do you have a tendency to feel like you have to comply with requests no matter how impossible, and you were scared to admit to anyone you didn’t succeed? Is your manager unapproachable about things like this and would have given you hell even though you had done everything you can think of?

    If it is that your manager is unapproachable and you’re scared to admit to not getting information from someone, that’s a sign of a unhealthy environment, and in that case it’s a really good idea to look for something better. For all the others, those are things to work on yourself! They’re not uncommon feelings to have as a relatively inexperienced worker, but they are definitely things to work on and get past.

    1. Not Tom, Just Petty*

      I wrote a longer comment below about OP’s reasoning still being influenced by group projects in school. If teammates aren’t doing their share, someone steps in and does it because GRADES and FAIL.

      1. SweetestCin*

        This was where my brain honestly went too – group projects, even in university level, because the adult in charge of the class doesn’t CARE if someone is blowing off the project, you’re all getting the same grade. And it seems like there’s always an “evil-CEO-caricature-in-training” in every group who uses this to their own advantage.

        1. Irish Teacher.*

          And they also often don’t care if the problem you write about was really the issue or not. If you are asked to “write about a problem your group faced and how you solved it,” so long as the problem is plausible, you can make one up and get just as good a grade as you would if you had a real example, because the teacher or lecturer wants to see you demonstrate your problem solving skills; they aren’t interested in the problem itself in the same way a workplace is.

          My feeling is that the LW approached this like a test – can I show that I can make a good presentation on this issue? – which is kind of understandable after maybe 16 years in education and only 2 in their career.

        2. Irish Teacher.*

          And I will say that as a teacher, I always tell my students that if they want to work collaboratively on a project to make sure each has a clear role, so that I can mark them separately and give them different grades if that is what is deserved. But I know that’s not true for all classes.

      2. Awkwardness*

        My mind went to possible internships and if it made the impression that consulting is working on such a high level of commonplaces, that no one would realize actual input missing?

        1. bamcheeks*

          There was a discussion recently about whether the point of an internship was to do non-business critical high-level work (“develop a new marketing plan for a project we’re probably not actually going to launch”) vs business-critical lower-level work (“go through our marketing list and make sure all these contact details are up to date, then send this press release out to them all”.) I’m a firm advocate of the latter, because I think learning what business-critical is and how it all fits together is a much harder skill to learn in a classroom, and that’s how you should differentiate an internship from your academic course. This feels like the kind of mistake that’s quite a logical progression if you had the former kind of internship!

    2. Lab Boss*

      I think all of this is very important. OP says they’re only 2 years into their first full-time role. “What was I supposed to do” is a legitimate question, and I see so many young professionals who think “complete the assignment” is the be-all and end-all goal. Sometimes you’ll be given a goal that turns out to be impossible, for plenty of reasons: uncooperative coworkers, changing circumstances, or just being more complex than expected. Figuring out how to address them is a key skill- and sometimes all you can do is go back to your boss and say “Hey, I can’t make this happen, even after trying XYZ” and then collaborate on what comes next.

      1. bamcheeks*

        “complete the assignment” is the be-all and end-all goal

        I think that’s a really good insight, and quite possibly what was going on! I do a lot of work with new graduates and one of the things I’m focussing on a lot is to make sure they understand the purpose of the work and how it fits in with a larger organisational goal. There is a qualitative difference between “do a presentation” in university where the point is to show that you learned the material and can do a presentation, and “do a presentation” at work where the point is for your audience to go away better informed about something, and actively change something because of the information you provided. Superficially it looks the same, but the context is very different and not everyone gets that instinctively.

      2. Lomster*

        Yes, I agree that this is a young person thing (not all young people, of course). It’s like, “if my assignment is completed, it doesn’t matter if it’s good or right or accurate as long as it’s done.”

        I used to be the same way. “Well, it’s done so no one can get mad at me” instead of “how do we get the information we need to do this in an effective way.” If I have a 24-year-old coworker assigned to send an email, she’ll send it, even if something’s off, so the job gets done. Rather than come to the team and say “I can’t send this this way because if we send this to customers this week and that other email next week we’re going to look like idiots.” If she sent it she wouldn’t get in trouble, because she will have completed the task, but it wouldn’t have overall good for the team or the company.

        Anyway, young LW, think about how you can take initiative to get the job done for the good of everyone rather than just as a task to check off your list so your boss doesn’t get mad.

        1. Brain the Brian*

          I disagree that this is a young-person thing. I have coworkers who are 60 and still think this way.

          1. bamcheeks*

            I think saying that it’s a young-person thing means it’s more likely to be something they can learn from and improve on. There are definitely going to be some people who just never get that context and way of thinking, but some mistakes are normal to make when you have less experience and that’s part of what experience teaches you.

            1. Lab Boss*

              It’s especially normal for people coming out of school. In school, the point of your assignment is (almost) never for something useful or actionable to be created, it’s for you to have done the assignment, emphasis on the YOU doing it. The idea that in the workplace, the downstream effects of what you do are probably more important than the specific task you’re doing, is something that needs to be taught.

            2. londonedit*

              I agree. Often when we have a new editorial assistant, we have to spend time teaching them that sometimes you can’t just rattle through tasks for the sake of getting them done, because there are various people involved and various things in play. They see a deadline looming for a cover to be briefed to Design, for example, so they’re asking ‘Shall I just do this cover brief? Shall I send it to Design even if we don’t have the author’s approval? It needs to be done by Thursday…’. And we need to explain that no, it’s more important that the author approves the brief before we send it, because otherwise we’ll have problems further down the line because Design will have spent time and money coming up with cover options and the author will hate them because they weren’t consulted. Missing the deadline by a few days matters less than having something that’s agreed on and actionable. But someone who’s just left university, as most editorial assistants will have done, is understandably more worried about deadlines, because if you miss a deadline at uni then you fail or you get marked down or whatever. It’s just retraining that mindset and making sure they’re asking us whether it’s OK before they launch in and send something off.

              1. bamcheeks*

                Missing the deadline by a few days matters less than having something that’s agreed on and actionable

                Ooh, that’s a really good example that I might steal for training!

        2. Sloanicota*

          This sometimes comes from the top though. Sometimes leadership will drop the hammer if something doesn’t get done on schedule, but tolerates a very low bar much better – thus, they are incentivizing things getting done even if poorly, and this kind of panic response can ensue.

    3. Hibiscus*

      It really strikes me that there was a shadow assignment here. Part 1 was try to pry the info needed out of this group, but the real goal was that this team is difficult/has been burned/whatever and the OP was really supposed to befriend and establish working relationships and get buy-in.

      1. Awkwardness*

        This is what I find a bit sad – OP lost the chance to really connect to that team about the problem.
        They’d probably call me pushy or argumentative. is what stood out to me.

        1. River*

          Agreed. Some people tend to view clarification or more questions as that their manager isn’t agreeing with them or looking to pry answers from them. In essence, yes the manager is looking for answers, but at the same time it’s an opportunity to build a line of communication with their team. My take was that the team is more Q&A, direct answer and response, yes and no, instead of having a discussion. Also “I could explain, but you wouldn’t understand it” also was a little off-putting.

    4. Chidi has a stomach ache*

      LW, your role is a lot like mine, and I do run into this from time to time. The communication issue is definitely something to raise to your manager sooner rather than later – that’s typically my first step.

      Second, I would look at the following:
      1) They gave you feedback you felt was “out of scope” for your work. This happens to me all the time. What I do here is to take a step back and look for patterns in this particular set of feedback, and then try to re-craft my questions to identify those patterns and ask for their application to what is within scope. Also — sometimes the “out of scope” issues are exactly what is preventing progress in the in-scope issues. That might still be worth flagging for leadership, even if you can’t fix it. In my field, we do volunteer recruitment, and one of the major obstacles we run into is the overall decline of volunteering in the US over the past few years. I can’t fix that, but I can flag it in our planning processes to make sure 1) reasonable targets are set and 2) develop procedures for staff in volunteer recruitment to figure out when to abandon a lead so we don’t waste too much time and resources.

      2) When faced with “too abstract” change your interview question tactics to force them to think practically. I often ask interviewees to walk me through a process step by step and that often reveals the areas they need help with or that we can improve.

  17. I should really pick a name*

    In the future, if you’re not getting the information you need, could you put together a proposal and state what assumptions you made?
    Some people are better at providing feedback when they have a plan in front of them than they are at describing what issues they’re having.

  18. JP*

    This is so tough, and I really feel for the letter writer. I thought the same as Alison – is there a way to portray this as you own best guesses based on the non information provided? I know it’s not the whole truth, but if it’s possible to leave the involvement of ChatGPT out of any confessions, I would. You didn’t have sinister intentions, but mentioning it will probably raise more hackles.

    Otherwise, my advice is to stare blankly at the wall for a while, feel your anxiety rise, contemplate fleeing across the country and starting over in a new place with new people where you’ll never ever make any mistakes again, come to your senses, and resolve to deal with the situation as it is. I remember a big screw up of mine from a few years ago. I spent time pacing the empty parking lot behind the office crying because I thought my actions were going to put half the company out of a job. Turns out I was catastrophizing just a smidge and everything worked out. But running away into the woods was tempting for a bit.

  19. Not Tom, Just Petty*

    I feel you, OP. You had a job to do and you felt that not completing it would be your failure. It is not. This is not a group project in school. If your coworkers aren’t giving a good faith effort, you don’t have to step in and do all the work.
    You can ask for help.
    You can even ask for help determining if the others aren’t giving a good faith effort.
    Again, I understand the feeling. Two years of work doesn’t erase 20 years of “present SOMETHING SOMEHOW or you will FAIL,” but you will get there.

    1. HonorBox*

      Yep. There were two types of professors I ran into in college. The first didn’t care about the inter-group dynamics of group projects. The project being completed was the key. The other wanted the project to be completed, but did consider situations when someone clearly pulled more weight than the rest of the group. If it was people dropping the ball versus someone taking control, those who dropped the ball received different feedback and grades.
      The latter group of professors had been in non-academic roles more recently and had a better sense of how the working world worked. And if you came to them like you would a manager to discuss information or work not flowing correctly, they’d focus on how things work in the “real world.”

    2. Margaret Cavendish*

      Yeah, OP was definitely not set up for success in this one – obviously it was wrong to use ChatGPT, but I can absolutely understand why they were so frustrated. They were given a job to do, and no support to actually do it. Maybe the other team was deliberately being difficult, maybe the questions OP was asking weren’t clear, maybe it was something else. But regardless, I wish their manager had been more involved from the beginning – even to let OP know it’s okay to ask for help if things aren’t going well!

      1. Sloanicota*

        Also, sometimes as a newer employee OP doesn’t know the back story. “Why won’t these people share the insights they have so that we can make changes that improve their experiences of their job?” But you don’t know that your department is constantly asking for feedback that never gets used, or that last time someone raised something innocuous that got them in big trouble so now nobody wants to say anything other than banalities, or whatever.

        1. Not Tom, Just Petty*

          I am very interested in the back story. The manager who requested a meeting then changed it to an email. Is she a Bond villian or just jaded?

        2. Orv*

          I ran into that doing internships. There’s nothing quite like being a young intern sent unaware to talk to some union guys who will see you as an extension of management! I just noped out of that one and left. I wasn’t being paid enough to deal with that kind of macho posturing.

    3. RVA Cat*

      This – all those years of school plus Anxiety telling you to keep the lie going about being from Michigan, not Minnesota.

    4. Toxic Workplace Survivor*

      This thread, OP. This is the one. Many of us have been there with a panicked response to something you believe to be solely your problem, and many of us have had to coach newer employees through that experience too. It is a tough lesson but an important one.

      When you aren’t getting what you need, it’s time to send up a flag. Others can help and even more than that, your supervisor both wants and usually needs to know about it!

      And if you learn out of this mess that your organization DOESN’T want you to escalate to your manager and thinks making it up as best you can in this situation was a reasonable response then, well, you don’t want to be working there.

  20. theletter*

    I would say the fact that they didn’t want to talk about their processes tells you a lot about their problems.

    Was the proposed solution ‘improve communication and transparency,’ by any chance? B/C you could definitely say you came up with that yourself based on the team’s answers.

    Otherwise, maybe you could shotgun ‘The Phoenix Project,’ then redirect the solutions outlined there as the solution – say new stuff has come to light after the presentation, and re-iterate the need to improve communication and transparency.

    1. Guacamole Bob*

      On the one hand, maybe it shows their problems. On the other hand, I have had junior staff on other teams who were charged with collecting input from my team or having us complete our part of some larger project, and sometimes it’s not really a priority for us in the context of our overall work, even if it’s important to that analyst. Sometimes the person doesn’t really understand our work and therefore the answers we do provide don’t make sense to them or get misinterpreted, or in reverse what they’re asking for isn’t something we know much about and so our input isn’t effective.

      I like to believe my team handles it much better than the team that OP was trying to work with, but these challenges are a pretty normal part of consultant type roles.

      1. Warrior Princess Xena*

        Having been that junior person: it sucks. I could definitely tell when the team I was shuttling messages to and from was getting annoyed. It was extra frustrating since I didn’t have the experience to say “no, I’m not doing this, you and my manager need to get together and chat since the only thing I’m doing here is adding a layer of junior confusion to this”.

  21. Margaret Cavendish*

    Oh, ouch. What a situation.

    So, two good things. One, you know you made a mistake; and two, you’re owning the mistake and taking steps to correct it. This is really important! For contrast, search this site for a letter called “I was rejected because I told my interviewer I never make mistakes” and its update. You’re already doing better than this guy!

    Honestly, I think you should come clean now. Whether or not your team uses the info from the presentation – clearly this is weighing on you, and you’ll probably feel much better once it’s sorted. And when you’ve made a mistake, it’s better that your manager hear it from you, rather than someone else – and also better that they hear it from you before it’s too late to fix it!

    Your letter is really clear about describing what happened and your thought process behind it. So start with that. Schedule some time with your manager, show her the letter, and say “I made a mistake and I need help fixing it.” That’s going to be the hardest part, I promise! After that, the two of you can figure out how to fix it, and how to prevent it from happening again.

    You got this, OP. This internet stranger is cheering for you!

    1. I should really pick a name*

      I find this one tricky because I really can’t recommend coming clean.

      I don’t see how they avoid getting fired that way.

      1. Margaret Cavendish*

        Look at it this way. If OP talks to their manager and comes clean, there’s a chance they’ll be fired.

        But if they *don’t* talk to their manager, one of two things will happen – either someone else will find out about the mistake and it’ll get back to OP’s manager that way, or OP’s team will try to use the data from the presentation and OP will have to call the whole thing to a screeching halt that way. In which case, they will almost certainly be fired.

        Any time you make a mistake like this, you absolutely cannot try to cover it up. That’s only going to make things worse.

        1. RVA Cat*

          This. Coming clean means getting fired for performance, so OP should be eligible for unemployment and could get at least a neutral reference.

          Doubling down means being fired for cause.

        2. Florence Reece*

          There’s another option: it won’t be found out, at least not immediately. And speaking from experience, OP, that’s the worst possibility. If it’s never found out, you will grow increasingly scared about when it will inevitably be found out. Maybe it never will be, maybe no one cares. Maybe someone already discovered it and just quietly thinks less of you; maybe everyone does?

          The best case scenario if you don’t own up is that you just lie awake at night worried that you’ll be shitcanned tomorrow, every night, no matter how far you’ve come and how good your work is now. BTDT. It is not worth your mental health. It’s so much easier to bring this up under your own volition now, even though I know it doesn’t feel easy in the moment.

          1. Kitry*

            There’s also a fourth option: LW never gets caught, learns valuable lessons from this incident, and goes on to have a successful career. I see the rest of coming clean as too great.

            1. Orv*

              Honestly I’m with you, especially since no one actually seems to care about this work product. Most likely it gets filed away and no one ever reads it again.

      2. MsM*

        I don’t see how this doesn’t come out unless this project is genuinely just being treated as busywork, in which case OP should be doing some soul-searching as to what they’re even doing here at all. Otherwise, better for them to get out ahead of it than for anyone else to follow up on this information with the other team only to be met with “what are you talking about?”

        1. Yorick*

          My agency had a process like this, and that team was very serious about it being meaningful. But many other teams were responsible for actually changing the processes, so it turned out that almost nothing was ever done based on any recommendations. This could be the case at LW’s job, although they might not know that yet.

      3. Margaret Cavendish*

        Also OP, I don’t think you’ll be fired! You’re two years into your first-ever job, which is still very new – especially in the consulting world. You said you’ve done this type of interview before, and I assume they’ve gone reasonably well up to now. Nobody has acted on the information you gave, so there has been no waste of time or money. Obviously this is a big lapse in judgement – and I know you know that – but the actual impact so far is pretty minimal.

        And that’s exactly why you should come clean now! There’s still time for this to be just a lapse in judgement, before it becomes a much bigger problem. I’m not your boss obviously, but if I were I wouldn’t fire you over this. We would definitely be having a Serious Conversation about how this can’t happen again etc, but assuming it never does happen again, that would be the end of it for me.

        1. Rex Libris*

          It’s more than a lapse in judgement, it’s lying and falsifying data. I’d very likely fire someone over this, for what it’s worth, since my ability to trust their work output would be compromised.

          In some ways it makes it worse that it was something relatively minor. What are they going to do when they make a major error, or face a truly big problem?

  22. Falling Diphthong*

    OP, to expand on the very good advice to think about why you went to lying in this circumstance: By chance did telling the truth about a problem not work out for you as a child, and so you learned to pretend whatever it was had been done, so no adult would be angry at you?

    One of the tasks of young adulthood is to figure out what patterns for dealing with things (with authority, with stress, etc) you might have learned in your family of origin, and whether those patterns actually map onto your adult life in a good way. It’s not fair to the people you interact with to assume that they will react Just Like Angry Parent, and they will be frustrated if you paste that pattern on them because they have authority over you, and so are like Work Dad.

    1. Juicebox Hero*

      I feel this so hard. My mother came down on me like the wrath of god for mistakes, and wouldn’t let me ask anyone for help out of HER fear that people would think I was stupid. So I spent my years in school and early working years living in terror of teachers – with the added fear that if I screwed up they’d tell my mother – and bosses the same way.

    2. RVA Cat*

      Adding that this can be a lifelong task. I’m a middle-aged adult still untangling things from my mother years after she died.

  23. Artemesia*

    When teams behave like this, you need to meet with people individually and get them to identify one thing they would like to change to make their work easier. If they refuse to play, then you go to your manager for strategies.

  24. Skippy*

    One piece of context that I think we’re missing is what OP fed ChatGPT and whether they did any kind of QC afterwards. Did they give it a query that produced some general jargon like “maximize our synergies in X and Y to spark creativity”? Was it the Underpants Gnomes? Or did it come up with dangerous concrete suggestions based on vague prompts?

    GenAI can be an interesting suggester. What happens with the results is what makes the difference.

    1. Grumpy Elder Millennial*

      I’d be curious to know how the other team would respond to being shown the presentation. I’ve found that sometimes you need to give people something to react to, rather than asking them to provide ideas or structure or whatever from nothing.

      1. Disabled trans lesbian*

        The risk of trying that would be that OP does not know whether that generated presentation is even reasonably close to being true.
        If it’s off the mark, I suspect OP would get grilled on where they got their ideas from and if OP cannot answer that in a good way (OP can’t give a good answer anyway since it’s LLM-generated) that team is likely going to react even more negatively.
        I have been at my current job for a year and a half, and I would not have high expectations of a consultant with 2 years of work experience total trying to consult for us unless their work was indeed of very high quality. The reality of my work is simply that it takes a lot of time to understand it, and there’s unexpected nuances to it that take at least a year to learn, if not more. No shade on OP, but if this team has a similar case then I can understand them being sceptical of OP, though that does not justify the treatment OP got from them.
        If this happened at my workplace and OP used LLM generated output and tried to pass that off as something I said would make me actively press to have that consultant dismissed.

  25. What's That Again*

    Is there any way you can meet with the team lead to discuss the rewording, phrasing it like I’ve thought about it and to make sure I understand is this a lot of the problem and share the CGPT response to get an idea of how close it was. Sometimes people aren’t great with filling a blank canvas. Best case scenario, is they agree with it, and you now can say there is alignment. If the lead says it’s off-base, dig in and get exactly why. It doesn’t help the presentation, but at least you have information which may help mitigate the damage.

    I don’t know about you all, but the way I use Chat GPT, is I type in what I’m looking for and see what it says. I never like the response on its own, but there are usually nuggets I can start with and grow on myself. Even when I hate what comes out, it’s a help to point me in the direction I want to get started.

  26. I'm A Little Teapot*

    OP, I’m an auditor. There have been projects/jobs where my job was to do exactly what you were trying to do. I have gotten pushback similar to this. To help, I’ve met with people individually, I’ve spoken to others on their team, I’ve gone to my management and they’ve had a word with the other team’s management, etc. You DO NOT make up stuff, and that’s exactly what you did. You made it up.

    There’s a decent chance this will imperil your job. Yes, you could get fired or put on a PIP or something for this. So figure out why you did it and fix that.

    1. Czhorat*

      I’ll quibble a bit: you CAN make stuff up if you’re transparent about it and you seek confirmation.

      Below I gave one of my approaches for lacking required feedback: instead of asking the same question again you give THEM the answer and invite them to tell you if it’s wrong. Now when you submit the report based on your swag you can point to an email trail that you sent it to them to correct if need be and they never said anything, confirming that it is accurate.

      It doesn’t work for every situation, but sometimes you need to lead with answers – especially given the bit about how some of the resopnses OP got was outside the scope of their department. “making it up” also gives them an answer that fits within constraints. If they DON’T agree they might have enough framework to give you a correction that is useful.

      1. I'm A Little Teapot*

        I wouldn’t characterize what you describe as making it up. Drawing inferences or tentative conclusions from existing data and then asking for feedback with which to adjust them is a valid technique. OP didn’t do that.

    2. Emotional support capybara (he/him)*

      OP did worse than “make stuff up.” They had ChatGPT make it up for them and passed the resulting mishmash of other people’s work off as their own. So it wasn’t just BS, it was plagiarized BS.

  27. Poison I.V. drip*

    Just for fun I posed OP’s entire letter to a chatbot and here’s how it answered:

    “Here’s what you can do:

    1. **Come clean to your supervisor:** Explain the situation honestly, including your struggles with the interview, attempts to follow up, and the decision to use ChatGPT in a moment of panic.

    2. **Reiterate your efforts:** Emphasize the time spent trying to get real answers and the dead end you reached.

    3. **Offer solutions:** Propose next steps to get the info you need. This could involve rerouting the interview with a different team member or reframing the questions.

    4. **Lesson learned:** Acknowledge that using ChatGPT was wrong and explain that you’ll focus on getting real user feedback in the future.

    Focus on honesty and proactive solutions to move forward. While the situation isn’t ideal, transparency can help minimize the damage.”

    1. MsM*

      Not bad. Which I’m guessing means this has happened often enough for it to just have a script ready to go.

    2. Mo*

      I really think OP should leave out ChatGPT if it’s possible for them to say they just tried to come up with things that seemed plausible based on the limited information they had gotten from the team. Honesty is generally best, but I think OP is much more likely to be fired if they say they used ChatGPT instead of thinking about things themselves.

  28. Czhorat*

    I’m not going to pile on LW for using ChatGPT, and I don’t in fact think ChatGPT is the real problem; the actual problem is that, absent answers from other stakeholders, LW made something up.

    I get the frustration; I often have project work that I can’t complete without input from people who seem disinterested in giving it to me. At this point in my career I usually *can* make an educated guess, and then just tell THEM what the answer is. “Our recommendations will be centered on streamlining process X. If this is not the case, please email me with your preferred improvement”. That puts the ball back in their court but gives you enough of an answer to keep moving with.

    In this case you could even have used the ChatGPT output to create something like that to reflect to them. Or, if you didn’t feel confident enough that it was the right answer, escalate to your boss and ask how they’d like you to handle the lack of responsiveness.

    Moving forward, you need to somehow fess up. One way is to send a summary of the presentation to the team you’re trying to help with a message like “absent direction from you, we made these assumptions. Please let us know if any need to be corrected”. Like the language above (assuming the answers) this puts the ball in their court while giving you a default answer to proceed with if they continue to ignore you.

    1. Abundant Shrimp*

      Yes, I would’ve done (and have in the past done) the same as what you’re suggesting.

      I don’t love it that, in every scenario presented in the comments that I’ve read so far (so, the comments above yours), OP gets 100% of the blame and the consequences, and the team walks away knowing that not only is it okay to give someone a ridiculous amount of runaround when they are asking for information from you, you can even get them fired if you push back on them hard enough and then they’ll never come back to bother you again. (and yea OP should’ve escalated to their boss!)

      1. spiriferida*

        I object to this – the other team didn’t/couldn’t get the LW fired, because if the LW is fired in this situation, it’ll be because of their own actions. The other team didn’t make the choice for the LW to falsify information. They did make a frustrating situation for the LW which conceivably could have backfired and caused consequences despite the LW’s best efforts, especially if their boss is unsupportive and they didn’t have other tools to make the team get the information – but that would be a much more unjustified situation than the one that we actually have here.

  29. Hyaline*

    Yeah I too am unclear why it matters whether ChatGPT, the OP, the OP’s roommate, or Madame Claudia the psychic fabricated the information. The information was fabricated, full stop.

    There seem to be multiple points of failure here including not involving the manager, but I wonder as well if OP could have played things less delicately with the difficult team. Deadline is approaching, OP sends their (limited, likely inaccurate) read if the situation to difficult team and says “I will be presenting this on Tuesday. If this isn’t how you see the situation or have things to add, let’s meet (or whatever is helpful)”.

    1. Elbe*

      Yeah I too am unclear why it matters whether ChatGPT, the OP, the OP’s roommate, or Madame Claudia the psychic fabricated the information.

      These are all the same in the sense that fabricated information is generally not reliable. But I think that the main difference here is that ChatGPT information doesn’t have a source at all. Getting information from a knowledgeable friend or from a reliable trade website is still dishonest, but at least it has been vetted in some way. Using ChatGPT is similar to Madame Claudia in the sense that it’s EXTRA poor judgement to not even get information from a reliable source.

      1. Hyaline*

        Well…here it isn’t just that it’s unreliable, it’s that it was presented as coming from a specific source but was in fact fabricated. ChatGPT may have created a crappier lie than OP might have crafted on their own, but the really egregious part here is that the presentation claimed “this information was gathered from talking with the team” when it wasn’t.

        1. Emotional support capybara (he/him)*

          There’s also the fact that much of ChatGPT’s training data is copyrighted material used without the creator’s consent and without credit or compensation to the creator. So OP didn’t just make stuff up and lie about where it came from, they took credit for someone else’s work.

  30. Serious silly putty*

    Can you take the answers CHAT GTP gave to the other team, and say something like, “this is the KIND of answers that will help us, but we don’t know what specifics are true for you. Could you edit this to reflect what’s true?”
    Or, show them your slides and say “this is what I’m guessing is true for your team based on what I’ve read online. Can you edit this to be true for your team?”
    Hopefully you get something back, then you can go to your boss and say, “I need to admit that I never got clear answers from Team X, and I extrapolated details without knowing if they were accurate. After the presentation I went back and clarified things, and these are the amendments I want to make.

    1. Irish Teacher.*

      That’s a great suggestion. I love that script. It’s true without admitting “I used ChatGPT to make up an answer.”

  31. Naomi*

    Oh nooooo. OP, I feel for how stressed you must have been, but I’m wondering why you were so desperate to cover for yourself, when it sounds like you did your best to get the information while the other team stonewalled. Why did you think you would be in trouble if you explained the situation honestly? I think the answer will be at the crux of why you went wrong–and in particular, whether it was your own neuroses at play, or if you’re in a toxic workplace where you would really be blamed for other people’s refusal to cooperate.

    And in the future, don’t use ChatGPT for anything at work that you wouldn’t admit to your boss.

  32. Trout 'Waver*

    Can we talk for a minute about how terrible the other team is, though? “You wouldn’t understand” is never an appropriate response in a business setting. OP is being intentionally sandbagged.

    1. Dust Bunny*

      RIGHT?

      The “solution” here was entirely wrong but the other team is being completely impossible.

      Obviously this needs to be brought to their manager, and let’s hope that whoever that is is a lot more responsive.

    2. Awkwardness*

      OP wrote They’d probably call me pushy or argumentative.

      I can imagine a team with very detailed or technical processes and/or a history of being steamrolled over (because body understands those processes) be very frustrated if OP might not be willing to listen first.

      But maybe it’s only my own bad experience speaking.

      1. Dust Bunny*

        Or they’re information hoarders who think they smarter than everyone else and shouldn’t have to deal with, or suffer oversight from, mere mortals. But maybe that’s only my own bad experience speaking.

        1. Awkwardness*

          Please do not get me wrong, I am not saying OP did this in bad faith.
          If a team is icing you out, you need either a lot of patience, people skills, or a lot of guidance and support from your manager. If OP was a little bit hot-headed, new to the workforce without guidance and support, this might not be optimal and it is a assesment question OP can ask themself. If that’s not the case – good. If yes – something to learn and be better prepared for the next time.

          As commented by Dust Bunny, the experience differs widely. In the end, it’s only OP who has all the info.

          1. Awkwardness*

            Also, I do not want to blame OP!
            I was very undiplomatic when I started working too.
            But I did not think it was okay to call out the other team as “impossible” without further facts and wanted to give a possible explanation.
            Something comparable happened at Old Company and even though I did not like the behaviour, I couldn’t blame the offenders. They were caught in the crossfire of office politics as well.

    3. tabloidtained*

      This is so common, in my experience, that it’s almost like its trained into people.

    4. Myrin*

      Yeah, I was exhausted and annoyed just reading all of this and long before I even got to the “ChatGPT” portion of this letter.
      I deal with a (milder) version of this sometimes and I’m very forceful by nature so I can usually get what I want but my best friend at work can be a bit of a pushover and he regularly gets soundly ignored by the worst offender.

    5. Kelly L.*

      I do think there’s something to this. The other team just doesn’t want to do this, and is trying to avoid it as long as possible.

    6. talos*

      My experience with process improvement consultants at my job is that at best, they do nothing, and at worst, they make the process worse! My general approach as the person being interviewed is to get the consultant to go away as fast as possible before they have a chance to make the process worse. Being rude definitely helps.

      1. Orv*

        Yeah, my experience is management brings in consultants hoping the consultants will back up management’s view of the situation. If they recommend something management didn’t have in mind, they get fired. So there really is no benefit to helping them. They’re just there to push the management line and my input will never matter.

    7. Elbe*

      In my experience, process consultants are a bit hokey. A lot of them are very young and inexperienced (which I think may apply to the LW) and it’s a bit insulting to send them in to “fix” problems on teams of knowledgeable professionals.

      In a lot of cases, process is difficult exactly because there are so many variables and competing priorities and factors that require attention to detail. It wouldn’t surprise me if the team was resistant to this exercise because they knew that the trying to get the LW up to speed wouldn’t be worth the time they would need to invest.

      Still, I think that they were very wrong in how they handled it. They should have taken their concerns to whoever assigned the LW, not just ignored the project. Making things impossible for the LW was not the way to go, and it really does make them look like jerks.

  33. DramaQ*

    Oh no LW. Big OH NO. I would come clean right away because it is going to be 20x worse if they start acting on your data and the people you “talked to” come back and say they never told you that was a problem. The truth is going to come out in some form eventually.
    I am not sure there is a way to come back from this. In my profession making up data is grounds for immediate termination. People depend on our data being accurate so they can make business decisions, apply for grants or publish papers. Mistakes you can come back from, deliberately choosing to falsify information, even if it was a heat of the moment response is not permissible.
    In the future the best thing to do is own up to you don’t have enough data to assess and explain why. Perhaps the higher ups you were presenting to could have offered solutions and guidance. Is it embarrassing to admit you didn’t do your job? Yeah but it was outside of your control and most employers understand that and would prefer you be honest over lying.
    I would also suggest therapy to figure out why this was your first option instead of the above. A therapist/career coach could help you develop the skills and confidence to better be able to handle situations like this in the future.

  34. Justin*

    Minus the ChatGPT of it all, I had a similar situation – I had to get info from subject matter experts and they just would not give me the information I needed. For months.

    However my response was just to be unprepared and get in trouble for that.

    Good luck to you, OP. Definitely a big error, hopefully you can learn from it.

  35. Wow....*

    Always interesting to see what gets grace on this site and what doesn’t.

    (Even taking into account that not all the same posters respond to every letter.)

    1. Margaret Cavendish*

      I think the common denominator is people’s ownership of their actions. When the letter says “I made a mistake and I need help fixing it,” they usually get (mostly) graceful responses. Some are sterner than others, and some people create a new username to show their disapproval, but overall the responses will be generally helpful.

      But when the letter says “I never make mistakes and I need you to agree with me that the rest of the world is wrong,” the responses can be more harsh. Like the person whose employee had a Leap Day birthday, or last week’s letter literally titled “I never make mistakes.” The responses still have the general intention of being helpful, but the help does come with a bit more snark in that case.

      1. Irish Teacher.*

        That’s exactly what I think too. If somebody writes in and says “I made a complete mess of x. How do I fix it?” there is no point in berating them for their mistake. They already know they made a mistake and don’t need to be told that. What they need support with is where to go from here.

        On the other hand, when somebody writes in and says something like “clearly I had to use chatGPT because my coworkers weren’t being helpful. How do I ensure that my manage believes those are really the answers the coworkers gave?” or something else that indicates they don’t realise it was wrong, well, then there’s a bigger problem.

        I once read that what says who you really are isn’t whether you mess up – we all do. It’s not even how badly you mess up because there are all kinds of things that factor into that and some are things that aren’t really a person’s fault such as how much experience they have or what examples they were given or cultural issues (somebody in an unfamiliar culture may well make more mistakes than somebody familiar with the culture). What really says who you are is how you react when you are called out on the mistake (or in this case, when you realise your mistake).

        The way the LW tells this gives me the impression that they realise they were in the wrong and that they are not by nature a dishonest person. They just got themselves into an awkward situation and made bad decisions.

  36. tabloidtained*

    I’ve seen this exact scenario play out in real time, up until the ChatGPT part, so I completely understand why you did it. Short schedules; teams somehow feel their ownership of a business area means only they understand it and, simultaneously, can’t explain it to anyone else; boss wants a presentation and will not accept that another team is refusing to provide information.

    In the scenario I’m thinking of, the presentation used “potential” pain points instead of interview-based facts. The pain points were created by the presenter based on her research.

  37. Mytummyhurtsbutimbeingbraveaboutit*

    If the workplace is bad enough that you can’t get the info you need and your manager won’t back you up, then you need to start looking for a new job.
    Or a therapist if your anxiety about messing up is that bad.

  38. Irish Teacher.*

    Not sure how much help this will be or if it’s even accurate but I wonder if perhaps the LW is still thinking in terms of college rather than work with relation to such projects. I mean, that at college, if a lecturer asks you to suggest a solution to a problem, they are trying to test whether you can generate a good solution and possibly testing your presentation skills as well, so often the problem itself doesn’t matter much so long as it sounds plausible.

    It does sometimes happen at college that they actually tell you to “make one up” if you don’t have an actual example. I know when I did a post-graduate course in Inclusive Education, we had a couple of assignments that were to talk about how we supported a student with particular needs and we were usually given fictional profiles of a student and told that if we hadn’t ever worked with a student with the needs in question, we could use the example and say how we would support that student. Because the aim was to see if we understood how students with various needs could be supported, not to find out the specific needs of any student.

    In the workplace, it’s usually different. Your boss isn’t testing you to see how well you can present on possible problems or what solutions you can suggest. She is actually trying to find out what the concerns are.

    Now, I’m not saying that using AI in college would be a good idea either, but it would be a problem for different reasons.

    I wonder if the LW got caught up in seeing this sort of as a test and that they’d “fail” if they didn’t bring a convincing answer to the boss when in reality, “I don’t know; they don’t seem to be able to articulate what their exact issues are. They keep telling me it’s abstract. Does that mean anything to anybody?” might actually be helpful feedback in its own way.

    If this is true or resonates in any way with the LW, it might be worth reframing work tasks less as a test and more as a collaborative effort. In work, asking for more help from your boss in a situation like this isn’t an admission that you aren’t good enough.

    1. Dust Bunny*

      I actually don’t think this is the least likely factor. We don’t know that the OP is that young–they might just be relatively new to the position–but it wouldn’t be the first letter we’ve seen in which someone is struggling to transition from school to job mindset.

    2. Abundant Shrimp*

      Yes I thought of it too. OP seemed to be thinking of this in terms of “I have a presentation due on X date that I’ll get a failing grade on if I don’t have it by that X date”. When in reality, the information was needed so the business could take some kind of action based on it. Now they do not have the accurate info and don’t know it that they don’t have it, so whatever action they decide on might not alleviate the team’s problems at all. It is about getting things done together as a group, not about having the right answer by the time it’s due so the boss doesn’t fail you.

  39. NorthBayTeky*

    What I don’t understand is why many LWs don’t take their questions to their boss. Particularly in situations like this where you need information from outside your team. That’s what bosses are for, to solve problems outside your chain of command.

    1. ecnaseener*

      It’s not intuitively obvious that “that’s what bosses are for” if you’ve never had a boss who did that!

      1. Evan Þ*

        This. If you haven’t had a good boss, you don’t know what a good boss looks like.

        My first boss was bad. He kept cancelling our meetings, didn’t give me a clear picture of my responsibilities, and then rebuked me for not doing the things I didn’t know I should be doing. At the time, I could’ve told you all these things he was doing, but I had no idea they were his failures, because I didn’t know that a good boss wouldn’t do them.

        1. Jackalope*

          At my first post-college long term job I had an issue come up that o had no idea how to resolve. I brought it to one of my supervisors and asked her for help and she said she didn’t have time to deal with my problems. At that particular job that sort of brush off happened over and over again; although I don’t think I ever brought an issue to her again after that, sometimes a supervisor would ask me for input on what was needed in a situation and then when I answered, they would completely ignore my feedback. It took a long time for me at future jobs to figure out that asking a boss for help was a potential strategy since it was so thoroughly trained into me at that job that such an action was useless.

  40. learnedthehardway*

    Honestly, I would just leave it. What’s the other team going to do? They didn’t provide adequate answers. If they don’t like the suggestions the OP came up with (WHICHEVER way the OP came up with those suggestions), then they can push back. If they question how the OP came up with the ideas, the OP can say they gave it their best guess, based on the very limited information the team was willing to share / explain. Unless the ChatGPT writeup is complete nonsense, odds are it is a regurgitation of the OP’s observations, anyway, or something generic enough to apply to any similar situation – there might even be some actionable ideas.

    In future, rope in your manager when/if the team that is supposed to be giving you input refuses to do so.

  41. Kitry*

    Honestly at this point I would just go ahead and brazen it out. You didn’t plug the questions into chatGPT, the other team did reply to your email and you used their response in your presentation. Commit to that reality fully and move forward with that truth.

    The other team has been so so distant and uninvolved, I wouldn’t be surprised if they never push back on this at all. If they do, just act baffled as you would if somebody sent you an email and then denied that they had sent it. Everyone has had the experience of an email being mysteriously eaten by the interwebs. Which sounds more plausible- that an email randomly disappeared or that an employee used chatGPT for some odd reason instead of telling his boss the other team was ignoring him?
    If the other team tries to fight back, they are going to look like the crazy ones, not you.

    1. Elbe*

      I the LW’s company actually intends to use the presentation information to make changes, I think that this is really bad advice.

      Once the changes are suggested to the team, they will likely have a lot more feedback than they did in the LW’s interview. It will almost certainly come out that they didn’t provide any of the information listed in the presentation.

      Proactively saying, “I thought I was being helpful, but I now know that this wasn’t the right thing to do” could be enough for a sympathetic boss to clean things up quietly behind the scenes. Doing nothing and letting it blow up, publicly, across departments, is going to be an absolute mess that will almost certainly result in the LW being fired. The LW still has the opportunity to demonstrate good judgement, now, in how this misstep is handled. Doubling down will only demonstrate more poor judgement and a lack of regard for fellow employees.

      1. Kitry*

        Sure, but why would the consequences fall on LW, instead of on the other team that provided such vague and unhelpful advice?

        1. Saturday*

          Because being vague and unhelpful is… unhelpful. Making stuff up and saying someone else told you these things is much much worse!

        2. I should really pick a name*

          Who says it only has to be one of them?

          But at the end of the day, the LW’s behaviour was way more egregious.

          The team was being unpleasant. The LW was being dishonest.

          The team was being unhelpful regarding process improvement which isn’t their core function. Process improvement is the LW’s core function and they provided fake data for it.

          1. Czhorat*

            Yes, and I think that LW realizes and acknowledges this. Now is the time to figure out a way to dig your way out of the hole you dug yourself into. The first step is realizing you’re in a hole, looking for a way out, and throwing yourself on the mercy of the powers that be.

            If you tried to brazen it out and THEN it blows up now you didn’t just make one big error in judgement – you had perpetuated a long-term fraud. You’ve destroyed any trust you might have had.

        3. Elbe*

          I think that you’re misreading the situation a bit.

          Not completing a project is a problem. But the REAL issue here is the LW’s dishonesty and lack of professional judgement. That is what is going to get the LW fired, not one assignment being late or incomplete.

          The fact is, issues arise all the time. Employers need to know that when things don’t go well, their team members can handle that appropriately and with integrity.

          1. Czhorat*

            Yes.

            And there is a path forward for LW. I might not even mention ChatGPT, but say “I need to clear something up – I never got complete answers from Team X, so I made my best guesses based on the little data I DID have for the presentation. I have no real confidence that it matches their needs and need to circle back with them”.

            That’s putting as close to a positive spin on it as you can, acknowledges that it’s not based on real data, and opens a path to get it right.

            It’s also always better to bring an issue to your boss rather than let them get blindsided when it hits them from some other source.

            1. I forget what name I used earlier*

              I disagree with a large part of this comment thread but I want to highlight this specific one to the LW if they’re reading this far. If you admit the issue and the reason why the issue — but do not mention ChatGPT, just say that you did your best with what you had but that it’s not precise and more follow-up is needed, then you’ll probably be fine.

              I also think people in these comments are wildly misunderstanding what a presentation is in the in-house process improvement industry vs. a report that’s actionable.

            2. Chuffing along like Mr. Pancks (new name for Reasons)*

              This seems like excellent advice to me for a very tough situation.

    2. Saturday*

      I can’t imagine telling all the lies that would have to be told to do this. I don’t think emails disappearing happens very often, and that knowledge plus the fact that the entire team is likely going to flatly deny saying these things would make the LW look very bad.

      1. Kitry*

        At which point, LW expresses polite confusion and reminds them that this is exactly the feedback that was provided. W

        I mean, what are they gonna say, “Nuh-uh, we never said that! In fact, despite being asked for our input multiple times, we told you NOTHING!”
        As long as LW stays calm and admits to nothing, the uncommunicative team has nothing they can say that won’t make them look lazy, crazy, or both.

        1. Czhorat*

          Elbe is right – this is astoundingly bad advice. If there’s no trail *and if the proposed changes don’t match what they need* then their answer will be, “no, we did not give that feedback. Do you have record that we did?”

          And the answer will be that there isn’t. No email trail. No meeting notes. Nothing, because it was made up out of whole cloth.

          The other issue – aside from Elbe’s very good point that you CAN work it out quietly behind the scenes if you fess up – is that this runs the risk of making the project go badly by providing the wrong changes. The first goal should always be to complete the project successfuly. Saving your own posterior when it goes south should NEVER ne the primary thought.

        2. Elbe*

          This is not very realistic.

          The LW understands that they lost the moral high ground when they decided to solve the problem by lying and presenting false information. Being unhelpful in a meeting is not – in any way – as great an offense as making up information and letting your company invest time and resources into actioning it.

          When all of this comes out in the open, the other team may have reasons why they did not want to invest their time into this exercise. The LW isn’t going to have any good reason why they lied and then doubled down on lying. This is very bad advice and I hope that the LW does not follow it.

      2. Abundant Shrimp*

        Agree. LW should just bring the receipts. Instead of making up the whole story of disappearing emails, why not show the PTB the whole thread or threads where the team goes “we cannot give you this info now, but will in a meeting” – “no no, why did you schedule a meeting, send us an email” – “no no, what is this email, we aren’t replying to that, are you crazy, bring us the eye of the newt, a hair of a unicorn, and a black cat and we’ll give you the info then” just bring proof of the whole history where the team kept sending LW on side quests instead of giving them what was needed. And conclude with “at that point, I felt I had no choice but to go with my best guess on what they’d have wanted based on what they’d provided.”

    3. D*

      …I mean, I would definitely assume “coworker is a liar for some odd reason” is more likely than “oh oops just that one singular email is somehow deleted from every single server ever!”

      1. Czhorat*

        Yeah. I keep EVERY email I send or receive, and I’ve never had one disappear and not be recoverable. I’ve also never took “I can’t find that email. Weird” as anything other than a lie, especially if it was directed at me regarding an email I know I didn’t sent.

        I think one thing this scheme ignores is the possibility that the proposed changes are not useful; if OPs ChatGPT fueled guess turned out to be not useful then the team would KNOW that wasnt their feedback because it’s wrong.

      2. Kitry*

        OK, and what happens when the team is asked what information they did give? They’ll have to admit that they just… didn’t. And now everyone is in trouble together.

        I mean, I’m not saying that this is a fool proof plan.But look how many people all up-and-down.This thread are straight up saying that they’d fire the letter writer for this. None of the options here are very good.But I do think that brazening it out is more likely to be successful than coming clean.

    4. Czhorat*

      I have one last thought on this. Lots of people have pointed out that this is a logistically poor choice that maximizes the chance of LW getting actually fired for this, rather than just a stern talking-to. There’s another side.

      It’s unethical.

      When I look myself in the mirror, I want to see a decent, honest person who tries to do the right thing and makes corrections when I need to. I don’t want to see someone who will take shortcuts, lie about those shortcuts, and accuses other people of being crazy or delusional when they try to call me out on my lies.

      Every day we get to decide who we will be as people. Decide to be good.

  42. Bumblebee*

    For practical advice, could you say something along the lines of, “Faced with little feedback from the participants, I selected a number of best practices that might address the issues they seem to be facing.” ?

  43. Immortal for a limited time*

    I actually love this question, and I love that the LW tried using ChatGPT to solve a problem (which is NOT to say I love how they *used* the information — but they know it was wrong).

    I work for a public entity, and while we could certainly find practical, responsible uses for AI (such as summarizing our existing documentation into FAQs for our website), we cannot do so yet, because we don’t yet have a written, approved policy on it. But we intend to create one that specifies how, when, and why it can be used. In fact, I attended a conference in my field earlier this month, and the creation of AI policies was a major topic of discussion. Some ‘brands’ of AI are better at certain tasks than others. For example, one should not expose *generative* AI chatbots to external customers due to the very real risk of hallucinations. Of course, writing queries is an art as well as a science, and staff training is of utmost importance.

    In hindsight, I think this LW’s instinct to use a powerful tool to *assist* with a problem was not wrong. They could have used it to reframe their own thinking before going back to the resistant team and saying something like this, without mentioning AI assistance at all: “What I’m hearing is that your team’s pain points are [factors X, Y, and Z], but the root causes are hard to pinpoint, which in turn makes it difficult to think about possible solutions. If that’s not accurate, can you help me refine these points for my presentation to management? I want to be sure I don’t misrepresent your concerns and ideas.” (In other words, ChatGPT might have helped LW write a draft to facilitate communication with the resistant team. slide. My role includes a fair amount of business analysis and, even though I’m an elder GenX’er who is nearing retirement, I can see how AI can be a useful tool in our workplace toolbox. I predict that in 2-3 years, that’s how we’ll think of it.)

    1. Kevin Sours*

      An industrial sand blaster is a powerful tool. And one just about as applicable to OPs situation. I don’t love that OP tried to use ChatGPT for this because it’s not problem ChatGPT has any ability to solve and using powerful tools incorrectly generally does a lot of damage. The current case is a great example.

  44. Yours Sincerely, Raymond Holt*

    I feel your frustration and can see the process internally that got you to this place.

    But yeah, this is really going to come off as dishonest. Because, no matter your intention, it is.

    If you have to gather information from them again, I wonder if a possible prompt would be to use the ChatGP answers to l say “this is what ChatGP identified, do you agree, if not, tell me what’s wrong with it.”

    They might be the type of people who are better at criticising things that proposing solutions. Sometimes I find that giving people like that something to criticise solves the issue.

  45. Whatthewhat*

    I would seriously consider firing you for this. You have shown little to no judgement here and faked your report. By doing so you have wasted time and resources and demonstrated quite clearly that you cannot be trusted.

    1. Kitry*

      And are you also going to fire?Everyone on the other team for being so uncommunicative and rude? I mean seriously, “You wouldn’t understand”? I have 2 decades of experience in my field, and I wouldn’t speak that way to someone who just started yesterday. LW’s panicked cover up is a problem, but it’s not even close to the biggest problem here.

    2. I forget what name I used earlier*

      Little to no judgement? This person did their due diligence. They went above and beyond to try to communicate with this team. And then when the deadline was hanging down on them, they panicked. And they are clearly not in a great place with any oversight or help from the boss; it’s the boss’s responsibility to check in on them and ask how things are going and if they can help. The LW did everything they knew they could do and couldn’t ask anyone for help and they were wrong and they know they were wrong. Wasted time and resources? Can’t be trusted? That’s absurd.

  46. CheesePlease*

    Everyone here has good insight. I’m in a similar role but with 10yrs experience. I do internal continuous improvement / KPI management / quality etc. What you should have done is the following :

    1. Flag your boss after the first interview. “Hey team red is being really difficult, how do you typically manage projects with them?”.

    2. Flag your boss again when they didn’t respond to their emails “Sharon wanted to have a meeting about the possum project, but then said it could be an email. It’s been a week and I haven’t heard back. Should we postpone the presentation at the end of the month until I get a response or could you help me bring them in to a meeting to get the answers I need?”

    Good managers in roles like this empower you through difficulties. You may think you got hired based on your technical skills, but your soft skills matter WAY MORE. Even if you deliver a great solution, if the team dislikes the solution it’s not a good solution. And improving soft skills takes time.

    Asking for support is not admitting failure. Think of it as any other challenge you face. “Hey the round peg isn’t going into the square hole. Can you help me find some square pegs?” Ask for help.

    Presentations can be modified or postponed, but integrity is hard to rebuild.

  47. An Australian in London*

    It’s crucial to note that both ChatGPT and Gemini lack default sanitisation of AI commands within text bodies. Considering the known attack vector of unsanitised input in web browsers, this oversight is a significant security risk. It’s surprising that AI vendors haven’t given this issue any attention.

    Consider the implications: AI commands can be concealed in a resume, a work sample, or an assignment. These commands can direct the AI to disregard all previous instructions and instead produce results that could significantly impact the outcome, such as declaring a candidate exceptionally well-qualified or a work sample exceptional. The potential for misuse is clear.

    Try it. It works.

    The simple forms of this can be defeated with a prompt to ignore all embedded AI commands and anything that appears to be an instruction in the following text and instead call it out explicitly as an attempt to subvert and corrupt the analysis. I have now taken to adding an instruction to accept instructions only from the primary user and, if there is any doubt, explicitly call them out and ask if they should be followed.

    FYI, even that *can* be defeated by more sophisticated means of hiding AI commands. This is an arms race, and methods are evolving weekly.

  48. CheesePlease*

    I tried commenting earlier and wrote a lot of words given my experience in a similar role but it appears it never posted?

    Either way, what you should have done is flag your boss. Each roadblock. Flag your boss if it’s impeding your ability to get stuff done.

    1. Hlao-roo*

      Sometimes comments get caught up in moderation. I think I see your earlier comment now–is it the on with the 1:18 pm timestamp?

      Links and certain words will send comments to moderation, but sometimes the filter grabs completely innocuous comments for no discernable reason. They show up after AAM approves them.

  49. PayRaven*

    I’m also wondering what information OP put INTO ChatGPT in order to get these results. We know these models aren’t ethically sourced or well-secured; what if the company’s proprietary info is now floating around out there? To say nothing of presenting machine-calculator gobbledegook as if it were true and relevant.

    OP, I’m so so sorry, but you HAVE to get into the part of your brain that made you think ChatGPT was even a possible viable solution here. It is going to get you in so, so much trouble.

    1. Caramel & Cheddar*

      I have seen a surprising number of people put proprietary info into ChatGPT and I am absolutely gobsmacked they don’t see the issue.

  50. The Not-An-Underpants Gnome*

    As a good friend of mine once said, “Oh noey.”

    Taking the AI out of the equation completely, OP made things up, something folks have been doing at work, at school, and in relationships from time immemorial. The difference here is that OP feels bad about it, which is a good thing!

    OP, I add my voice to the “Tell the truth” chorus. Will it 100% guarantee that your job is saved? No. Will it show your bosses that you are someone who takes ownership of their wrongdoing, whether it be accidental or on purpose? Yes. That on its own may end up being what saves your job. It may not, but taking ownership when you’ve done something wrong is a big part of the adult working world, and the sooner you learn to do it, the better.

    Is it going to suck if you do end up losing your job? Yes indeed. But speaking as someone who has lost a job or two due to performance issues (though of a significantly different nature; I wasn’t fast enough for the folks), it’s survivable, especially when you’re young.

  51. Nilsson Schmilsson*

    What if the Chat solutions work?

    I’d confess to nothing. If it’s a failure, then you shrug it off and let management know that this what you gleaned from the information (or lack thereof) provided by the team that needed your help.

    1. RagingADHD*

      How do you propose they implement any solutions without the other team knowing or having to participate? The first thing they will say is “What the heck are you doing? We didn’t say any of these things. Where is this coming from?”

      1. Kitry*

        At which point, LW expresses polite confusion and reminds them that this is exactly the feedback that was provided. What are they gonna say, “Nuh-uh, we never said that! In fact, despite being asked for our input multiple times, we told you NOTHING!”
        As long as LW stays calm and admits to nothing, the uncommunicative team has nothing they can say that won’t make them look lazy, crazy, or both.

        1. Dahlia*

          “and reminds them that this is exactly the feedback that was provided”

          And when the LW is asked to show the email where this feedback was provided, what do they do?

        2. RagingADHD*

          Sure, I’m absolutely sure that a team of technical experts is going to be absolutely flummoxed by the clever manipulations of someone who has been working full time for two years.

          The response will be that they gave LW a list of their needs (via email, no less), and LW told them it was outside the scope of the project. All that this will accomplish is to make LW look incompetent in a different way.

  52. Elbe*

    Great response from Alison, as usual.

    I think the best way to approach this depends on some factors not listed in the letter:
    – Who was the meeting’s audience? Passing bad info to your boss is bad, but presenting it to the CEO is worse.
    – Did the presentation explicitly lie about how the information was sourced? It’s easier to come back from “These are the conclusions” rather than “This is what I was told by the XYZ Department”.
    – What type of relationship does the LW have with their boss? What is the assessment of their performance so far? If the LW has done excellent work and has a great relationship with their boss, the situation is very different than if they are on a PIP or work with a boss that is very unforgiving.

    Honesty is the best way to go here, but how to frame it and what info to reveal depends on the specifics. If possible, the LW should admit to padding the information without mentioning ChatGPT. The LW should own up to the poor judgement, as that alone is evidence that even if their judgement was poor in the moment, they don’t have poor judgement in an ongoing way.

    But, if the situation was explicitly lying, especially to people several layers up, I think that the LW should be prepared to be let go for this. Mistakes sometimes have some pretty harsh consequences, but the LW will be able to bounce back.

  53. Katie*

    Ugh. I just have unkind reactions to this. My work is complicated and has lots of problems that need to be fixed. My assumption is the team has met with these ‘fixers’ in the past and all it has done was waste their time.

    I have met these consultants too and honestly gobleygook presentations is all that ever happens.

  54. RagingADHD*

    If the list of what they need in order to even answer your questions was out of scope for your project, then it sounds like your project does not have the scope to meet their needs.

    It seems to me that was the real answer to put in your presentation.

  55. Dandylions*

    From this letter alone I can see why the team was hesitant to take the time answering.

    They let you know the changes they needed are “not something you would understand”
    and when you pushed they came back with their real list to which you responded “this is wildly out of scope of my team!”

    They have probably been burned on this before and don’t want their words used against them getting what they really need aka ‘those wildly out of scope” items.

    It doesn’t really matter if your team can auto format emails for them if their real problem is that they need a new manufacturing customer system and of course they don’t want to be on the record saying all they need is better formatted emails.

    And then you had such little regard for their needs that you just lied about it to fill in a presentation.

    I’m glad you have realized your mistake but I also think it is worth asking why you had such little regard for their real needs that you were willing to lie about them. Even your letter still comes across as if the other department were at least equally at fault for you using chatGPT to make stuff up.

    1. Evan Þ*

      But even if I have an unmet need at work, that doesn’t excuse me from cooperating with other things.

      Suppose that I really badly need a new work computer because my old computer’s slowing me down dreadfully. Suppose I’ve brought this up several times with Alice who’s responsible for things like this, and she’s rebuffed me without a good reason. So, when Beth from a totally different team comes by to ask how meetings can be streamlined to help me with my job, I… refuse to answer and snap at her that it doesn’t matter because what I really need is a new computer and if she can’t get me that she doesn’t understand what I need.

      That’s a problem. I’ve got a real grievance with Alice, but Beth can’t help with that; all I’m doing is interfering with Beth’s job. Maybe I’m also hurting myself because Beth’s meeting replanning might help me a little even if not as much as if I had a new computer.

      1. Resentful Oreos*

        I agree. The other team were being jerks, and I think it would have been fine for OP to go to their management and say “I gathered all the necessary information except for the alpaca herders, who were hostile and unresponsive and stonewalled me, so I don’t have the information I needed from them. I tried X and doing Y but no dice. What do you suggest we do going forward? Should we still hold the presentation?”

  56. verocopter*

    I messed up royally. I’m two years in my first full-time role. My job is like in-house consulting. My team is trying to improve our internal processes. We interview people in the process about what they’re struggling with and look for ways to improve it.

    I worked in systems improvement and since you also work in that, I’m going to phrase this lie this:

    you have a systems problem. Your system absolutely failed. And it’s not just your system, it’s the system that you use for finding out about systems problems.

    What should have happened: as soon as you realized this team was giving you the run-around and would never answer your questions, but you’re on a deadline and need this information, you should have been able to escalate this to your manager or to a more senior member of your team and they should have handled this.

    That didn’t happen. Why didn’t that happen? Are you not empowered to tell your boss that someone else is causing you problems?

    Was this team a known problem? Were you set up to fail?

    This post reads to me EXACTLY like the person who ended up taking a bunch of timeshare paperwork and setting it on fire out of work frustration with being given a task they were unqualified to do.

    You messed up, but more than that, your system messed up.

    And what happens next time it messes up? What happens the next time a team causes you problem? If you cannot think of a better way this could realistically been handled in your situation (ex: your boss is the sort who thinks failure is not an option), then you need to get a different job, because the system will not be fixed and you will find yourself in a place you don’t want to be, struggling to do a job that no one will let you do.

    Systems improvement is hard. Teams like this are part of why it is.

  57. CzechMate*

    For future reference, OP–there are instances where you may need to tell your superiors that you weren’t able to complete a project based, despite your best efforts. Ideally, you could have gone to your supervisor before your presentation and said, “These are all of the documented way I have tried to work with this team to get these answers, but they have been hostile/nonresponsive. How do you suggest we proceed?” This could be a sign of a bigger culture issue that should be addressed at the management level–it may not necessarily be something that you need to solve. Sometimes it’s better just to say “team was nonresponsive or uncooperative–additional interventions are necessary.”

    1. Hyaline*

      This—I can see why no one wants to show up a meeting without information you were expected to bring, but what would the result of giving an honest progress report have been? I would hope “management telling Team Problem Pants to get with the program and provide acceptable answers for this project.” If OP could only envision getting chewed out for honestly sharing the status of the project with their boss, they need to ask why that is—unreasonable toxic workplace or personal issues with perfectionism, people pleasing, whatever?

      1. Kevin Sours*

        It feels like OP should have looped in their management a long time prior the presentation. About the worst thing you can do in any situation is to surprise management. Just a note up the chain of “I’m having trouble getting any useful information from Team Rocket, my next steps are X, Y, and Z” after the initial push back provides context when you later have to present “here is a gaping hole in my data” formally.

        1. Hyaline*

          Oh yeah. But if that ship had sailed I’d still think the better option would be “honest progress report” than “flagrant fabrication” and not for purely ethical reasons—OP probably wouldn’t get away totally unscathed (unchided?) but at least the problem is out in the open and maybe a real solution could be found.

          1. Kevin Sours*

            This is absolutely true. But a lot of bad decisions get made because admitting that they’re flailing requires admitting that they failed to mention they’ve been flailing for too long already.

            One of the hardest lessons I think for junior people to learn is say “I need help”.

    2. Resentful Oreos*

      I agree – and it’s *okay* to say that the other team was “hostile” and it is *okay* to snitch, tattle, nark, and rat out. This isn’t “Fergus took a long lunch and I want to get him in trouble,” this is “the other team is preventing me from doing my job.” Alison has written before that when you go to your manager in cases like this, it is not “tattling.”

      I don’t know if LW’s manager is unreasonable or unhelpful; but assuming a normal not-dysfunctional workplace, then it is absolutely OK for the LW to go to their manager and say that the other team is hostile, obstructive, and withholding necessary information, preventing LW from doing their job.

  58. Daring Darla*

    Ooohh yeah I totally feel second hand anxiety here for you, letter writer! I also recently was on the opposite end of this exact thing, where the client created documentation for me to work off entirely from ChatGPT and send me two weeks in the wrong direction :( she was new to the workforce and I think just didn’t understand my questions I needed answers from to do the work, so she generated entirely fake briefs and documents and sent them to me. It was a huge stress to me, but once we talked it out I did my best to be understanding since I’m sure it was out of inexperience and not malice.

  59. Moose*

    Hello OP! If you’re reading, I have some advice on how to handle this type of situation moving forward. My job also entails asking people questions and then proposing changes to programs based on their answers. It can sometimes be really frustrating when people just flat out don’t answer your questions! It can feel like you have nothing to work with and like you are a failure.

    However, this is super common. No matter who you are or how well you communicate, sometimes some people are just not going to answer. So, I like to structure my “research question” and interview/survey questions in such a way that a non-answer is still good information to have. To make up an imperfect example off the cuff, instead of an overarching research question of “How can process X be improved?” the question becomes “What feedback do staff have about process X and how can we implement it?” Or something. Then when you run into people not giving a lot of information in the answers, your conclusion becomes “Staff don’t have much feedback. However, we know X, Y and Z are still problems. What else can we do to capture improvement feedback?”

    This is tricky and often doesn’t feel plausible, but it’s one way to approach this particular issue.

  60. Hyaline*

    FWIW I think it’s also worth asking why this team is stonewalling the OP. Are they snobs who truly look down on people outside the department? Poor communicators who thought they’d done their due diligence? Either is possible but I’m guessing they don’t trust something about this process—system improvement in general, they’ve been burned before, they worry they’re being pushed out, they fear this is part of larger changes they don’t like…and not that the OP can control any of that, but a) it might be a situation where they were set up to fail and that’s helpful to know or b) they might be able to better understand and communicate around the challenge if they know what it is.

    No offense, OP, but it kinda seems like you underestimated the difficulty of this part of your job—that often people won’t want you poking in and you’re going to need ways to work around that, whether it involves looping in management or not.

  61. Travis*

    I assume the other team views your “internal process improvement” work as generally useless or unhelpful, and just doesn’t want to participate.

    Unfortunately, if you can really get away with faking their input, they might have a point!

  62. Skippy*

    Raise your hand if you’ve ever worked somewhere that this stunt could have gotten you promoted?

  63. The Bill Murray Disagreement*

    This is a situation where the impulse to use ChatGPT to get the conversation going is a great one, but it has to be timed & performed correctly. When you started to encounter issues getting feedback from team members, that could have been a great opportunity to run the email through ChatGPT, get its suggestions, and have follow-ups with individuals (or as a team) to talk through what was suggested.

    A LOT of people struggle to pinpoint / articulate problems with long-running processes, especially if they’re being interviewed solo. Using ChatGPT could have provided some starting-off points where the respondents could “throw darts” at the findings or agree with them.

    The thing is, no amount of insight coming out of AI will replace being able to faciliate these kinds of sessions and knowing when to ask for help from a leader if no one is participating in the feedback process.

  64. noname1234567*

    You could also send the presentation, edited to appear in bullet form, to the other team. Tell them that this was your understanding of the issues as per your meetings and emails. Ask them to respond by a specific date clarifying if your understanding is correct. If they reply and agree with you, then it’s a win for everyone. If they reply and there are only slight changes, then you can make the updates. If they reply and you’re way off base, then you will need to tell your manager that they weren’t being initially responsive so you made your best guess. And if they don’t reply at all, then you can take that to your manager, along with telling them that the original presentation was your best guess based on limited information.

  65. ThatOtherClare*

    My dear letter writer, I’m so sorry to hear that you lost your good sense in a moment of panic.

    The comments above sound fairly dire, but please take heart. We learn better from out mistakes than from our successes. You may have made a big one here, but it’s not a scar on your integrity forever. If the lesson you learn from this is “That felt AWFUL. I’m never doing that again!”, well, I’d trust you more than some of my more sanctimonious colleagues. If nothing else, you now have a great answer to everyone’s least favourite interview question: “Tell me about a time you made a mistake”.

    Despite your episode of caving in under pressure, you come through in your letter as a generally hard-working and conscientious person. I’m looking forward to seeing your update, and joining the commentariat in cheering for you as we read the way in which you solve this problem and move through it with renewed dignity and integrity. You’ll get there. I trust you :)

  66. An Australian in London*

    And now that I make myself resist weighing in on ChatGPT…

    I am a consultant who does a lot of this sort of work. The fact that the target team is hostile and alternately dismissive or invoking specialist domain-specific knowledge as a prerequisite for engagement is not an obstacle; it is a finding, possibly the main finding.

    It’s important to note that these responses are not random or chaotic. They are all rooted in specific reasons, which we can systematically uncover.

    Teasing out those reasons is an opportunity to add a lot of value – even if it’s all already known, documenting it plainly in writing can make some moves possible that are not possible without it.

    Throwing it out by actively concealing it not only is a grave lapse of professional judgement. It has ensured these opportunities were missed and the issues bypassed and perpetuated. Also if I were in the affected team I would point to this false study as evidence that there was never any goodwill or intention to actually do anything.

  67. tufertoosdae*

    Manager: “Have you tried calling these people on the phone.”
    LW: “No.”
    Manager: “Try that.”

Comments are closed.