61

I am writing a paper with a co-author. I spent a considerable amount of time (several months) writing a first draft of the paper. I then sent it to my co-author, asking them to revise the draft and add their contribution in specified sections.

The co-author returned the draft within a few days. The required parts were added and the writing was heavily edited. The style of the new document looked similar to a few other documents that I knew were produced by ChatGPT. I asked the co-author and they admitted (quite proudly) that they had used ChatGPT to assist with the writing and editing of the manuscript.

I was not happy with the resulting state of the document. There were several obvious errors, which I pointed out to the co-author. But the whole document also seemed a bit weird to read. I could not really define it, but reading it gave the feeling that the document was AI-generated. I was concerned that there could be other, less obvious mistakes in the document, and that this could reduce the paper's chances of passing peer review.

I shared my concerns with the co-author. They agreed to correct the mistakes I had noticed. They claimed that they had checked the document after it was generated, but that a small number of issues might have remained. They dismissed my argument that the writing is unnatural and weird to read, and requested to point out specific line-by-line errors, which should be corrected. They suggested that I could proofread the document and edit it again before submission, since I have concerns about its quality.

I feel that my co-author did not put sufficient effort into their writing, and instead leans on ChatGPT, on my work as a co-author, and on the work of peer reviewers. This leaves a bitter taste in my mouth, but I do not know if this behaviour is unethical or wrong in some other way that I can't quite describe.

Are my concerns valid, and how can I best communicate them to the co-author?


English is not my first language, nor is it my co-author's. I appreciate that ChatGPT can correct errors in the use of English and improve grammar. I am concerned that it might introduce factual mistakes, subtly shift the meaning, or generally make the text unpleasant to read (although as a non-native English speaker I find it difficult to explain why that happens).

Sursula
Dmitry Savostyanov
  • 69
    Even if there is disagreement on the ethics, it's totally normal to be pissed if a co-author butchers your writing. Authors should be open to considering the changes suggested by someone else but that doesn't mean that if someone rewrites your paper to be worse that they get their way because they touched it second. It's also extremely rude to make changes and then expect you to fix the things they screwed up. – Bryan Krause Jul 14 '23 at 18:09
  • @BryanKrauseisonstrike The co-author agrees to correct mistakes introduced by their edits, but insists that I should point them out since I believe these errors exist. They disagree with my assessment that the writing is unnatural and suggest that I should re-write it because this notion is not something objectively measurable. – Dmitry Savostyanov Jul 14 '23 at 18:14
  • 41
    I try to look at the bright side! I reviewed numerous math papers written in bad and overly concise English, lacking any explanations or context, full of mathematical jargon that the authors - after internalizing it - apply without thought or consideration. Now I'm looking forward to reviewing math papers written in flawless and overly wordy English, larded with misleading context and pointless explanations of trivialities, and still full of mathematical jargon that ChatGPT - after internalizing it - applies without thought or consideration. Who wouldn't welcome a bit of change now and then? – Jochen Glueck Jul 14 '23 at 18:24
  • 15
    The effort is in identifying the mistakes (which they expect you to do), not fixing them. They don't have an objective measure that their version is any better than your original, either. Arguably, the presence of errors you've identified is the closest thing to identifying that their work is objectively bad. If a human had made a series of edits to a paper's language, they should be able to comment on why each change was made: e.g., subject-verb agreement, run-on sentences, terminology used consistently within the manuscript and within the relevant literature, spelling correction, etc. – Bryan Krause Jul 14 '23 at 18:26
  • 24
    Call me crazy, but the fact that the co-author didn't say up front they were using ChatGPT seems like a signal they're on a tack of sneaky, duplicitous behavior. – Daniel R. Collins Jul 15 '23 at 01:28
  • 12
    Note that even if there were no compelling ethical argument against writing a paper via GPT, you are still within your rights to refuse co-authorship of a paper which is of poor quality and might harm your professional reputation. – GB supports the mod strike Jul 15 '23 at 07:32
  • 2
    Perhaps worth clarifying the level of English fluency of the various individuals, and whether this is the driver for using ChatGPT. – avid Jul 15 '23 at 08:23
  • 3
    This coauthor sounds like a very unreasonable person who considers you beneath them. I find it hard to imagine that you have any reason to put up with this. – Servaes Jul 15 '23 at 08:25
  • 3
    Related question prior to ChatGPT: Is paraphrasing my own texts using online tools is okay? Moreover, regarding "They suggested that I could proofread the document and edit it again before submission", if you've put a lot of effort into careful and precise wording already, you're probably going to have a more difficult task in proofreading and reediting the document than initially, because you now have to look for possible "not quite right" phrasings and figure out how to fix them while adhering to the various other changes made. – Dave L Renfro Jul 15 '23 at 19:35
  • 2
    Out of curiosity, do you use a version control system for your paper, like Git? – JNS Jul 16 '23 at 19:52
  • 1
    Two co-authors writing a paper in English, which for neither of them is a native language, is tough, and so I can understand the desire to do better than your own capabilities and also then the belief of your co-author that ChatGPT-written material might be better "English" than his English. But native speakers will know it isn't natural English. Just look at all the discussion (and the ongoing strike!) over at meta.stackexchange.com on how ChatGPT written material is obviously detectable because it is unnatural. – davidbak Jul 17 '23 at 01:40
  • 4
    There is another issue: the entire contents of your original paper are now part of ChatGPT's database, which means someone else could wind up with your text in their paper. That's how ChatGPT builds its database. So effectively the paper has been leaked, even if not in an easily identifiable media. Although I suspect the right prompt would allow ChatGPT to produce your original paper. For this reason many businesses, INCLUDING the big tech companies with AI products, prohibit employees from submitting any internal materials to ChatGPT. – formergradstudent Jul 17 '23 at 06:42
  • 2
    As to the validity of ChatGPT's output, you need to verify everything it did for correctness and double-check to ensure nothing was lifted verbatim (even snippets) from a previously copyrighted source. – formergradstudent Jul 17 '23 at 06:42
  • 3
    "requested to point out specific line-by-line errors" — Have they provided specific line-by-line reasons for every edit in the document? – Dmitri Urbanowicz Jul 17 '23 at 09:23
  • 2
    I have been through this experience with a co-author cheerfully butchering the English language of a manuscript we were both working on again and again and all I can say is I feel your pain. It's incredibly frustrating, especially when I am a native English speaker and he was a non-native. – Tom Jul 17 '23 at 13:22

12 Answers

50

You're right to be angry at this. It is unethical, and many journals expressly forbid the use of ChatGPT during the preparation of manuscripts (for example, Science and all Springer journals).

I might be reading too much into this, but since you didn't tell them straight up that it is unethical, I assume that the co-author is a superior. You could identify a journal that forbids the use of ChatGPT and highlight its policy to your colleague.

Leon Black
  • 1
    how is it unethical? Can you elaborate on that? Or is your argument that it is unethical because journals expressly forbid it? Can you briefly outline their argument in that case? The general advice to point to a journal's policy, ideally the journal the paper is being written for, is of course a very good point! – Frank Hopkins Jul 15 '23 at 11:10
  • 30
    @FrankHopkins it is unethical because it is plagiarism. The co-author did not actually author the text but is presenting it as though it were their work. – terdon Jul 15 '23 at 12:09
  • 3
    @terdononstrike that is arguable, as no other person wrote the text, and you would then need to consider translation tools unethical for the same reason. But if that is the stance of the author of this answer, it would make the standpoint/argument of the answer clearer if they added it. I.e. we don't need to discuss in the comments whether it is ethical; my main point for the improvement of the answer is that it does not include its reasoning for why this is unethical - as that is part of the question, imho context-dependent, and in general still under open debate, that would be essential. – Frank Hopkins Jul 15 '23 at 12:17
  • Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on [meta], or in [chat]. Comments continuing discussion may be removed. – cag51 Aug 03 '23 at 21:54
33

I'm not sure using ChatGPT to write papers is necessarily unethical (assuming proper citation). I wouldn't recommend it, and many journals have explicitly banned it from the author list - though this is generally based on the argument that AI cannot fulfill all authorship criteria. I think it can become unethical if one tries to pass off AI-generated work as their own.

I think editing papers with ChatGPT is even more of a grey area. There are already plenty of editing companies that provide lower-cost services that rely on AI. How is using a tool like ChatGPT to rework a paper you wrote any less ethical than paying for an editing service (whether that service uses humans or their own AI)?

With that being said, your co-author doesn't actually sound like they are contributing to the writing. Instead, they are shifting the work onto you. I don't think using ChatGPT in and of itself is the issue. The issue is that they are trying to take a shortcut and claim credit. Assuming they have contributed in other ways, you could consider letting them know you expect a more direct contribution to the writing. Or perhaps say that while ChatGPT is a powerful tool, their contribution to the manuscript must be their own. Finally, you can always just express to them what you have said here: that you don't like the results of the AI-edited/generated writing.

The nuclear option (which might be appropriate if they have not contributed otherwise) is to tell them that editing with ChatGPT is not sufficient for authorship, and thus, if they wish to remain an author, they must make their own unique contributions.

sErISaNo
  • 32
    You can cite ChatGPT, but ChatGPT can't cite its sources. – Luke Sawczak Jul 15 '23 at 02:29
  • @LukeSawczak you can feed ChatGPT the bullet points it needs to formulate into a full-fledged text, and then insert the citations you got those factual bullet points from at the right places, saving yourself the writing bits. – Frank Hopkins Jul 15 '23 at 11:12
  • 5
    Good points about the general question of whether chatGPT is ethical. If it is generally unethical, then are non-native speakers also not allowed to generate their initial paper draft with a translation engine from a draft in their own language? And why do we accept the use of computers in writing a paper? Shouldn't an author write down every letter with their ink-dripping feather? It obviously depends a bit on how the tool is used, as a support tool or just to generate the whole paper from two sentences. – Frank Hopkins Jul 15 '23 at 11:16
  • 2
    @FrankHopkins As my late great-uncle Brian said in his British accent, "Tha'... is no' a guarantee!" What it fills in between the headings can easily come from diverse, plagiarized sources. Citing the questions you asked doesn't mean you've cited the answers you've been given. As for the more significant differences between LLMs and other technological aids, I can't do better than mention my talk to the ACSE of Ontario in April: https://www.youtube.com/watch?v=GbF0WxiZzts – Luke Sawczak Jul 15 '23 at 14:57
  • 5
    I'm not advocating for using ChatGPT to generate large sections of text. I'm not convinced it's impossible to do so ethically, but I would not try. I still don't think it's any different from existing editing services (especially since they already are starting to incorporate AI and LLMs into their workflow). While your talk was interesting, I don't think it's all that applicable... – sErISaNo Jul 15 '23 at 15:30
  • 4
    @FrankHopkins using a computer for writing a paper does not cause any difference in the information conveyed in the text. Written by pen and later typed by somebody else, you would get exactly the same thing. For a translation tool, it's already not exact (and indeed a machine-translated paper should at least be double-checked by a bilingual human) but at least shouldn't have really extra information. But translating bullet points into prose actually requires filling in gaps, because bullet points are incomplete (else why not just write papers in bullet point form and leave it this way?) – leftaroundabout Jul 16 '23 at 02:32
  • 2
    @sErISaNo Thanks for checking it out -- not directly applicable to writing/editing except to argue that there is no production of knowledge (as there should be in publishing). Editing well is in my mind vaguely possible to boost with AI, even if a bad idea, but for writing meaningfully it's not possible. – Luke Sawczak Jul 16 '23 at 14:22
  • @leftaroundabout obviously both aren't identical, but it seems to me there is an inherent feeling others need to do stuff manually because that's how we did it for aeons. I keep saying by the way that of course you need to proofread whatever AI spits out, same as with translations. The part where it helps is generate a nice text with full sentences that flows, is easy to read and perhaps adds details that one might need to correct. Going from bullet points to written out text costs time, that is why in some cases I'd rather have the AI do a first draft on which I can iterate. – Frank Hopkins Jul 16 '23 at 18:09
  • @FrankHopkins the time it takes for the translation from bullet points to prose should be pretty insignificant though, compared to the time to come up with the original ideas / doing the research. And it's a good opportunity for re-visiting your thoughts and reconsidering the general structure. If you find yourself (as an experienced English speaker) spending too much time fleshing out bullet points, it's an indication you're writing too much boilerplate content, which is in itself ethically dubious. – leftaroundabout Jul 16 '23 at 18:31
  • @LukeSawczak Scottish accent. British accent doesn't really mean anything as there are multiple countries contained in Great Britain. It's like saying someone speaks Arabic with an Arab accent. – Tom Jul 17 '23 at 13:19
  • @Tom True. But being unable to specify where in Britain his accent was from (not Scotland, but somewhere else with glottal stop for final /t/), I went for the general and hoped it'd be understood! – Luke Sawczak Jul 17 '23 at 13:41
  • @leftaroundabout lot's of things that one automates are overall negligible, but a) they add up and b) they free mental capacity so you don't have to switch thinking modes. Quite a few people have big issues with starting from scratch. I often would take an old paper as a template and then adjust, it'd be awesome not to worry about that step. Beyond that, just because something doesn't seem useful to you and your work process it isn't unethical or a problem for the output. This goes from "it's unethical" to "I don't see the point, so it's unethical". – Frank Hopkins Jul 18 '23 at 20:07
  • @FrankHopkins I'm all for automation, but ChatGPT isn't an automation tool. An automation tool is one that does something in a well-defined manner which you would otherwise have had to do yourself. If something goes wrong with automation, you can step back and look where the problem happened, and change the procedure. ChatGPT meanwhile does tasks that are at best vaguely specified, and even when the results are useful then nobody knows how it actually did it. Or: useful results are actually an amalgamation of other people's work, and that should be properly attributed, which ChatGPT doesn't do. – leftaroundabout Jul 18 '23 at 20:26
  • @leftaroundabout all those points are valid regarding how AI operates (this kind of), but in this context I feel them totally misplaced. I'd worry for things like AI driving my car but here you have a safe gate where you can manually check and understand the output before it is anywhere else applied. When you start writing you don't have a fixed text in mind either, so that the automation does automate that process of writing "some text". there is no "determinism" in my automation definition, it's just that some task I would need to do myself is done by magic, err a machine. – Frank Hopkins Jul 18 '23 at 20:31
  • 1
    @leftaroundabout from my point of view most criticism I've seen go against bad usage of AI tools, not really against AI tools themselves. e.g., assuming you just have the tool write a paper out of a couple sentences you hand it and then hand the paper in like that -> I agree that is unethical because the tool is not properly used; and I agree OP's colleague acts unethical by misusing the AI tool and not respecting OP by simply rewriting a complete section (exception: they are advisor and OP inexperienced undergrad). anyway, I think we're past improving the answer, so thx for the exchange^^ – Frank Hopkins Jul 18 '23 at 20:37
  • 2
    @FrankHopkins I see it quite the other way around: AI-driving-my-car is OK as long as it's statistically safer than my human driving. I don't really care how the driving happens after all; it's just a means to get to a destination. But writing is not just a means to get to a destination. The whole point of writing is to facilitate human understanding. That's subverted if a tool which nobody understands sits in the middle of the process. – leftaroundabout Jul 18 '23 at 20:45
  • 1
    ...And if you say the particular job the tool does is unimportant, "just bullet points" - well, let's get rid of the job entirely then! There's a point to be made that papers aren't the best way to convey research, IMO a combination of well-typed and well-documented source code with interactive/video presentations may be the future. But if we don't do it with tools we understand, then we're on a slippery slope indeed where more and more of the research will be done by ever-smarter AIs until we don't understand anything at all. – leftaroundabout Jul 18 '23 at 20:45
  • @leftaroundabout okay that last bit is funny; that basically means anyone but a handful of CS scientists are on unethical footing, because most scientists won't be able to tell you how the PC they write their paper with works. They can check that the output matches what they wanted to say though. Which is exactly the same as (properly used) AI text generation. you have to check and understand what it provides you upon your input. If that distinction doesn't get across, well, I guess I should give AI a chance to formulate it as my manual approaches seem to fail. ;) Anyway, good night or day. – Frank Hopkins Jul 18 '23 at 21:01
15

I share your distaste for your coauthor's use of ChatGPT, but I would not go as far as to say that any use of ChatGPT for academic writing is unethical. The following, however, is deeply disrespectful:

They dismissed my argument that the writing is unnatural and weird to read, and requested to point out specific line-by-line errors, which should be corrected. They suggested that I could proofread the document and edit it again before submission, since I have concerns about its quality.

Your coauthor has made far-reaching changes to your draft, changing the tone and flow of the text, and introducing enough obvious errors to raise serious concerns on your side. It seems that your coauthor has not pointed out any specific line-by-line errors to you, and they have not proofread the document themselves. To follow through on the parallel with your coauthor's own words above: they seem to have no concerns about quality.

This attitude is deeply disrespectful. It shows that your coauthor has little regard for your work and does not value your time and efforts at all. A respectful yet effective way to point this out is to simply return their request to them. A message along the following lines should do, perhaps with some fluff depending on culture:

Thanks, coauthor, for going over my draft, and for addressing my concerns about the quality of your version of the draft. I've proofread it and edited it again, see the attachment. If you find any more errors or room for improvement, please point this out specifically, line by line.

Of course you attach your original draft.

Servaes
  • 11
    Well, I wouldn't call the response "a respectful yet effective way"; it's passive-aggressive and escalates the situation. Is it appropriate? Perhaps. But respectful, not really, and certainly not diplomatic. That might still fit, since the other person was not respectful either with their rewrite... but then you might word it as "an equally non-respectful way" ;) I upvoted, but imho it should be clear that this "opens the battle", even if not completely directly. – Frank Hopkins Jul 15 '23 at 11:27
  • 2
    @FrankHopkins I completely disagree with you. OP has the right to ask his/her coauthor to reach a certain quality standard if they are going to work together. You are not rude for asking someone to do their job properly (or, at least, to make a serious effort to contribute). – Amelian Jul 15 '23 at 16:25
  • 4
    @Amelian it's not about pushing back but how. – Frank Hopkins Jul 15 '23 at 20:12
  • The "respectful yet effective way to point this out" is more akin to "the nuclear option". Depending on OP's relationship with coauthor, this could easily make things worse. – Kev C Jul 24 '23 at 22:11
13

Many journals now allow the use of ChatGPT as a writing tool, similar to grammar checkers or paid services, provided you leave a notice in the manuscript. Many journals provide text snippets for this on their websites.

It depends on the journal. Many require a disclaimer similar to:

ChatGPT has been used in the writing for the purpose of [....]. The authors take full responsibility for the output.

With regards to your co-author, the main problems are:

  • lack of proofreading, despite ChatGPT being very error-prone at this point
  • no concern that ChatGPT's writing style, while grammatically correct, is very unnatural and sterile
  • altogether, a lack of effort in the project

Your co-author is too lazy to put their own time into the project and too dumb to use ChatGPT appropriately, thus having a negative impact on the project. Don't work with him or her.

Life advice: whoever misbehaves once will misbehave again, and you should axe them as soon as you can.

Ambicion
  • Comments have been moved to chat; please do not continue the discussion here. Before posting a comment below this one, please review the purposes of comments. Comments that do not request clarification or suggest improvements usually belong as an answer, on [meta], or in [chat]. Comments continuing discussion may be removed. – cag51 Aug 03 '23 at 21:55
8

The title ("ethical") and the question are slightly divergent.

Personally, I think the issue is not so much ethics, as long as the source (ChatGPT) is mentioned. However, given that the latter, in turn, does not cite its sources, the acceptable ethics around the use of the bot are clearly still evolving.

"I could not really define it" - that, in my opinion, is the real crux of the problem. ChatGPT generates a strangely synthetic text, a bit too smooth to be true, while possibly introducing hard-to-find errors.

Now, I personally very much dislike having my text substantially altered without good reason, because while writing it, I form a model of what I want to say; if it is well structured, then when revising for errors I only need to fix the minor issues that remain. A complete rewrite forces me to re-read the text as if it were new, as if it came from a third party; in other words, it is possibly even more work than writing it myself. Of course, when you collaborate, that is sometimes what you have to do, but when it is text that you wrote yourself and it was your duty to write that section, that is a real nuisance.

I usually also will not rewrite other people's sections. I will comment and make suggestions, but try not to reformulate complete sections (maybe individual sentences, but not the full structure of thinking).

Perhaps the best analogy is code. It is easier to understand one's own code and why it works (or why it may fail) than to try to understand someone else's. Sure, someone else may spot subtle bugs etc., but the overall structure is - in general - best understood by the author, and is best not changed without consulting them.

In short, your gut feeling is perfectly justified. The co-author took a text you spent time and effort making understandable and mangled it through an opaque process. Now they dump on you the effort of making sure that this mangled result is correct, while ripping out all the mental scaffolding that you built while writing. This is not on. It is like auto-reorganizing your desktop without asking you and without good reason.

This is unethical simply because it creates unnecessary work for you. It is just not good collegial behaviour, and it shirks their duty to the quality, accuracy, reliability and correctness of the results.

How to communicate: tell your colleague that you spent a lot of time getting your contribution to the paper right, and do not have the capacity to redo the work effectively from scratch by trying to sort out all the mistakes that the bot introduced unnecessarily. If they decide to write their part via the bot, you need to read it as if it came from a third party and check it meticulously for errors. That they expect a line-by-line pointing out of errors is a sign that they do not take your time seriously, or that they do not understand how the bot works. You need to decide whether you push back to remove the errors or, after the second "review" round, say, "now you fix this". I have done that in code development - after having been told for the 4th or 5th time that my library contained the bug in a piece of software we developed, and it didn't, I told the colleague, "no, the bug is in yours, please fix it".

This is not really OK, and you can warn them that you will have to declare the ChatGPT contribution (you may be found out anyway - there are checkers for this now, so hiding that fact is not an option, in case they should contemplate it), which may affect the paper's acceptance.

If they dismiss your issues, you need to consider the option of splitting the paper if that is at all possible.

Captain Emacs
4

There are already several good answers regarding whether your concerns about your co-author's behavior are valid (they are), so I will focus on the second part of the question:

[...] how is it best to communicate them to the co-author?

Your co-author used ChatGPT without telling you, later admitted to using it after you asked them about it, and yet still seems to be under the impression that lightly edited AI-generated documents are indistinguishable from quality writing by a human. They are not: the fact that you took sufficient issue with the writing that it prompted a discussion about your collaborator's use of AI tools is strong evidence to the contrary, and something I would emphasize. If you found the writing bad, other readers are likely to find the writing bad. If your collaborator cannot see the difference, that is a failure on their part, and a sign they should definitely not use ChatGPT in the way they do, because they currently do not have the skills to transform the output of ChatGPT to the point where it would be considered good academic writing.

Your co-author might then say that of course, ChatGPT's output needs some additional polishing to meet the quality standards of good published research and properly reflect the ideas of the human authors, and it is merely a tool to get a quick draft and/or some ideas. There, I would make the following points:

  • Some people might get to a good document faster by editing a first draft generated by ChatGPT; others might find it slow and painful work to modify the output of a Large Language Model to the point that it's no longer the AI's writing. Even if one thinks ChatGPT is a great help in their writing, one cannot expect that to be the case for other people, and should ponder whether that's how their co-authors work best and whether they're wasting their co-authors' time.
  • Hiding how a document was generated is bad behavior in a collaboration. In the same way that one might not spend as much time checking grammar if one's co-author is a native English speaker, one does not proofread an AI-generated document in the same manner as one written by a human. In other, more time-constrained situations, you could have just blindly trusted that your collaborator did a good job, and ended up sending a bad manuscript to an editor. Your document might also get flagged for plagiarism because ChatGPT gave similar-looking text to several of its users, or copied material almost verbatim from the internet.

Do: ask ChatGPT to find grammar mistakes in a text, or to suggest synonyms and alternative ways to convey some ideas, and use its output to improve your writing. Don't: ask it to produce several pages of output at once, edit 10% of it, and send that to your collaborator without disclosing that's how the document was produced.

A.N.
1

Since it feels like an AI-generated text and contains obvious errors, it suggests to me that the co-author just copy-pasted the ChatGPT output, which seems like a terrible practice. From what you wrote, it seems that they might have even copy-pasted the whole document. That honestly seems unethical to me, as it suggests that they would have tried to publish something that they are aware might not be true or might have been plagiarized. So I would say that your concerns are definitely valid, and if I were you, I would not cooperate with them ever again. As to how to communicate it... The only thing I can think of that might end in a positive outcome is to find a native English-speaking person who can identify the obviously AI-generated parts and can verbalize why they feel off. Otherwise, I am afraid you will have to either bite the bullet in some way, or be very blunt with the co-author and force them to put the effort in.

That being said, I find the opposition in the answers/comments to ANY use of ChatGPT for academic writing bizarre. People seem to act as if you have to keep what ChatGPT generates exactly the way it was generated, even if it is wrong or feels wrong.

kejtos
-1

Using GPT to write papers is completely unacceptable. This doesn't mean using it at all is bad - I use GPT literally every day for things like Python questions, debugging, or translating code to do a certain thing.

However, every single word in my papers is written by me or by coworkers. So, you're totally correct to be upset with them, that's NOT how this works. Every word on the page needs to be your own, not GPT's.

Edit: because passing another entity's words as your own is plagiarism.

Jared Greathouse
  • 2
    This answer doesn't seem to contain any reasoning, just personal opinion. – Sneftel Jul 17 '23 at 11:18
  • I disagree. @Sneftel either way, the reason is that it's plagiarism: passing another entity's words off as your own. If a student uses GPT to write large swaths of a paper, it ceases to be their own contribution. This isn't "opinion", this is just how plagiarism works. – Jared Greathouse Jul 17 '23 at 16:00
  • So in your opinion, if the author acknowledges the use of ChatGPT, there's no problem because there's no plagiarism? – Sneftel Jul 17 '23 at 16:35
  • 1
    I strongly agree with this answer... :) Especially the wording "passing [off] another entity's words as your own is plagiarism" :) – paul garrett Jul 17 '23 at 17:05
  • @Sneftel No, you seem to have missed my point about "large swaths". As academics, we quote and cite things all the time, sometimes at great length. However, at some point this has a bearing on the originality of one's own work. For example, if a history student literally quoted books for 90 percent (say) of their term paper, this is plagiarism, even if they cite everything correctly. In other words, they are not mutually exclusive ideas. A work can be unoriginal but not plagiarized, can be both, or can be one or the other. Specifics matter. – Jared Greathouse Jul 17 '23 at 19:05
-1

In my opinion, the use of GPT is scientific misconduct. The core of science is citing the source of information correctly, so that it can be traced back and put into context - and so that errors can be found.

As of now, if you let GPT make statements on your behalf, there is no way for you to figure out (and thus you cannot cite correctly) whether a certain statement or fact is actually derived from a specific reference (even if you ask GPT to provide these) or whether it is merely a reflection of what ChatGPT typically finds referenced in statements on that topic. This means that, at its core, you make it completely impossible to trace back "your" logic and reasoning to its original sources.

As a side effect, there is no way for your coauthor to tell (without reading the citations) whether a referenced paper actually says what is implicitly or explicitly stated, or whether GPT just found it typical to mention that one.

My recommendation: revert your paper to the state in which you gave it to your coauthor, go to the supervisor/principal investigator or person representing the project, and ask them about rewriting the paper without relying on contributions from said coauthor. If the coauthor is part of an institution, address the issue to the appropriate office there.

Sascha
  • 2
    Neither myself nor my co-author are at the stage of our career when we have supervisors, but your answer might be helpful for a PhD student in a similar situation. – Dmitry Savostyanov Jul 16 '23 at 18:44
    For most topics, the risk that GPT generates enough text that is a copy of a source to count as plagiarism is about the same as the risk of you generating a text that looks like a copy of a source and counts as plagiarism. If a concept summary is complex enough to need a source for the underlying concept, you need to add that manually if you write the text yourself as well. Can you back up that general statement that GPT specifically comes with a higher risk than a text you write from memory with any references? ^^ – Frank Hopkins Aug 04 '23 at 23:35
  • @FrankHopkins: If you manage your sources properly, you can assure that. And yes, i would expect that i manage sources relevant to a central argument in my paper carefully. – Sascha Aug 05 '23 at 08:08
  • @Sascha if you do that, then you can insert it into the relevant generated parts as well. My point here is that there is a difference between unsupervised AI generated stuff (directly submit what you have an AI generate) vs. supervised generated text (you use AI as a tool for e.g. a draft of a section). Your answer doesn't distinguish between how you use AI based tools, but is universal, so I asked for something backing up those arguments in their generality or alternatively putting them into some more specific context. – Frank Hopkins Aug 05 '23 at 17:11
  • @FrankHopkins: The question was about the use of ChatGPT. Obviously a careful redaction process reduces the risk of accidentally making statements without attribution to the right sources. Investigating why ChatGPT made a certain statement is impossible, since ChatGPT (by its own statement) has no direct input from peer-reviewed articles. This means that it's not clear what a statement like "qubit type x is known for its long decoherence time" is based upon. And for me, the correct scientific way is not to later identify a potential (most likely) reference, but the one which the author actually used. – Sascha Aug 06 '23 at 10:16
  • @Sascha okay, to me it doesn't matter whether you can ask ChatGPT where its quotes come from; it is your text in the end, and if you're a domain expert you can verify and enhance it. So if the author had a single source to base their knowledge on, they can add it, and if they didn't, they might find their primary source now that they learned something (though ideally at that stage they should know the material about which they ask the AI to write, but hey, different people, different approaches). Anyway, maybe add that reasoning to the answer. I personally think it's logically flawed/assumes – Frank Hopkins Aug 06 '23 at 15:23
  • @Sascha a certain usage and ideal way to work that isn't necessarily universal, but it would imho make the answer more clear in general. Especially the first sentence is very universal and to me the rest does not seem to back up that universal statement, so maybe you can add parts of the discussion here to set that in context. Anyway, have a neat day. – Frank Hopkins Aug 06 '23 at 15:25
-1

I would argue that using ChatGPT is unethical (plagiarism) or inaccurate (often just lies). I think if ChatGPT is used it definitely has to be cited.

bribina
-5

I think it is only a matter of time before AI becomes dominant in a number of tasks that we have been doing manually up to now. I think that we should embrace this rather than fight it. There are ways to use ChatGPT to our advantage and to produce quick, efficient results - and all of this done ethically.

My own approach is to start with my draft text, which could sometimes be a mix of sentences and bullet points, and then ask ChatGPT to rewrite it as a cohesive section without introducing any other ideas.

So the idea is still yours, but its expression is composed by an AI tool. Surely that is not much different from using a paid editorial service that polishes your draft writing?

Trunk
  • 1
    -1 for "it's time for AI to be dominant" which contradicts your argument that it should be used as a tool or unpaid service. The ideas may be the author's...but I find GPT Chat does heavily reword expressions and phrases which are not idiomatic, and by doing so, the author's creativity may be lost. – Mari-Lou A Jul 19 '23 at 11:37
  • I edited the existing post. This is not to say that I entirely agree with it, of course. I accept that if ChatGPT is only "polishing" the narrative then its contribution is not germane to the paper. All I would say on this is that it is wise to learn how to write clearly and concisely as soon as possible. The exercise of writing helps with oral presentations too. – Trunk Jul 19 '23 at 17:10
-7

I don't see any ethical issues in using ChatGPT or some other automated system to help write a paper. Just make sure that the publication target allows it, e.g. see https://academia.stackexchange.com/a/193049/452, and if you generate large chunks you may also want to check for plagiarism.

If you are unhappy about your coauthor's work quality, that's a different issue.

Franck Dernoncourt
  • 5
    ChatGPT is a great tool, but it only reproduces existing content or combines it in an unvalidated manner: it should, hence, generally be avoided when writing original articles. If not scientists, who else will in the future deliver original human content for tools such as ChatGPT? So I'm glad to see that journals and (in the meanwhile) many other organisations are beginning to ban it and related tools. However, it should be said that outside science ChatGPT will become incredibly useful. But within science, ChatGPT must be very harshly regulated to the ultimate utmost ... by any means. – mfg Jul 15 '23 at 18:30
  • @Mario the human validates. Introduction and related work often don't have much novel content. – Franck Dernoncourt Jul 15 '23 at 18:34
  • 2
    Difficult point, perhaps. In my field, CS, an introduction is often quite concise and actually quite rapidly written. Perhaps in other fields it might help; in my context, not so much I guess. Anyway, if you start allowing it here and there, where do you stop? My concern can be summarised by the term "contamination". Contaminated content loses "truth" value if you will, much in the sense of a gain in entropy, which usually is not so cool. – mfg Jul 15 '23 at 18:38
  • 1
    @Mario This "contamination" idea seems based on a weird assumption of what it would be used for. you don't let it generate science, you let it formulate your science and use it to help express yourself faster. – Frank Hopkins Jul 15 '23 at 20:16
  • Btw., especially in CS I would feel it could be very helpful, e.g. if you want to generate an example program, or structure something with a bit more complex LaTeX, you don't need to know how to do so completely yourself or look it up - you generate something with the AI and then adjust it where needed. – Frank Hopkins Jul 15 '23 at 20:22
  • 1
    @Mario So AI-generated articles on the ethics of using AI to write articles would be fine? Science journals aren't the only ones who expect original content. I don't know why you'd think original content creation was peculiar to only scientific disciplines. Even outside academia, do we really not care about originality in novels, say? Or poetry? – cfr Jul 16 '23 at 03:09
  • 1
    Ok, @FrankHopkins, I appreciate your points. In my own field, generating and formulating science are not so well separable. I'm doing logic and maths stuff, so the main method of generating science in my context is writing up my thoughts. And it seems vastly unnatural to me to let a tool such as ChatGPT interfere with that quite inseparable process. However, using ChatGPT like a pocket calculator outside my research seems highly useful. Anyway, perhaps I'm speaking for a quite narrow group of people here. – mfg Jul 16 '23 at 08:54
  • 2
    Dear @cfr, it was not my intention to exclude creative domains like the ones you mentioned. I fully support the regulation of ChatGPT there as well, in principle, whenever it is important (for whatever reason) that one can rely on the artefacts under consideration being actually generated by humans. – mfg Jul 16 '23 at 08:57
    @cfr It's weird to automatically assume "written by AI" = no novelty. You can give an AI engine bullet points of what your article should say, all of which are novel. You can create a story with characters that did not exist before and let AI hash out the textual details. That is novelty. Now, ideally - with the current level of AI - the text is curated too and just taken as a first draft, because it will likely read... dry, in my experience, but this equation of "man-written = novelty, AI-generated text = no novelty in what the text says" seems totally illogical to me. Man still drives the content. – Frank Hopkins Jul 16 '23 at 17:57
  • 1
    @Mario I also totally agree that there are areas/tasks where it's a totally misplaced tool, especially in its current form. There is currently from my pov just two a bit extreme sides, one that wants to apply shiny new tool to everything and one that wants to see it banned from everything and talks it down all the time. Both seem to fuel each other, e.g. because a lot of the hypers don't really understand its limits and the other way around people jump on examples where it produced "crap" and overgeneralize. My point is just that the ethicality cannot be judged that universally that easily. – Frank Hopkins Jul 18 '23 at 20:20
  • 1
    @Mario + I agree there is risk of misuse, but, perhaps that also helps to improve actual quality control^^ From my pov, it does not make sense to prohibit tools in such a general way in this context, as it typically depends on how they are used regarding whether they produce something of scientific value. I'd rather prefer to have clear quality requirements/checks independent on the tooling used to generate the paper/contribution (and yes, a paper generated by AI out of thin air should fail those and be ethically shunned, for quality reasons). Anyway, guess we're past improving the answer^^ – Frank Hopkins Jul 18 '23 at 20:26
  • Fully agreed, Frank. Reading your comments, I see we are pretty much on the same side. It'll have to be seen how regulation and quality control can play together fruitfully. – mfg Jul 20 '23 at 09:36