HACKER Q&A
📣 warrenm

If ChatGPT cited its sources, might using its output be ethically viable?


As a follow-on to https://news.ycombinator.com/item?id=34583343

ChatGPT, like any LLM, is trained on "real-world" sources.

If it were to cite the sources it used to generate its output, would you find that output more, less, or equally ethically viable to use in support of (or against) some position?

For example, if two ChatGPT outputs were generated from the prompts "dubious scientific support for climate change" and "scientific support for climate change", and both cited their sources, would you be more inclined to agree with (or argue against) one over the other?

Could such a citation-attaching generator be a reasonable means of collating multiple sources into a cohesive whole (a la metastudies)?