Over time, I started suspecting: are those reports and URLs fake, i.e., generated by the AI?
Examples of the references produced by the AI:

4. Mental Health Foundation (2015). Why do some people not get the help they need? [https://www.mentalhealth.org.uk/publications/why-do-some-people-not-get-help-they-need](https://www.mentalhealth.org.uk/publications/why-do-some-people-not-get-help-they-need)
5. National Alliance on Mental Illness (2020). Mental health conditions & stigmas. [https://www.nami.org/About-Mental-Illness/Mental-Health-Conditions](https://www.nami.org/About-Mental-Illness/Mental-Health-Conditions)
6. Substance Abuse and Mental Health Services Administration (2019). Mental health services in the United States. [https://www.samhsa.gov/find-help/national-helpline](https://www.samhsa.gov/find-help/national-helpline)
7. National Institutes of Health (NIH). "The Huppert's Mental Health Spectrum: Population Distributions and Associated Factors". [https://pubmed.ncbi.nlm.nih.gov/31407549/](https://pubmed.ncbi.nlm.nih.gov/31407549/)
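One quick sanity check for suspect citations is to see whether the URLs even resolve. Here is a minimal sketch using only Python's standard library (`urllib`); the function name and user-agent string are my own. Note the limits of the check: a 404 or DNS failure is a strong hint of a hallucinated reference, but a 200 only proves the page exists, not that it says what the citation claims — models often attach real URLs to invented titles.

```python
from urllib.request import Request, urlopen
from urllib.error import URLError, HTTPError

def head_status(url: str, timeout: float = 10.0):
    """Return the HTTP status code for `url`, or None if unreachable.

    None / 404 suggests a fabricated citation; 200 only means the page
    exists, not that it matches the claimed title or content.
    """
    try:
        # Request() itself raises ValueError on malformed URLs,
        # so it must sit inside the try block.
        req = Request(url, method="HEAD",
                      headers={"User-Agent": "ref-check/0.1"})
        with urlopen(req, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code            # server answered, e.g. 404
    except (URLError, ValueError, OSError):
        return None              # DNS failure, timeout, malformed URL
```

Calling `head_status(...)` on each cited URL and flagging anything that returns `None` or a 4xx code is a cheap first pass before manually verifying that the page actually matches the cited title and year.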
An LLM is not a search engine, a database, or a knowledge base. An LLM generates plausible-sounding text based on the statistical distribution of its training data. There is no constraint that anything it spits out is factually accurate. Why is this so hard to understand?