Before deciding to use a tool like ChatGPT in your research, check with your research supervisor (or, for staff, your School's Research Lead) to make sure that it's permissible, and that your intended use won't breach the University's and external research codes and policies. You may also need to consider any risks to research integrity that using AI tools may introduce.
For more information from the University of Newcastle, see Generative AI in Research - Guidance for Researchers (for University of Newcastle staff and researchers only).
In addition, you must ensure that any use of generative AI tools does not breach the relevant funder's policies. Check the policies from the Australian Research Council, the World Association of Medical Editors (WAME), and the National Health and Medical Research Council on the use of generative AI in the research they fund or oversee.
Just because information is available on the Internet does not mean that it's free from copyright, or that it has been shared with the owner's consent. It's also almost impossible to know where the information came from, and it could contain inherent biases, which may then be carried into a generative AI tool's responses to your prompts. Generative AI can also produce false information, known as "hallucinations": content the tool presents as real, such as facts, quotes, or citations, that is partly or wholly invented.
For these reasons, it’s important to check the outputs from generative AI to ensure you aren’t breaching copyright, consent, or research integrity – and to ensure that your research output is not flawed by bias or inaccuracy. For more information, see Issues and Considerations.
If you are permitted to use generative AI tools for any part of your research, you must acknowledge this openly in your research documentation. Use the following links to check different publishers’ statements on the use of generative AI tools to produce content for publication:
If you are permitted to use a generative AI tool for any part of your research, keep these considerations in mind:
Read the response with a critical eye. Does it agree with what you’ve already learned about your research topic? How does it compare with other authoritative research? Does it contain any kind of bias, or unexpected information?
Besides complying with the various codes and guidelines mentioned elsewhere, consider what will happen to any data you input into generative AI. DO NOT input any data which may breach privacy legislation.
There may also be issues around copyright and ownership relating to some uses of AI tools - see our Issues and Considerations page for more information.
Has the AI tool been designed for public use? Is it designed to generate complex data sets for research, or for analysis? Can you find any reviews of its performance, usefulness or relevance?
This post from the London School of Economics and Political Science (LSE) contains strategies which authors can use to preserve their anonymity during the peer review process.
Where publishers allow use of generative AI tools with research publications, disclosure of your use may be a requirement for transparency (see the Read publisher statements tab for more).
As this recent example from the journal Surfaces and Interfaces shows, issues can creep in when care isn't taken - in this case, part of the conversation with the AI tool was included in the final product:
Screenshot captured 18/03/2024; highlighting ours.
The publisher's policy in this space is clear - AI tools "should only be used to improve readability and language of the work", disclosure of use is required, and "all work should be reviewed and edited carefully".
As Technology Networks - Informatics reports, the authors of this article failed to include a disclosure statement in their manuscript and the publisher has posted a response on social media stating that they are investigating the paper.
Scholarly Kitchen has posted an update, including the results of searching for other articles with similar issues.
Some journals have already retracted articles found not to meet their editorial and scientific standards.
This situation certainly highlights the importance of proof-reading all outputs from generative AI that you wish to use in publications.
For more information:
Using Generative AI for Literature Reviews
Using some generative AI tools to source citations for literature reviews can be risky. While generative AI is a powerful tool, it comes with its own set of challenges: it can sometimes 'hallucinate', producing information that is irrelevant or simply false. This can lead to inaccuracies, and even to citations of sources that don't exist.
Using AI in academic research means committing firmly to transparency and accountability to keep your scholarly work legitimate (and, for funded researchers, to comply with grant and publisher requirements).
So, while generative AI can be a game-changer, it’s crucial to use it wisely and stick to ethical guidelines to ensure your research stays reliable and credible.
Need help with finding research via AI?
The next tab provides some AI tool options that can assist in finding research, such as Research Rabbit and Semantic Scholar, that work as 'connectors' and don't rely on generated text.
If an AI tool's answer includes citations that you can't find, try searching directly for the cited journal titles to locate the articles - it's possible they don't exist. One quick programmatic way to check is sketched below.
Still can't locate something that seems important, or just stuck? You can book in to speak with one of our librarians for help - there are online and in-person options for both students and researchers.
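If you need to check a longer list of AI-supplied citations, a scholarly metadata index such as Crossref can speed things up. The following is a minimal sketch in Python (not an endorsed University workflow) that queries the free Crossref REST API for published works matching a cited title; the citation searched for is a made-up example.

```python
# A minimal sketch for checking whether an AI-supplied citation exists,
# using the free Crossref REST API (https://api.crossref.org).
# The citation below is a hypothetical example.
# Requires the third-party `requests` package: pip install requests
import requests


def find_citation(title: str, rows: int = 5) -> list[dict]:
    """Search Crossref for published works matching a cited title."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=30,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    # Keep just the fields useful for eyeballing a match.
    return [
        {
            "title": (item.get("title") or ["(untitled)"])[0],
            "journal": (item.get("container-title") or [""])[0],
            "doi": item.get("DOI", ""),
        }
        for item in items
    ]


if __name__ == "__main__":
    # Hypothetical citation returned by a generative AI tool:
    matches = find_citation("Machine learning approaches to coastal erosion modelling")
    if matches:
        for m in matches:
            print(f"{m['title']} | {m['journal']} | https://doi.org/{m['doi']}")
    else:
        print("No matches found - the citation may be hallucinated.")
```

A fuzzy title match doesn't guarantee the citation is accurate (authors, year, and journal details should still be checked), and no match doesn't always prove a hallucination - but it's a useful first pass before asking a librarian for help.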
Note that the University of Newcastle does not endorse the use of these tools. See the AI at Newcastle page for the University’s position on the use of generative AI tools.
Here are some free generative AI tools for research you might like to try, keeping in mind the points listed under Stop and Check in this box:
We'd love to hear your feedback on this guide. If you think changes could be made, or more information added, please go to our feedback page and let us know.