
AI tools: AI for Research

This guide looks at generative AI tools - how to use them and considerations around their use.


Using generative AI for research

Before deciding to use a tool like ChatGPT in your research, check with your research supervisor to make sure that it’s permissible, and that your intended use won’t breach the University’s research codes and policies or any external ones.

In addition, you must ensure that any use of generative AI tools does not breach the relevant funder’s policies. Check the policies from the Australian Research Council, and the National Health and Medical Research Council, on the use of generative AI in their funded research.

Just because information is available on the Internet does not mean that it’s free from copyright, or that it has been shared with the owner’s consent. It’s also almost impossible to know where the information came from, and it could contain inherent biases which may then be carried into a generative AI tool’s responses to your prompts. Generative AI can also produce false information, known as “hallucination”: the tool presents fabricated content as though it were real.

For these reasons, it’s important to check the outputs from generative AI to ensure you aren’t breaching copyright, consent, or research integrity – and to ensure that your research output is not flawed by bias or inaccuracy. For more information, see Issues and Considerations.

If you are permitted to use generative AI tools for any part of your research, you must acknowledge this openly in your research documentation. Use the following links to check different publishers’ statements on the use of generative AI tools to produce content for publication:

 

If you are permitted to use a generative AI tool for any part of your research, keep these considerations in mind:

Read the response with a critical eye. Does it agree with what you’ve already learned about your research topic? How does it compare with other authoritative research? Does it contain any kind of bias, or unexpected information?

Besides complying with the various codes and guidelines mentioned elsewhere, consider what will happen to any data you input into generative AI. DO NOT input any data which may breach privacy legislation.

There may also be issues around copyright and ownership relating to some uses of AI tools - see our Issues and considerations page for more information.

Has the AI tool been designed for public use? Is it designed to generate complex data sets for research, or for analysis? Can you find any reviews of its performance, usefulness or relevance?

This post from the London School of Economics and Political Science (LSE) contains strategies which authors can use to preserve their anonymity for the peer review process, including:

  • Take care when writing the abstract and introduction, as they reflect the author’s research domain and creative identity.
  • Omit as many self-citations as possible when submitting content to a double-blind review.
  • Include citations from lesser-known research to increase your citation diversity.

Where publishers allow use of generative AI tools with research publications, disclosure of your use may be a requirement for transparency (see the Read publisher statements tab for more).

As this recent example from the journal Surfaces and Interfaces shows, issues can creep in when care isn't taken - in this case, part of the conversation with the AI tool was included in the final product:

Excerpt from the article showing AI conversation text left in the published work: "Certainly, here is a possible introduction for your topic:" (screen-captured 18/03/2024, our highlight).

The publisher's policy in this space is clear - AI tools "should only be used to improve readability and language of the work", disclosure of use is required, and "all work should be reviewed and edited carefully".

As Technology Networks - Informatics reports, the authors of this article failed to include a disclosure statement in their manuscript, and the publisher has posted a response on social media stating that it is investigating the paper.

Scholarly Kitchen has posted an update, including the results of searching for other articles with similar issues.

Some journals have already retracted articles that were found to not meet their editorial and scientific standards.

This situation certainly highlights the importance of proof-reading all outputs from generative AI that you wish to use in publications.

For more information:

 

Note that the University of Newcastle does not endorse the use of these tools. See the AI at Newcastle page for the University’s position on the use of generative AI tools.

Here are some free generative AI tools for research you might like to try, keeping in mind the points listed under Stop and Check in this box:

How can we improve?

We'd love to hear your feedback on this guide. If you think changes could be made, or more information added, please go to our feedback page and let us know.