
AI tools: Issues and Considerations

This guide looks at generative AI tools - how to use them and considerations around their use.


Issues and considerations for AI tools

If a response to your prompt states something as fact, you need to check that assertion and find an authoritative source to support it. Generative AI tools cannot distinguish between fact and a plausible-sounding prediction of what you might want to hear.

'Hallucinated' citations and incorrect information

You may have heard that tools such as ChatGPT can make up, or 'hallucinate' (as it has become known), references that look real but aren't. To make their answers appear more authoritative, AI tools will often 'mix and match' elements from real experts and publications to provide you with a source that doesn't exist. Don't be fooled! If you try to find the cited sources through Library Search and get no results, chances are the citation is a hallucination. You can check with friendly library staff to make sure.

AI tools have also been shown to provide problematic or incorrect information, including errors in people's personal histories, so always try to confirm any information that seems questionable.

Where do generative AI tools get their information? This relates to Reliability on the previous tab, but also to academic integrity. Generative AI does not follow academic integrity rules around copying or citing the work of others; in other words, it can plagiarise, depending on the size and quality of the dataset on which it was trained. AI systems have even been described as "remix algorithm[s]" that mimic scraped source data to appear creative. A recent report suggested that 60% of GPT-3.5 outputs contained some form of plagiarism, and testing of GPT-4 suggests that, on average, 44% of prompts can lead to reproduction of copyrighted content. You don't want plagiarised material in your final work, and this is as true for image generation as it is for text.

Using generative AI outputs in your own work

Think about asking a friend to read over your answer to an assessment task and give you feedback on your spelling, sentence structure and so on. Now think about asking that friend to completely rewrite the answer for you. One request doesn't breach academic integrity; the other does. Keep this distinction in mind when using tools like ChatGPT.

Make sure you're clear on how your lecturer wants you to use and reference generative AI tools before you use them. The University has provided guidance for students on how to approach the use of AI. Otherwise, you run the risk of Turnitin flagging your work as AI-generated.

As part of your university journey, you need to learn how to communicate in an academic style suited to your discipline. Developing this essential skill strengthens your critical thinking, evaluation and analysis, and builds the communication skills you need to interact confidently with subject matter experts. Generative AI tools cannot do these things for you.

It is important to remember the concept of 'garbage in, garbage out', which often applies to the training data that underpins generative AI tools: much like humans, AI tools can only reflect the data they have been trained on.
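To make this concrete, here is a minimal, illustrative sketch in Python. It is not how any real AI tool works (the corpus, function names and output are invented for this example), but it shows the principle: a toy text generator trained only on a narrow, skewed set of sentences can never produce anything outside those patterns.

import random
from collections import defaultdict

def train_bigram_model(text):
    # Record which word follows each word in the training text.
    model = defaultdict(list)
    words = text.split()
    for current_word, next_word in zip(words, words[1:]):
        model[current_word].append(next_word)
    return model

def generate(model, start_word, length=8):
    # Produce text by sampling only word transitions seen during training.
    word, output = start_word, [start_word]
    for _ in range(length):
        if word not in model:  # the model knows nothing beyond its data
            break
        word = random.choice(model[word])
        output.append(word)
    return " ".join(output)

# A deliberately narrow, one-sided training set (invented for illustration):
corpus = "nurses are women doctors are men nurses are caring doctors are busy"
model = train_bigram_model(corpus)
print(generate(model, "nurses"))  # can only ever echo the skewed patterns above

Real generative AI models are vastly more sophisticated, but the same principle holds: biases, gaps and errors in the training data flow through to the outputs.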

The Australian Human Rights Commission have released a technical paper on addressing algorithmic bias, and the eSafety Commissioner has a position statement for those wanting to read more.

A number of ethical issues have been raised around AI tools, including concerns about the data on which they have been trained.

It is also important to consider the data that users need to 'hand over' when creating accounts with AI tool platforms, and who will have access to that data. Are you comfortable giving your data to a provider just to be able to access an AI tool for an assessment or a work project?

UNESCO have released their recommendations for ethical AI, and the CSIRO have an AI Ethics Framework for those who would like to read more.

Want to learn more about ethics and AI? Try this LinkedIn Learning course: 

Generative AI and Ethics - the Urgency of Now from Ethics in the Age of Generative AI by Vilas Dhar

Copyright and ownership are considerations that are often forgotten in the excitement around shiny new tools. The issues usually boil down to two things: inputs and outputs.

  • Inputs are the training data that underpins the AI tools. Most of this data was scraped from the internet, and in some countries there are questions about whether that was legal. There are also concerns about the potential negative impact on Indigenous Cultural and Intellectual Property (ICIP) that has been scraped from the web.
  • Outputs are what the AI tool creates from your text prompts. There are questions around who may own these outputs:
    • you, as the person submitting the prompt?
    • the AI tool, as the one generating the output?
    • the owners/creators of the AI tool, who own the algorithm doing the creating?

The current view is that because the AI's algorithm is doing the creative work, there is no human author, so outputs may not be copyrightable in Australia.

To learn more about copyright and AI, see our Copyright for creators page.

For a quick read, The Conversation has an interesting piece on ChatGPT and copyright ownership.

How can we improve?

We'd love to hear your feedback on this guide. If you think changes could be made, or more information added, please go to our feedback page and let us know.