Think of entering information into most genAI tools as posting it on a public website. Anything you share could end up being used as training data, which could then appear in the responses the tool gives to other users, even outside the University.
As with any online service provider that processes your information, anything you share with genAI platforms (questions, prompts, or attachments) is typically stored by the third-party service provider. This means your information could be reused or plagiarised by the tool, and it is also exposed to cyberattacks and other security threats.
Additionally, information provided to generative AI services may be accessible to the service provider, and is likely to be used in some way, particularly when the service is free. When using these publicly available or free tools, assume that your prompts and conversations are not private and that any information you provide could be used or made public later.
Do not enter confidential or sensitive information, proprietary information, or personal data into publicly available genAI platforms. This includes your assignments: you upload them to non-secure systems at your own risk.
Copilot is currently the only genAI tool endorsed by the University, in part due to the data security it provides. Our enterprise access to Copilot is free for staff and students, protects your data, and does not use your prompts to train the language model or system. This provides greater security if you wish to upload assignments and similar material. Check for the shield icon in the top-right of the window.
The advice from DTS - Responsible use of AI - reinforces the importance of data security.
Our Copyright for generative AI page also has more information on the importance of data security and the use of Copilot to assist with copyright compliance.
The Conversation, June 2025 - AI tools collect and store data about you from all your devices – here’s how to be aware of what you’re revealing
Copyright and ownership are often forgotten in the excitement around shiny new tools, but they are key considerations in the ethical use of AI. The issues usually boil down to two things: inputs and outputs.
There are two types of inputs for genAI: (1) the training data or model that underpins the AI tools, and (2) the prompts that users enter, which can include uploaded or copied material.
Outputs are what the AI tool creates from your prompts, and there are open questions about who may own these outputs. And that's before considering the use of third-party materials. The current opinion is that, because the genAI algorithm is doing the 'creative' work, there is no human author, so outputs may not be copyrightable in Australia. Where AI is used as a tool to assist, rather than to create, the work is more likely to attract copyright.
To learn more about copyright and AI, see our Copyright for generative AI page on the Library website. There is further information around using files and other content with genAI on the What can I use? pages in this guide for staff and students (including HDRs).
If the response to your prompts states things as fact, you need to check those assertions and find an authoritative source to support them. GenAI tools generate text from predictive patterns and currently cannot distinguish fact from plausible-sounding fiction.
'Hallucinated' citations and incorrect information
You may have heard that genAI tools can make up, or 'hallucinate', information and references that look real but aren't. To make their answers appear more authoritative, AI tools will often 'mix and match' elements from real experts and publications to present you with a source that doesn't exist. Don't be fooled! If you search for the cited sources through Library Search and get no results, chances are the citation is a hallucination. You can check with friendly library staff to make sure.
AI tools have also been shown to provide problematic or incorrect information, including about people's personal histories, so always try to confirm any information that seems questionable.
Interested in learning more?
Where do genAI tools get their information from? This relates to Reliability, above, but also to academic integrity. Generative AI does not follow academic integrity rules around copying or citing the work of others; in other words, it can plagiarise.
You don’t want plagiarised or misattributed material in your final work, so it's important to check genAI outputs for accuracy.
Think about asking a friend to read over your work and give you feedback on your spelling, sentence structure, etc. Now think about asking them to completely rewrite it for you. One request doesn’t breach academic (or scholarly) integrity, but the other one does. Keep this in mind when using genAI tools.
For students, it's essential to be clear on how your lecturer wants you to use and reference genAI tools; the University has provided guidance for students on how to approach the use of AI. Otherwise, you run the risk of Turnitin flagging your work as AI-generated.
As part of your university journey, you need to learn how to communicate in an academic style suited to your discipline. This essential skill strengthens your critical thinking, evaluation, and analysis, and builds the communication skills you need to interact confidently with subject matter experts. GenAI tools cannot (as yet) reliably do these things for you. For help with these skills, visit the University's Academic Learning Support.
Staff will have varied considerations around course materials, research and publications, and other work outputs. Before you share any work that has been assisted or generated by genAI, check whether your use of the tool(s) has introduced any issues with plagiarism, sources, or attribution.
It is important to remember the concept of 'garbage in, garbage out'. This can often apply to the training data that underpins genAI tools:
AI tools can only reflect the data they have been trained on, much like humans. A visual representation of potential unseen bias, the 'AI bias iceberg', is included below:
[Image credit: N. Hanacek/NIST. Reproduced with permission]
To learn more, watch the videos below:
For those wanting to read more:
A number of ethical issues have been raised around AI tools, including concerns about the data on which they have been trained.
It is also important to consider the data that users need to 'hand over' when creating accounts with AI platforms, and who will have access to that data. Are you comfortable giving your data to a provider just to access an AI tool for an assessment or a work project?
UNESCO have released their recommendations for ethical AI, and the CSIRO have an AI Ethics Framework for those who would like to read more.
Want to learn more about ethics and AI? Try this LinkedIn Learning course:
Generative AI and Ethics - the Urgency of Now from Ethics in the Age of Generative AI by Vilas Dhar
We'd love to hear your feedback on this portal. If you think changes could be made, or more information added, please visit our feedback page.