New principles aim to assist AU staff in navigating AI tools: “Don’t use it for confidential data”
AU has adopted seven principles for using generative artificial intelligence (GAI) to help staff and managers use these tools responsibly. The guidelines emphasise the importance of disclosing any use of GAI and encourage staff to use standard search engines when possible, as GAI tools use significantly more power.
The seven principles for staff and managers:
1. Don’t use GAI for confidential or sensitive data. You may not use GAI for anything involving trade secrets, confidential or sensitive data, or copyrighted material.
2. Remember that you are responsible for the quality of the content you use or share.
3. Credit your use of GAI so that others know how and why you have used it. All GAI-generated responses must be accompanied by a disclaimer stating the response is auto-generated and may contain mistakes.
4. Be critical when using GAI as a source of information – it can make mistakes. GAI may ‘hallucinate’ sources or facts that aren’t correct just to give you an answer.
5. Be sensitive to bias in GAI-generated content – it often reproduces the biases in the data it’s trained on. GAI-generated content can inadvertently reinforce existing biases and power imbalances, because the model selects the most likely output based on data that encodes majority perceptions, for example in questions regarding gender, race or other demographic categories.
6. Only use GAI when standard search engines won’t do – it’s better for the climate. GAI applications use significantly more power than other online search tools and apps.
7. Coordinate purchases of licenses for GAI applications with AU IT and AU Finance – to keep costs down and keep our data safe.
Get the full version of the principles here.
Source: Staff Service at AU
Disclaimer: This text was translated using machine translation / AI and post-edited by Maria Nielsen Pedersen
It’s no longer just the students at Aarhus University who have guidelines for using generative AI (GAI) tools like ChatGPT. AU has just introduced a set of guidelines for staff and managers on the use of GAI. These seven principles are designed to help staff navigate GAI tools with a focus on safety, data security and critical evaluation.
“AU employees should never use GAI for confidential or sensitive data, and the accuracy of GAI-generated content should always be confirmed. What’s more, these apps should never be used for operations that ordinary search engines can do just as well with significantly lower power consumption”, says Peter Bruun Nielsen, deputy director for AU IT, in a staff announcement.
Peter Bruun Nielsen explains that the principles have been designed to strike a balance between curiosity and awareness of potential risks. The principles are not an exhaustive, one-size-fits-all checklist and must be supplemented by common sense and critical judgement, says the deputy director for AU IT. Because GAI tools are developing quickly, this is an area that AU will continue to explore and work with.
As an employee – or student – at AU, you have access to Microsoft Copilot, which is Microsoft's equivalent to ChatGPT. The tool can be accessed with your AU login credentials in the Microsoft Edge browser.
“The data you upload/type into Microsoft Copilot will not be saved and will not be used to train the model. Nonetheless, you should never put confidential or sensitive personal information into Copilot or other GAI applications,” AU advises.
The announcement notes that the seven GAI principles were approved by the administration’s leadership team and that they apply only to the use of standard GAI applications in day-to-day work in a non-research context, where other considerations may need to be taken into account. Later this year, the administration’s leadership team will announce concrete recommendations for the use of GAI applications at AU based on a risk assessment.