GenAI University wide policy and practice

Oxford Brookes position on the use of Generative AI in teaching, learning and assessment: Embrace and Adapt

In line with the majority of the sector, Oxford Brookes has taken what JISC would describe as a progressive ‘embrace and adapt’ approach to generative AI. The approach recognises the value of GenAI in developing digital literacy and as a learning tool, and understands that its use cannot be systematically controlled or easily detected. This ‘embrace and adapt’ (JISC, 2023) approach is sympathetic to Oxford Brookes graduate attributes and lends itself to authentic assessment, which is integral to the IDEAS model of curriculum development. Like many in the sector, we have not, at this stage, adopted the Turnitin AI detection tool, due to concerns about false positives.

Currently:

  • Module leaders have the discretion to advise on AI use in the assessment for their modules;
  • Module handbooks should include up-to-date information on using GenAI in research or assessment;
  • Students are asked to declare their use of AI on submission of assessment and check with module leaders before using it, in line with QAA guidance.

Use of GenAI should take account of ethical and data privacy considerations. Any use of GenAI software not supported by the University must be in accordance with the IT Acceptable Use Policy and sanctioned by the relevant authority in the IT Directorate before being introduced (please contact info.sec@brookes.ac.uk to discuss your requirements). 

Note that any unsanctioned software or hardware devices found on the Oxford Brookes University network could expose the University to the risk of a personal data breach or security incident.

To further support this future-facing position, there are three strategic recommendations made by the GenAI Working Group:

  1. Assure academic integrity: 
    Establish a cross-university position on the acceptable use of AI in assessment and embed this in the Assessment and Feedback policy.

  2. Build generative AI literacy:
    Offer academic development for staff and students, raising awareness of generative AI, its appropriate and critical use in teaching, learning and assessment and associated data security and ethical issues.

  3. Remain agile and responsive:
    Be alert to new developments, challenges, risks and opportunities and open to revisiting guidance, policy and practice.

Safe and secure use of AI

AI models and software tools offer exciting time-saving affordances for academic practice and professional service. However, they might store, use or distribute data uploaded to them, which means they may not be safe, secure or GDPR compliant.

When using AI tools, do not upload any sensitive, confidential or protected data.

Ask yourself these guiding questions:

  1. Do I fully understand the data protection and privacy settings on this AI tool? 
  2. What data am I giving them, in my prompts and in what I upload? 
  3. Do I have the right to give it to them? Is it my information, and not someone else's? 
  4. Am I happy for them to store, use and share this data with others? 
  5. Will sharing this data lead to harm or impact on my or someone else's freedoms and rights?

Please see the Guidance for Schools, Programmes and Modules page for information about using Microsoft Copilot and Google Gemini with your Oxford Brookes login.

If you are unsure about the data security of any AI tool you would like to use for Brookes’ academic practice or professional service, contact info.sec@brookes.ac.uk.

The Brookes AI in HE Working Group

The Brookes AI in HE Working Group includes representatives from Oxford Centre for Academic Enhancement and Development (OCAED), Centre for Academic Development (CAD), Faculties, Learning Resources, and the Student Investigation and Resolution Team (SIRT).