Generative AI Security Standard

Overview

The purpose of this standard is to communicate expectations and requirements related to the use of Generative AI throughout the University of Alaska (UA) System, including what data is appropriate to use, the process for selecting and procuring Generative AI tools, and awareness of the associated risks.


Scope

Information Security and Assurance (ISA) Standards are mandatory and apply to the UA System and all users of UA computing resources. This standard supplements and supports Board of Regents Policy & Regulation R02.07. These standards are reviewed and approved by the CIO Management Team (CMT), a system-wide governance group consisting of each university CIO, the System CITO, and the System CISO. Business units maintaining their own security standards should use this standard as a baseline and may add requirements or detail as appropriate for their business needs; however, they may not weaken any individual element of this standard without an approved Information Security Controls Exception.

This standard is periodically reviewed and updated to respond to emerging threats, changes in legal and regulatory requirements, and technological advances.


Definitions

Artificial Intelligence (AI)

The term "AI" refers to computer systems that are capable of performing tasks traditionally associated with human intelligence, such as making predictions, interpreting speech and generating language, recognizing patterns, solving problems, and making decisions.

 

Generative AI (GenAI)

Services or applications that use deep learning models to create new content, including audio, code, images, text, simulations, and videos in response to user prompts.

 

Generative AI model(s)

Computational approaches that learn patterns and structure of input training data and use this in combination with statistical methodologies to generate new data.

 

Generative AI application(s)

Software or services that rely on Generative AI models or services.

 

Human-in-the-Loop (HITL) verification

The practice of engaging humans in AI use and development to provide feedback, correct errors, and validate outputs, helping AI systems learn, improve, and adapt over time.

 

Public Information

Data classified as Public as defined by Board of Regents R02.07.093 that can be freely shared with the public and posted on publicly viewable web pages.

 

Personally Identifiable Information (PII)

Any information that can be used to distinguish or trace an individual's identity, either alone or when combined with other information that is linked or linkable to a specific individual.


Standard

What is Generative AI?

Generative AI is a subset of artificial intelligence that learns from large amounts of data to generate content such as text, images, music, videos, and code, typically in response to specific inputs or prompts. UA supports responsible exploration of Generative AI tools while acknowledging the accompanying risks regarding information security, data privacy, compliance, copyright, data accuracy and reliability, and academic integrity.

 


Risk Awareness

Improper use of Generative AI presents an increased risk of data exposure, copyright infringement, reduced data accuracy or integrity, and bias. University employees and students are responsible for considering and mitigating risks related to the use of Generative AI. Those using such systems must evaluate the intended use within their specific business or application context. Awareness should include, but is not limited to, the following:

  • Personal Information
    • Any personal information (such as name, location, and age) provided to Generative AI can be stored on servers not controlled by UA and could potentially be shared with third parties.
  • Intellectual Property
    • Any copyrighted, proprietary, research, educational, or non-public information provided to Generative AI may be stored on servers not controlled by UA and could potentially be shared with third parties.
  • Bias
    • Generative AI programs can sometimes have biases that make existing inequalities worse, which can harm people or groups. If the AI is trained on biased data, it may produce results that repeat those same biases. Developers and users of Generative AI should be aware of this risk and work to reduce it.
  • Accuracy
    • Generative AI results can sometimes be wrong or unreliable because they are based on data that isn't always clear or easy to check. AI can also "hallucinate," meaning it might create completely made-up information. It's important to carefully check AI results before trusting them.
  • Data Privacy
    • Generative AI may gather information about how users interact with it, sometimes without telling them or asking for permission. This can include conversations, topics discussed, links clicked, screenshots, and data from devices or apps used to access the service. This information might be used to improve AI models, track user behavior, advertise, or for other reasons. It may also be shared with other companies or partners, who may have different privacy and security practices.
  • Data Breaches
    • Since any information provided to Generative AI may be stored without your knowledge, it may also be subject to data breaches and cyber-attacks.
  • Security Vulnerabilities
    • Generative AI may include links to outside websites that could host harmful software or be used to create and share malicious code. It can also be vulnerable to attacks that could reveal personal information or spread false information. Avoid using services that haven't been approved for use by UA, be cautious when clicking links, and make sure you have good antivirus protection.
  • Be Content Conscious
    • Employees and students are responsible for what they discuss with Generative AI. Don't use it for illegal activities, to violate others' rights, or to discuss sensitive topics that could put others' privacy and security at risk. Avoid sharing information that is restricted by law or confidentiality obligations.
  • Read and Understand Terms of Service
    • Before using Generative AI, make sure to read, understand, and follow the platform's terms of service, privacy policies, and any supplemental AI data agreements. Ignoring these terms could lead to legal issues. Also, by agreeing to these terms, the University might lose certain legal protections. UA has a list of vetted Generative AI applications to simplify this process.
  • Attribution and Ownership
    • When using Generative AI, think about how to give proper credit to both the AI and the human creators to ensure everyone鈥檚 contributions are recognized.
  • Keep a Human in the Loop
    • Using AI to collect data or make decisions that affect others can have harmful effects. Always ensure these systems are properly checked, monitored, and controlled before relying on them (an illustrative human-review sketch follows this list).
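
The following is a minimal, illustrative Python sketch of the kind of human-in-the-loop gate described above. The generate() call, prompt text, and console-based review step are assumptions made up for illustration; this is not a UA-endorsed or required implementation.

# Minimal human-in-the-loop (HITL) gate -- illustrative sketch only.
# generate() is a hypothetical stand-in for a call to an approved Generative AI tool.

def generate(prompt: str) -> str:
    # Placeholder for the actual Generative AI call.
    return f"[AI draft for: {prompt}]"

def human_review(draft: str) -> bool:
    # A person inspects the draft for accuracy, bias, and sensitive content
    # before anything is acted on or published.
    answer = input(f"Approve this AI draft?\n{draft}\n[y/N]: ")
    return answer.strip().lower() == "y"

def produce_content(prompt: str) -> str | None:
    draft = generate(prompt)
    if human_review(draft):
        return draft
    return None  # Rejected drafts go back to a person for revision or are discarded.

if __name__ == "__main__":
    result = produce_content("Summarize the meeting notes for a public post.")
    print(result if result is not None else "Draft rejected; human follow-up required.")

In practice the review step would usually be a documented approval workflow rather than a console prompt; the point is that no AI output is relied on until a person has checked it.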

 


Acceptable Use of Generative AI

Any use of Generative AI via platforms, tools, and software must be consistent with the University of Alaska Board of Regents' policies, university regulations, university acceptable use policies and codes of conduct, and applicable state, federal, and international laws and regulations, including those dealing with copyrights and other intellectual property.

  • When using Generative AI, you should not enter data classified as Internal or Restricted (as described by BOR Policy) into Generative AI tools that are not expressly approved for that data classification (an illustrative prompt-screening sketch follows this list). This includes, but is not limited to:
    • Data protected by FERPA, PIPA, HIPAA, GDPR, PIPL, GLBA, PCI-DSS, CJIS, Export Control, or any other applicable regulation.
    • Information required to be protected by contract, research data including, incorporating, or referencing protected human subjects information, banking information, private and protectable personally identifiable information, or any information that is generally not available to parties outside the UA community, such as non-directory listings, minutes from non-confidential meetings, and internal websites.
  • University employees and students who become aware of additional risks, loss, or breach (potential or actual) associated with the use of Generative AI must immediately stop using the technology and contact the OIT Information Security and Assurance Office.
  • University employees and students are prohibited from using Generative AI to reverse engineer, decompile, and/or develop competing AI models except as permitted by law or applicable licensing terms.
  • University employees and students are prohibited from using Generative AI to identify anonymized or pseudonymized research subjects or healthcare patients.
  • All work products developed with Generative AI must provide attribution to creators, including the utilized AI system(s).
  • University employees and students will use a human-in-the-loop approach to guard against Generative AI output that could produce dangerous results or create/reinforce damaging biases. This includes implementing fairness safeguards, monitoring and auditing outcomes, and ensuring transparency with respect to how such systems are used.
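
As noted in the first item of this list, the following Python sketch shows one way a unit might screen prompts for obviously restricted data (for example, SSN-like or card-like numbers) before sending them to a Generative AI tool. The patterns, function names, and error handling are illustrative assumptions only; they do not implement BOR data classification and are not a substitute for reviewing the data you intend to share.

import re

# Hypothetical example patterns that often indicate Internal/Restricted data.
# These are illustrations, not an implementation of BOR data classification.
RESTRICTED_PATTERNS = {
    "SSN-like number": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "Email address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "Card-like number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def screen_prompt(prompt: str) -> list[str]:
    # Return the names of any restricted-data patterns found in the prompt.
    return [name for name, rx in RESTRICTED_PATTERNS.items() if rx.search(prompt)]

def submit_to_genai(prompt: str) -> None:
    findings = screen_prompt(prompt)
    if findings:
        # Block the request and route it to a human reviewer instead.
        raise ValueError(f"Prompt blocked; possible restricted data: {findings}")
    print("Prompt passed screening; send it to an approved Generative AI tool.")

if __name__ == "__main__":
    submit_to_genai("Draft a welcome message for new student orientation.")

A screen like this is a convenience guardrail, not an approved control; the acceptable use requirements above still apply to anything that passes it.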




Abide by Academic Policy

Review and understand your university's policies on academic integrity. Instructors should be clear with students about whether, and how, Generative AI may be used in their coursework. Students are encouraged to ask their instructors for guidance or clarification if needed.

 


Violations and Exceptions

To meet the University's requirements under Board of Regents Policy & Regulation R02.07.060 to secure University Information Resources, systems and services that fail to abide by approved information security controls may be subject to the implementation of compensating controls to effectively manage risk, up to and including disconnection from the UA network or blocking of traffic to/from untrusted networks.

UA employees, students, and other affiliates who attempt to circumvent an approved information security control may be subject to sanctions or administrative action depending on their role and the nature of the violation, which:

  • may result in a reduction or loss of access privileges, or the imposition of other restrictions or conditions on access privileges;
  • may subject employees to disciplinary action, up to and including termination; 
  • may subject students to disciplinary action including expulsion according to the Student Code of Conduct procedures; and 
  • may also subject violators to criminal prosecution. 

Requesting an Exception

The process for requesting exceptions to this or other IT Security Standards is outlined in the Information Security Controls Standard.

 


Implementation

OIT Information Security and Assurance is responsible for the implementation, maintenance and interpretation of this IT Standard.

Related Standards

Data Classification Standard (coming soon!)
Acceptable Use of Information Resources 

References

Academic Integrity policies at each UA university

Student Code of Conduct at each UA university



Lifecycle and Contacts

Standard Owner: OIT Information Security and Assurance

Standard Contact: Chief Information Security Officer

Phone: 907-474-5347

Email: ua-ciso@alaska.edu

Approved: January 2025

Effective: January 2025

Next Review: January 2027