This guidance is provided to U-M community members about the use of third-party Artificial Intelligence (AI), a rapidly evolving technology. The guidance does not apply to ITS AI services (UM GPT, Maizey, and GPT Toolkit), which meet the privacy and security standards of the university to be used with institutional data of Moderate sensitivity, including FERPA data. For details, see the ITS AI Services page in the Sensitive Data Guide.
While there are many opportunities to experiment and innovate with third-party AI services at the university, it is important to protect U-M's valuable institutional and research data appropriately. Follow these guidelines when considering the use of third-party AI services with institutional data:
- Only use approved AI services with U-M sensitive data.
- Do not place sensitive data in an AI service unless there is a U-M contract or data agreement in place that permits it.
- Stand-alone AI tools, as well as AI functionality embedded in a third-party service, must be covered by an appropriate data protection agreement with U-M before being used with sensitive data.
- AI services that do not have a contract or data agreement with U-M should only be used with institutional data classified as LOW. See U-M Data Classification Levels for descriptions and examples of each data classification.
- AI-generated code should not be used for institutional IT systems and services unless it has been reviewed by a human and meets the requirements of Secure Coding and Application Security (DS-18).
Please contact ITS Information Assurance through the ITS Service Center if you are in doubt about whether a particular solution may be used with U-M institutional data.