This guidance is provided to U-M community members about the use of third-party Artificial Intelligence (AI), a rapidly evolving technology. The guidance does not apply to ITS AI services (UM GPT, Maizey, and GPT Toolkit), which meet the university's privacy and security standards for use with institutional data of Moderate sensitivity, including FERPA data. For details, see the ITS AI Services page in the Sensitive Data Guide.
ChatGPT and similar AI programs present myriad opportunities and challenges for society, including at U-M. While there are many opportunities to experiment and innovate with these tools at the university, at present U-M does not have a contract or agreement with any AI provider. This means that standardized U-M security and privacy provisions are not in place for this technology.
Accordingly, as with any other IT service or product with no university contract or agreement, AI tools should only be used with institutional data classified as Low. See U-M Data Classification Levels for descriptions and examples of each data classification.
Do not use ChatGPT or other AI tools with sensitive information such as student data regulated by FERPA, human subjects research data, health information, HR records, etc.
In addition, AI-generated code should not be used for institutional IT systems and services unless it is reviewed by a human and meets the requirements of Secure Coding and Application Security (DS-18).
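For illustration only (this example is not part of the U-M guidance and the table and function names are hypothetical), the following minimal Python sketch shows the kind of issue human review should catch before AI-generated code reaches an institutional system: an AI tool might suggest building a SQL query with string formatting, which a reviewer would replace with a parameterized query in line with secure coding practices.

import sqlite3

# An AI assistant might suggest string-built SQL like this, which is
# vulnerable to SQL injection and should not pass human review:
#   conn.execute(f"SELECT id, name FROM users WHERE name = '{username}'")

def find_user(conn: sqlite3.Connection, username: str):
    # Reviewed version: a parameterized query keeps user input out of
    # the SQL text, a practice secure coding standards typically require.
    return conn.execute(
        "SELECT id, name FROM users WHERE name = ?",
        (username,),
    ).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
    conn.execute("INSERT INTO users (name) VALUES ('alice')")
    print(find_user(conn, "alice"))               # matches the stored row
    print(find_user(conn, "alice' OR '1'='1"))    # injection attempt returns no rows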
Finally, know that OpenAI's Usage Policies disallow the use of its products for certain other activities.
This guidance will change as U-M engages in broader institutional review and analysis. In the meantime, please do your part to use AI responsibly, including reviewing the data you enter into these tools to ensure it meets the guidance above.
Resources: