Who can use Copilot?
Microsoft Copilot, which provides AI-powered webchat, is currently available to those with select University of Oregon affiliations.
Copilot for Microsoft 365, which adds generative AI to Microsoft applications such as Teams, Outlook, and Word, is currently being evaluated for deployment at UO.
How can I be a responsible AI user with Copilot?
- Understand AI capabilities and limitations.
- Be aware that potential biases can impact results.
- Know how to write effective prompts and how to verify output.
- Always evaluate output to determine whether it is correct and appropriate.
- Report sensitive information overshares to the Information Security Office.
- Report inappropriate or offensive content to Digital Work Experience.
Microsoft Copilot data does not leave our organization and is not used to train any external large language models (LLMs).
- Microsoft Copilot does learn from our organization, but only uses UO data to train the UO LLM.
- UO data is and will remain UO data.
Please Note: AI can be usefully wrong. Because it synthesizes ideas and information in novel ways, it can often propose unconventional ideas.
- It is crucial to check that any AI-generated content is complete and correct.
- AI is not a replacement for your own creativity or judgment; it is a tool that can enhance and augment those innately human skills.
- Do not blindly accept or follow AI suggestions; instead, evaluate them carefully and objectively.
- Ask yourself:
- Is this content relevant and appropriate?
- How can I improve or refine it?
- How can I add my own voice or style to it?
How can I report sensitive information overshares?
While Copilot for Microsoft 365 can only access data that your logged-in account has permission to view, oversharing occasionally occurs. This may mean a file with sensitive information was shared in a way that allows broader access than intended or appropriate.
If you encounter sensitive information that you believe you should not have access to, please submit an Information Security Consulting Request.
What are Microsoft’s Six AI Principles?
Microsoft is committed to making sure AI systems are developed responsibly and in ways that warrant people’s trust. Microsoft’s Six AI Principles are part of their approach to create principled and actionable norms to ensure organizations develop and deploy AI responsibly.
Accountability:
- Microsoft AI systems include capabilities that support informed human oversight and control.
Transparency:
- Microsoft provides information about the capabilities and limitations of its AI systems to support stakeholders in making informed choices about those systems.
- Microsoft AI systems are designed to inform people that they are interacting with an AI system or are using a system that generates or manipulates image, audio, or video content that could falsely appear to be authentic.
Fairness:
- Microsoft AI systems are designed to provide a similar quality of service for identified demographic groups, including marginalized groups.
- Microsoft AI systems that allocate resources or opportunities in essential domains are designed to minimize disparities in outcomes for identified demographic groups, including marginalized groups.
- Microsoft AI systems that describe, depict, or otherwise represent people, cultures, or society are designed to minimize the potential for stereotyping, demeaning, or erasing identified demographic groups, including marginalized groups.
Reliability and Safety:
- Microsoft evaluates the operational factors and ranges within which AI systems are expected to perform reliably and safely, remediates issues, and provides related information to customers.
- Microsoft AI systems are designed to minimize the time to remediation of predictable or known failures.
- Microsoft AI systems are subject to ongoing monitoring, feedback, and evaluation so that Microsoft can identify and review new uses, identify and troubleshoot issues, manage and maintain the systems, and improve them over time.
Privacy and Security:
- Microsoft AI systems are designed to protect privacy in accordance with the Microsoft Privacy Standard.
- Microsoft AI systems are designed to be secure in accordance with the Microsoft Security Policy.
Inclusiveness:
- Microsoft AI systems are designed to be inclusive in accordance with the Microsoft Accessibility Standards.
Additional information and protections
Microsoft Edge and Google Chrome are currently the recommended and supported browsers.
- For Firefox and Safari users: Microsoft Copilot is accessible through a login prompt. Enter your full UO email address and password to proceed.
Users with Commercial Data Protection in Microsoft Copilot will see green text above the chat box that reads: “Your personal data and company data are protected.”
Commercial data protection means both user and organizational data are protected: prompts and responses aren't saved, Microsoft has no eyes-on access, and chat data isn't used to train the underlying large language models. Unlike Copilot for Microsoft 365, Copilot has no access to organizational data in the Microsoft 365 Graph.
- Copilot for Enterprise is a public preview and is still in active development, so occasional surprises and mistakes may occur.
- Faculty and staff will have their personal and company data protected in accordance with the UO site-license agreement with Microsoft.
- Those with student affiliations may not currently have commercial data protection due to age restrictions Microsoft sets for this feature.
- Queries are limited to 30 responses per conversation style per browser session.