Generative AI

Microsoft Copilot with Commercial Data Protection
Enhancing the way we work

Overview

Microsoft Copilot with commercial data protection is an AI assistant available to RIT faculty, staff, and students. You can ask questions and get detailed responses with footnotes that link back to the original sources. Because it is connected to Microsoft’s search engine, it can provide up-to-date information and working links.

  • With commercial data protection, your chat data is not saved, is not used to train models, and is not available to Microsoft.
  • In addition to text generation, Copilot includes an image creator integration (based on DALL-E 3).
  • Please note that RIT does not currently have access to Microsoft Office 365 Copilot.
  • Overview of Microsoft Copilot with commercial data protection

How to Sign in to Microsoft Copilot with Commercial Data Protection


  1. Open the Microsoft Edge browser (other browsers may not work or may deliver a degraded experience).
  2. Sign in using your RIT email address and password.

    This visual step-by-step Scribe guide shows how faculty, staff, and students can sign in to the protected version with their RIT username and password.

KB0043737 Accessing Microsoft Edge

KB0043730 How to Access Microsoft Copilot with Commercial Data Protection

Additional Information About Using Microsoft Copilot

Here are some additional links on how to write prompts, which tell Microsoft Copilot what you would like it to do.

Talent Development Training on Copilot with Enterprise Data Protection

Faculty and Staff - sign up for Talent Development Training on Copilot with Enterprise Data Protection

In this course, you will learn how to access the tool, how to turn off the Edge browser "noise", prompt tips and links, simple and complex use case examples for doing more with ease, what you can do today, data handling and ISO guidance, the P-Card process for buying AI tools, and how to develop a tool inventory.
The course will be held on Tuesdays, 10/29 and 11/19, from 2:00 PM to 3:00 PM.

You can watch the course on your own here, or register for the course with Talent Development to have it added to your Professional Development Transcript in Talent Roadmap.

AI Zoom Companion

Currently, a small team within ITS is exploring the capabilities and reliability of the new Zoom AI Companion features. They are also assessing the effort required to manage and operationalize the tool, including creating documentation and support materials.

If you decide to experiment with the AI Companion on your own at this time, please make sure to review the ISO's recommendations on using Generative AI tools. Additionally, please be aware that in our exploration the tool has not consistently provided accurate summaries or responses.

Thank you for your patience and cooperation. For additional information, please visit the Zoom Support Center.

Generative AI and Data Use

As we navigate the exciting potential of AI services, it’s essential to remember our commitment to data protection.

  1. Avoid sharing private or confidential information, and understand the risks.
  2. Refrain from providing sensitive data such as student records, financial details, personally identifiable information (PII), intellectual property, or any other confidential material to AI systems.

    For additional details on the different classifications of information at RIT, please refer to RIT’s Information Handling Matrix. (Anyone who handles private or confidential information is required to take the Information Handling Training.)

Generative AI Guidance

  1. Take time to understand the risks associated with GenAI technologies.
  2. Understand the limitations and biases embedded in GenAI tools.
  3. Familiarize yourself with data protection laws (HIPAA, FERPA, etc.) to ensure you remain compliant when using GenAI tools.
  4. Be cautious when paying for a GenAI tool to help ensure you are not being scammed.
  5. Stick to using reputable AI sources (OpenAI, Google, Microsoft, Amazon).
  6. Report any suspicious activity or potential breaches immediately to the RIT Service Center at 585-475-5000 or help.rit.edu.
  7. Generative AI is constantly evolving, and the answers a Generative AI tool gives may not be correct (incorrect answers are known as hallucinations).
  8. It will be up to you to determine if the results are acceptable for your needs.
  9. Results should never be considered as the authoritative source on a topic or issue.
  10. Please note that as Generative AI continues to evolve documentation, guidance and process will also continue to evolve.


    As we explore these new technologies and their capabilities, remain aware of data security and handling responsibilities. RIT supports the responsible use of AI services and encourages our students, faculty, and staff to do the same. Let’s move forward together, responsibly and securely.