Co-Pilot and Misconfigured Permissions – A Looming Threat?

Written by Hornetsecurity / 14.02.2024


The use of Large Language Models (LLMs) like ChatGPT has skyrocketed, and they now permeate many facets of modern life. In today’s podcast episode, Andy and Paul Schnackenburg explore Microsoft 365 Co-Pilot and some surprising risks it can surface. Microsoft 365 Co-Pilot is more than just a virtual assistant: it’s a powerhouse of productivity! It is a versatile generative AI tool embedded within the Microsoft 365 applications, which lets it execute tasks across those different platforms in seconds.

Amidst discussions about Co-Pilot’s unique features and functionalities, many wonder: How does M365 Co-Pilot differ from other LLMs, and what implications does this hold for data security and privacy? Tune in to learn more!

Timestamps:

(4:16) – How is Co-Pilot different from other Large Language Models? 

(11:40) – How are misconfigured permissions a special danger with Co-Pilot? 

(16:53) – How do M365 tenant permissions get so “misconfigured”?

(21:53) – How can your organization use Co-Pilot safely? 

(26:11) – How can you easily right-size your M365 permissions before enabling Co-Pilot? (A small audit sketch follows below.)
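As a taste of that last question: before enabling Co-Pilot, it helps to find content shared far more widely than intended, because Co-Pilot will surface anything the signed-in user can technically reach. The sketch below is not from the episode; it is a minimal illustration that uses the Microsoft Graph API to walk a document library and flag items carrying organization-wide or anonymous sharing links. The `GRAPH_TOKEN` and `DRIVE_ID` environment variables are assumptions: you would supply an access token granted the Files.Read.All permission and the ID of the drive you want to audit.

```python
import os
import requests

GRAPH = "https://graph.microsoft.com/v1.0"
TOKEN = os.environ["GRAPH_TOKEN"]   # assumption: token with Files.Read.All
DRIVE_ID = os.environ["DRIVE_ID"]   # assumption: ID of the drive to audit
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def children(item_id=None):
    """Yield every driveItem in a folder, following Graph's paging links."""
    url = (f"{GRAPH}/drives/{DRIVE_ID}/root/children" if item_id is None
           else f"{GRAPH}/drives/{DRIVE_ID}/items/{item_id}/children")
    while url:
        resp = requests.get(url, headers=HEADERS)
        resp.raise_for_status()
        data = resp.json()
        yield from data.get("value", [])
        url = data.get("@odata.nextLink")  # present when more pages remain


def audit(item_id=None, path=""):
    """Recursively flag items shared via org-wide or anonymous links."""
    for item in children(item_id):
        item_path = f"{path}/{item['name']}"
        perms = requests.get(
            f"{GRAPH}/drives/{DRIVE_ID}/items/{item['id']}/permissions",
            headers=HEADERS,
        )
        perms.raise_for_status()
        for perm in perms.json().get("value", []):
            scope = perm.get("link", {}).get("scope")
            if scope in ("organization", "anonymous"):
                print(f"{item_path}: {scope} link, roles={perm.get('roles')}")
        if "folder" in item:  # descend into subfolders
            audit(item["id"], item_path)


if __name__ == "__main__":
    audit()
```

Anything a pass like this flags is content Co-Pilot could retrieve for any user in the tenant (or, for anonymous links, beyond it), so it makes a reasonable first list of permissions to right-size, whether by hand or with a dedicated permission-management tool like the one linked under Episode Resources.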

Episode Resources:

Paul’s article on preparing for Co-Pilot

Webinar with demo showcasing the theft of M365 credentials

Start your free trial of M365 Total Protection

Effortlessly manage your Microsoft 365 permissions
