Microsoft’s artificial intelligence (AI) division accidentally exposed 38 TB of the company’s sensitive employee data after sharing a repository of AI models on GitHub.
The repository, shared publicly on GitHub to make it easier to work with AI tools and models for image recognition, contained a URL that provided access to an internal storage account.
The URL included a shared access signature (SAS) token that, rather than restricting access to specific storage resources, allowed viewing of the information stored in Azure, Wiz researchers found.
As a result, 38 TB of confidential information belonging to Microsoft employees was exposed, including backup copies of workstation profiles and internal Microsoft Teams messages, as TechCrunch noted.
Wiz researchers also point out that the URL had been exposing the information since 2020 and that the token was configured to grant complete control, allowing anyone to make changes or inject malicious content.
Microsoft confirmed this security incident in a post on the Security Response Center (MSRC) blog and denied that customer data was affected.
Wiz notified Microsoft of the data disclosure on June 22. MSRC teams then revoked the SAS token and blocked external access to the storage. Microsoft says it mitigated the issue on June 24 and began an investigation, which concluded that the incident poses no risk to customers.
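
A check for the kind of token described above, as an added example that is not from the article, Wiz or Microsoft: it finds Azure Storage SAS URLs in text such as repository files and flags account-wide or container-wide scope, modify permissions and distant expiry dates. The sample URLs, the account name and the 7-day threshold are constructed assumptions.

```python
import re
from datetime import datetime, timezone
from urllib.parse import urlsplit, parse_qs

# URLs on Azure Storage endpoints that carry a SAS signature (sig=).
SAS_URL = re.compile(
    r"https://[\w-]+\.(?:blob|file|queue|table|dfs)\.core\.windows\.net/"
    r"[^\s\"'<>]*[?&]sig=[^\s\"'<>]+")
MODIFY_PERMS = set("wdacu")  # write, delete, add, create, update
MAX_DAYS = 7                 # assumed review threshold, not an Azure default

def audit_sas_url(url, now):
    """Return findings for one SAS URL; an empty list means nothing flagged."""
    q = {k: v[0] for k, v in parse_qs(urlsplit(url).query).items()}
    findings = []
    if "ss" in q or "srt" in q:  # fields that only an account SAS carries
        findings.append("account SAS: scope is the whole storage account")
    elif q.get("sr") == "c":
        findings.append("container SAS: scope is every blob in the container")
    modify = MODIFY_PERMS & set(q.get("sp", ""))
    if modify:
        findings.append("allows modification: sp includes " + "".join(sorted(modify)))
    try:
        expiry = datetime.fromisoformat(q.get("se", "").replace("Z", "+00:00"))
        if expiry.tzinfo is None:  # date-only expiry: treat as UTC
            expiry = expiry.replace(tzinfo=timezone.utc)
        days = (expiry - now).days
        if days > MAX_DAYS:
            findings.append(f"long-lived: expires in {days} days")
    except ValueError:
        findings.append("expiry (se) missing or unparseable")
    return findings

def scan_text(text, now):
    """Map each SAS URL found in text to its list of findings."""
    return {m.group(0): audit_sas_url(m.group(0), now) for m in SAS_URL.finditer(text)}

# Constructed sample: account name, dates and signatures are made up.
SAMPLE = '''
MODEL_URL = "https://exampleacct.blob.core.windows.net/models?sv=2020-08-04&ss=b&srt=sco&sp=rwdlacx&se=2050-01-01T00:00:00Z&spr=https&sig=CONSTRUCTED"
WEIGHTS = "https://exampleacct.blob.core.windows.net/models/net.ckpt?sv=2020-08-04&sr=b&sp=r&se=2023-09-20T00:00:00Z&spr=https&sig=CONSTRUCTED"
'''

if __name__ == "__main__":
    now = datetime(2023, 9, 18, tzinfo=timezone.utc)  # fixed so output is repeatable
    for url, findings in scan_text(SAMPLE, now).items():
        print(url.split("?")[0], findings or ["ok"])
```

Run as a pre-commit or CI step over files about to be published, it surfaces a broad token before the URL leaves the organization.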