Cyber Signals Issue 8 | Education under siege: How cybercriminals target our schools http://approjects.co.za/?big=en-us/security/blog/2024/10/10/cyber-signals-issue-8-education-under-siege-how-cybercriminals-target-our-schools/ Thu, 10 Oct 2024 11:00:00 +0000


Education is essentially an “industry of industries,” with K-12 and higher education enterprises handling data that can include health records, financial data, and other regulated information. At the same time, their facilities can host payment processing systems, networks that double as internet service providers (ISPs), and other diverse infrastructure. The cyberthreats that Microsoft observes across different industries tend to be compounded in education, and threat actors have realized that this sector is inherently vulnerable. With an average of 2,507 cyberattack attempts per week, universities are prime targets for malware, phishing, and IoT vulnerabilities.¹

Security staffing and IT asset ownership also affect education organizations’ cyber risks. School and university systems, like many enterprises, often face a shortage of IT resources and operate a mix of both modern and legacy IT systems. Microsoft observes that students and faculty in the United States, for example, are more likely to use personal devices in education than their counterparts in Europe. Regardless of ownership, however, busy users in these and other regions do not always have a security mindset.


This edition of Cyber Signals delves into the cybersecurity challenges facing classrooms and campuses, highlighting the critical need for robust defenses and proactive measures. From personal devices to virtual classes and research stored in the cloud, the digital footprint of school districts, colleges, and universities has multiplied exponentially.  

We are all defenders. 

Security snapshot

Threat briefing

A uniquely valuable and vulnerable environment 

The education sector’s user base is very different from that of a typical large commercial enterprise. In the K-12 environment, users include students as young as six years old. Just like any public or private sector organization, school districts and universities employ a wide swath of staff, including administration, athletics, health services, janitorial, and food service professionals. The constant flow of activities, announcements, and information resources, together with open email systems and a large student population, creates a highly fluid environment for cyberthreats.

Virtual and remote learning have also extended education applications into households and offices. Personal and multiuser devices are ubiquitous and often unmanaged—and students are not always cognizant of cybersecurity or what they allow their devices to access.

Education is also on the front lines as adversaries test their tools and techniques. According to data from Microsoft Threat Intelligence, the education sector is the third-most targeted industry, with the United States seeing the greatest cyberthreat activity.

Cyberthreats to education are not only a concern in the United States. According to the United Kingdom’s Department for Science, Innovation and Technology 2024 Cyber Security Breaches Survey, 43% of higher education institutions in the UK reported experiencing a breach or cyberattack at least weekly.²

QR codes provide an easily disguised surface for phishing cyberattacks

Today, quick response (QR) codes are quite popular—leading to increased risks of phishing cyberattacks designed to gain access to systems and data. Images in emails, flyers offering information about campus and school events, parking passes, financial aid forms, and other official communications all frequently contain QR codes. Physical and virtual education spaces might be the most “flyer friendly” and QR code-intensive environments anywhere, given how big a role handouts, physical and digital bulletin boards, and other casual materials play in helping students navigate a mix of curricular, institutional, and social communications. This creates an attractive backdrop for malicious actors to target users who are trying to save time with a quick image scan.

Recently, the United States Federal Trade Commission issued a consumer alert on the rising threat of malicious QR codes being used to steal login credentials or deliver malware.³

Microsoft Defender for Office 365 telemetry shows that more than 15,000 messages with malicious QR codes are targeted toward the educational sector daily—including phishing, spam, and malware.

Legitimate software tools can be used to quickly generate QR codes with embedded links to be sent in email or posted physically as part of a cyberattack. And those images are hard for traditional email security solutions to scan, making it even more important for faculty and students to use devices and browsers with modern web defenses. 
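To illustrate how low the bar is, here is a minimal sketch using the open-source Python qrcode package (an assumption about tooling; any QR generator behaves similarly). The URL is a harmless placeholder, but to a mail gateway the output is just an opaque image.

```python
# Minimal sketch: embedding an arbitrary link in a QR image takes two lines.
# Assumes the third-party qrcode package (with Pillow) is installed; the URL
# is a placeholder, not a real campaign artifact.
import qrcode

img = qrcode.make("https://example.org/parking-pass")  # any URL fits
img.save("flyer_code.png")  # scanners see pixels, not the link inside
```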

Targeted users in the education sector may use personal devices without endpoint security. Because QR code phishing is designed to reach mobile devices, it is compelling evidence that mobile devices, and the personal and bank accounts accessed from them, serve as an attack vector into enterprises, and that mobile device protection and visibility are needed. Microsoft has significantly disrupted QR code phishing attacks. This shift in tactics is evident in the substantial decrease in daily phishing emails intercepted by our system, dropping from 3 million in December 2023 to just 179,000 by March 2024.


Universities present their own unique challenges. Much of university culture is based on collaboration and sharing to drive research and innovation. Professors, researchers, and other faculty operate under the notion that technology, science—simply knowledge itself—should be shared widely. If someone appearing to be a student, peer, or similar party reaches out, they’re often willing to discuss potentially sensitive topics without scrutinizing the source.

University operations also span multiple industries. University presidents are effectively CEOs of healthcare organizations, housing providers, and large financial organizations—the industry of industries factor, again. Therefore, top leaders can be prime targets for anyone attacking those sectors.

The combination of value and vulnerability found in education systems has attracted the attention of a spectrum of cyberattackers—from malware criminals employing new techniques to nation-state threat actors engaging in old-school spy craft.  

Microsoft continually monitors threat actors and threat vectors worldwide. Here are some key issues we’re seeing for education systems. 

Email systems in schools offer wide spaces for compromise 

The naturally open environment at most universities forces them to be more relaxed in their email hygiene. They handle a high volume of email, much of it amounting to noise in the system, but are often operationally limited in where and how they can place controls because of how open they need to be for alumni, donors, external user collaboration, and many other use cases.

Education institutions tend to share a lot of announcements in email. They share informational diagrams about local events and school resources. They also commonly allow external mass-mailing systems to send into their environments. This combination of openness and lack of controls creates fertile ground for cyberattacks.

AI is increasing the premium on visibility and control  

Cyberattackers, recognizing higher education’s focus on building and sharing, can survey all visible access points, seeking entry into AI-enabled systems or privileged information on how these systems operate. If the on-premises and cloud-based foundations of AI systems and data are not secured with proper identity and access controls, AI systems become vulnerable. Just as education institutions adapted to cloud services, mobile devices, and hybrid learning—which introduced new waves of identities and privileges to govern, devices to manage, and networks to segment—they must also adapt to the cyber risks of AI by scaling these timeless visibility and control imperatives.

Nation-state actors are after valuable IP and high-level connections 

Universities handling federally funded research, or working closely with defense, technology, and other industry partners in the private sector, have long recognized the risk of espionage. Decades ago, universities focused on telltale physical signs of spying. They knew to look for people showing up on campus taking pictures or trying to get access to laboratories. Those are still risks, but today the dynamics of digital identity and social engineering have greatly expanded the spy craft toolkit. 

Universities are often epicenters of highly sensitive intellectual property. They may be conducting breakthrough research. They may be working on high-value projects in aerospace, engineering, nuclear science, or other sensitive topics in partnership with multiple government agencies.  

For cyberattackers, it can be easier to first compromise somebody in the education sector who has ties to the defense sector and then use that access to more convincingly phish a higher value target.  

Universities also have experts in foreign policy, science, technology, and other valuable disciplines who may willingly offer intelligence if deceived by social engineering cyberattacks that employ false or stolen identities of peers or others who appear to be among their trusted contacts. Apart from holding valuable intelligence themselves, compromised accounts of university employees can become springboards into further campaigns against wider government and industry targets.

Nation-state actors targeting education 

Iran

Peach Sandstorm

Peach Sandstorm has used password spray attacks against the education sector to gain access to infrastructure in that industry, and Microsoft has also observed the group using social engineering against targets in higher education.
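Defenders can hunt for exactly this pattern. Below is a minimal, illustrative advanced hunting sketch, assuming the IdentityLogonEvents table in Microsoft Defender advanced hunting; the threshold and time window are arbitrary starting points, not tuned guidance.

```kusto
// Illustrative password spray hunt: many distinct accounts failing from a
// single source IP in a short window. Threshold (15) and bin (1h) are assumptions.
IdentityLogonEvents
| where Timestamp > ago(1d) and ActionType == "LogonFailed"
| summarize FailedAccounts = dcount(AccountUpn), Attempts = count()
    by IPAddress, bin(Timestamp, 1h)
| where FailedAccounts >= 15
| order by FailedAccounts desc
```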

Mint Sandstorm 

Microsoft has observed a subset of this Iranian attack group targeting high-profile experts working on Middle Eastern affairs at universities and research organizations. These sophisticated phishing attacks used social engineering to compel targets to download malicious files including a new, custom backdoor called MediaPl. 

Mabna Institute  

The Iranian Mabna Institute conducted intrusions into the computing systems of at least 144 United States universities and 176 universities in 21 other countries, activity for which its members were indicted by the United States in 2018.

The stolen login credentials were used for the benefit of Iran’s Islamic Revolutionary Guard Corps and were also sold within Iran through the web. Stolen credentials belonging to university professors were used to directly access university library systems. 

North Korea

Emerald Sleet

This North Korean group primarily targets experts in East Asian policy or North and South Korean relations. In some cases, the same academics have been targeted by Emerald Sleet for nearly a decade.  

Emerald Sleet uses AI to write malicious scripts and content for social engineering, but these attacks aren’t always about delivering malware. There’s also an evolving trend where they simply ask experts for policy insight that could be used to manipulate negotiations, trade agreements, or sanctions. 

Moonstone Sleet 

Moonstone Sleet is another North Korean actor that has been taking novel approaches like creating fake companies to forge business relationships with educational institutions or a particular faculty member or student.  

One of the most prominent attacks from Moonstone Sleet involved a fake tank-themed game used to target individuals at educational institutions, with the goal of deploying malware and exfiltrating data.

Groups in development

Storm-1877  

This actor largely engages in cryptocurrency theft using a custom malware family that they deploy through various means. The ultimate goal of this malware is to steal crypto wallet addresses and login credentials for crypto platforms.  

Students are often the target for these attacks, which largely start on social media. Storm-1877 targets students because they may not be as aware of digital threats as professionals in industry. 

Defending against attacks

A new security curriculum 

Due to education budget and talent constraints and the inherent openness of its environment, securing education is more than a technology problem. Security posture management and prioritizing security measures can be a costly and challenging endeavor for these institutions—but there is a lot that school systems can do to protect themselves.

Maintaining and scaling core cyberhygiene will be key to securing school systems. Building awareness of security risks and good practices at all levels—students, faculty, administrators, IT staff, campus staff, and more—can help create a safer environment.  

For IT and security professionals in the education sector, doing the basics and hardening the overall security posture is a good first step. From there, centralizing the technology stack can help facilitate better monitoring of logging and activity to gain a clearer picture into the overall security posture and any vulnerabilities. 

Oregon State University 

Oregon State University (OSU), an R1 research-focused university, places a high priority on safeguarding its research to maintain its reputation. In 2021, it experienced an extensive cybersecurity incident unlike anything it had seen before. The cyberattack revealed gaps in OSU’s security operations.

“The types of threats that we’re seeing, the types of events that are occurring in higher education, are much more aggressive by cyber adversaries.”

—David McMorries, Chief Information Security Officer at Oregon State University

In response to this incident, OSU created its Security Operations Center (SOC), which has become the centerpiece of the university’s security effort. AI has also helped automate capabilities and helped its analysts, who are college students, quickly learn to write code, such as advanced hunting queries for threat hunting.

Arizona Department of Education 

The Arizona Department of Education (ADE) takes its focus on Zero Trust and closed systems further than state requirements demand. It blocks all traffic from outside the United States to its Microsoft 365 environment, Azure, and its local datacenter.

“I don’t allow anything exposed to the internet on my lower dev environments, and even with the production environments, we take extra care to make sure that we use a network security group to protect the app services.”

—Chris Henry, Infrastructure Manager at the Arizona Department of Education 


Follow these recommendations:  

  • The best defense against QR code attacks is to be aware and pay attention. Pause and inspect a code’s URL before opening it, and don’t open QR codes from unexpected sources, especially if the message uses urgent language or contains errors (a short decoding sketch follows this list).
  • Consider implementing protective domain name service (PDNS), a free tool that helps prevent ransomware and other cyberattacks by blocking computer systems from connecting to harmful websites. Prevent password spray attacks with a stringent password policy and by deploying multifactor authentication.
  • Educate students and staff about security hygiene, and encourage them to use multifactor authentication or passwordless protections. Studies have shown that an account is more than 99.9% less likely to be compromised when multifactor authentication is used.
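That inspection step can even be automated before a code is opened. Here is a minimal sketch, assuming the third-party pyzbar and Pillow Python packages (and the underlying zbar library) are installed; the file name is a hypothetical scanned flyer image.

```python
# Minimal sketch: reveal where a QR code points without ever opening the link.
# Assumes pyzbar + Pillow are installed; "flyer_code.png" is hypothetical.
from PIL import Image
from pyzbar.pyzbar import decode

for symbol in decode(Image.open("flyer_code.png")):
    print(symbol.type, symbol.data.decode())  # e.g., QRCODE https://...
```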
Expert profile

Corey Lee has always had an interest in solving puzzles and crimes. He started his college career at Penn State University in criminal justice, but soon realized his passion for digital forensics after taking a course about investigating a desktop computer break-in.  

After completing his degree in security and risk analysis, Corey came to Microsoft focused on gaining cross-industry experience. He’s worked on securing everything from federal, state, and local agencies to commercial enterprises, but today he focuses on the education sector.  


After spending time working across industries, Corey sees education through a different lens: a uniquely broad industry of industries. The dynamics at play inside the education sector span academic institutions, financial services, critical infrastructure like hospitals and transportation, and partnerships with government agencies. According to Corey, working in such a broad field allows him to apply skill sets from multiple industries to specific problems across the landscape.

The fact that education could also be called underserved from a cybersecurity standpoint is another compelling challenge, and part of Corey’s personal mission. The education industry needs cybersecurity experts to elevate the priority of protecting school systems. Corey works across public and industry dialogue, skilling and readiness programs, incident response, and overall defense to protect not just the infrastructure of education, but students, parents, teachers, and staff.

Today, Corey is focused on reimagining student security operations centers, including how to inject AI into the equation and bring modern technology and training to the table. By growing the cybersecurity workforce in education and giving it new tools, he’s working to elevate security in the sector in a way that’s commensurate with how critical the industry is to the future.

Next steps with Microsoft Security

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.


¹Global Cyberattacks Continue to Rise with Africa and APAC Suffering Most, Check Point Blog. April 27, 2023.

²Cyber security breaches survey 2024: education institutions annex, The United Kingdom Department for Science, Innovation & Technology. April 9, 2024.

³Scammers hide harmful links in QR codes to steal your information, Federal Trade Commission (Alvaro Puig), December 6, 2023.

Methodology: Snapshot and cover stat data represent telemetry from Microsoft Defender for Office 365 showing how a QR code phishing attack was disrupted by image detection technology and how security operations teams can respond to this threat. Platforms like Microsoft Entra provided anonymized data on threat activity, such as malicious email accounts, phishing emails, and attacker movement within networks. Additional insights come from the 78 trillion security signals processed by Microsoft each day, including the cloud, endpoints, the intelligent edge, and telemetry from Microsoft platforms and services such as Microsoft Defender. Microsoft categorizes threat actors into five key groups: influence operations; groups in development; and nation-state, financially motivated, and private sector offensive actors. The threat actor naming taxonomy aligns with the theme of weather.

© 2024 Microsoft Corporation. All rights reserved. Cyber Signals is for informational purposes only. MICROSOFT MAKES NO WARRANTIES, EXPRESS, IMPLIED OR STATUTORY, AS TO THE INFORMATION IN THIS DOCUMENT. This document is provided “as is.” Information and views expressed in this document, including URL and other Internet website references, may change without notice. You bear the risk of using it. This document does not provide you with any legal rights to any intellectual property in any Microsoft product. 

Explore Microsoft’s AI innovations at RSA Conference 2024 http://approjects.co.za/?big=en-us/security/blog/2024/04/04/explore-microsofts-ai-innovations-at-rsa-conference-2024/ Thu, 04 Apr 2024 16:00:00 +0000 Will you be at the RSA Conference? Join us for Microsoft Pre-Day, sessions, and other events for insights on leading in AI. Keep reading for what to expect at the event.

The security of your organization directly correlates with your ability to transform and achieve your business objectives. Microsoft can help you make that happen, with our powerful combination of large-scale data and threat intelligence, end-to-end protection, and responsible AI. ​

Recently at Microsoft Secure, we shared our latest innovations for securing and governing AI and announced the generative AI solution for cyberdefenders: Microsoft Copilot for Security. We’re excited to talk with you about how to bring these innovations to life in your organization at the RSA Conference (RSAC), May 6 to 9, 2024, in San Francisco.

At the conference, we’ll demonstrate how to secure and govern AI and benefit from end-to-end protection with solutions across the Microsoft Security portfolio, including Microsoft Copilot for Security. We’ll show you how we help security teams build their skills faster to protect their organizations.

Join us a day early, on Sunday, May 5, 2024, at Microsoft Pre-Day to kick off RSA Conference 2024, and hear directly from our Microsoft Security business leaders, including Vasu Jakkal, Corporate Vice President, Microsoft Security Business, and Charlie Bell, Executive Vice President, Microsoft Security. Plus, view live demos at a variety of Microsoft sessions happening throughout the conference in breakout rooms and at our booth #6044N.

Microsoft Pre-Day: Hear from Microsoft Security product leaders

Start the conference on a high note by joining us for the Microsoft Pre-Day at the Microsoft Security Hub beginning at 4:00 PM PT on Sunday, May 5, 2024. Chief information security officers (CISOs) and cybersecurity professionals are invited to dive deeper into the latest AI announcements, learn about new product capabilities, and gain peace of mind about how to secure AI as they introduce the technology into their organizations.

Vasu Jakkal and other Microsoft leaders will share our perspectives on topics like AI-powered security, innovations in end-to-end protection, and solutions to secure AI. We’ll also be joined by Microsoft customers who will share how they have been successful in their security evolution.

Pre-Day will continue with a Q&A session with Vasu Jakkal, Charlie Bell, and other leaders. They’ll reflect on the latest developments in cybersecurity, AI, and how the global community of cyber professionals can work together for a more secure future.


The conclusion of Pre-Day will be an evening reception at 6:00 PM PT, where you will have an opportunity to network with other professionals over drinks and appetizers.

Microsoft keynote and sessions: Get valuable insights and inspiration

Once the RSA Conference begins, you’ll have several opportunities to attend demos and connect one-on-one with Microsoft product experts. Mark your calendar for Tuesday, May 7, 2024, to catch our keynote in the official conference lineup from 3:40 PM PT to 4:00 PM PT at Moscone West. Vasu Jakkal will share insights on how AI is evolving, its impact on the threat landscape, and what every organization should do to keep it safe.

While there is a lot of hype around AI, most security professionals are taking a risk-averse approach, which means employees will find workarounds to use generative AI. Join Brian Fielder, Vice President of Security Engineering at Microsoft, who will talk about Microsoft’s approach to securing and governing AI. You will walk away with practical guidance on governing AI, ensuring data privacy, and maintaining compliance.

Check out one or all of our Microsoft Security sessions included in the RSA Conference agenda. Here are just a few you won’t want to miss:

  • “Hiding in Plain Sight: Hunting Volt Typhoon Cyber Actors.” Monday, May 6, 2024, 2:20 PM PT to 3:10 PM PT. Explore how the private sector and United States government work together to identify Volt Typhoon cyberthreat activity. Get lessons learned from Volt Typhoon’s tactics, techniques, and procedures, and how network defenders can best defend themselves. Kelly Bissell, Deputy CISO and CVP, Security Services, Microsoft; Cynthia Kaiser, Deputy Assistant Director, FBI; Morgan Adamski, Chief, NSA Cybersecurity Collaboration Center, DOD; and Andrew Scott, Associate Director for China Operations, CISA, will share insights.
  • “AI Safety: Where’s the Puck Headed?” Wednesday, May 8, 2024, 9:40 AM PT to 10:30 AM PT. Hear from a panel of experts—Ram Shankar Siva Kumar, Data Cowboy, Microsoft; Vijay Bolina, CISO, Head of Cybersecurity Research, Google DeepMind; Rumman Chowdhury, Responsible AI Fellow, Berkman Klein Center, Harvard University; Dan Hendrycks, Founder, Center for AI Safety; and Daniel Rohrer, Vice President of Software Product Security—Architecture and Research, NVIDIA—on what AI safety means, why it rose to prominence, and what this means for the future of AI and cybersecurity.
  • “From Attribution to Accountability: Upholding International Rules Online.” Wednesday, May 8, 2024, 1:15 PM PT to 2:05 PM PT. Get insights from a panel of litigation experts on how governments and the private sector can improve their public attribution efforts and ensure they are working cooperatively to advance respect for international rules online. The panel will include Amy Hogan-Burney, Associate Counsel and General Manager, Cybersecurity Policy and Protection, Microsoft; Megan Stifel, Chief Strategy Officer, Institute for Security and Technology; Liesyl Franz, Deputy Assistant Secretary for International Cyberspace Security, United States Department of State; Jonathan Horowitz, Legal Advisor, International Committee of the Red Cross; and William Middleton, Cyber Director, Foreign, Commonwealth and Development Office.

You can also stop by our Security Hub, located at The Palace Hotel, at any time to view an additional lineup of sessions well worth exploring. Here are a few highlights:

  • “A Year of Microsoft Copilot for Security.” Monday, May 6, 2024, 10:30 AM PT to 11:30 AM PT. Join us as we reflect on 12 months of learning from early customers, listen to their real-world experiences, dive into research on how Copilot for Security can elevate productivity with optimized security, and catch a sneak peek into the future of generative AI in security.
  • “Threat intelligence trends and insights breakfast panel.” Tuesday, May 7, 2024, 8:00 AM PT to 9:00 AM PT. Attend an exclusive briefing featuring experts from the Microsoft Threat Intelligence team, who analyze 78 trillion signals daily to uncover emerging threats. They will share insights and guidance on nation-state actors, cybercrime takedowns, fraud and social engineering, and cyber influence operations.
  • AI Safety lunch and fireside chat: Tuesday, May 7, 2024, 12:00 PM PT to 1:30 PM PT. Join Sarah Bird, Chief Product Officer of Responsible AI, and Bret Arsenault, Chief Cybersecurity Advisor, as they address CISOs’ top AI concerns, the importance of responsible AI, and Microsoft’s commitment to AI safety. Walk away with practical guidance on implementing AI safely in your organization.
  • “Zero Trust for AI Security Leaders.” Tuesday, May 7, 2024, 2:30 PM PT to 3:15 PM PT. Gain a deeper understanding of the top five risks inherent to generative AI and how Zero Trust for AI can help your organization deploy and use AI securely. You will walk away from this session with a Zero Trust for AI framework and a copy of the book signed by the author and presenter Mark Simos.

Visit Microsoft Security Hub at The Palace Hotel  

Join us for these sessions and more at the Microsoft Security Hub. Don’t miss the opportunity to explore all our sessions and ancillary events, and engage in a gamified experience dedicated to AI for security for a chance to win exciting prizes. Additionally, you can schedule meetings with Microsoft experts and delve into the Cyber Threat Intelligence Program (CTIP) interactive experience from the Microsoft Digital Crimes Unit (DCU), where you’ll be able to explore the world of the malware sinkhole. The CTIP collects actionable cyberthreat intelligence from its malware disruption operations and uses this data to inform Microsoft products and services. Leveraging unique insights from Microsoft Threat Intelligence, the DCU disrupts cybercriminals’ technical infrastructure through civil legal actions, technical measures, criminal referrals to law enforcement, and public and private partnerships.

Register now to attend a variety of sessions at the Microsoft Security Hub, hosted at the historic Palace Hotel.

Stop by the Microsoft Security booth at Moscone North

The Microsoft booth will be located this year in Moscone North, close to the entrance, and will feature demos of the Microsoft Security portfolio, theater presentations, a gamified experience focused on Security for AI, and an interactive DCU experience. Have some refreshments amidst your busy conference day and get your copies of the books about Zero Trust and Threat Intelligence signed by the authors.

Drop by the theater at the Microsoft booth to hear from our experts on the latest news and demos on AI, threat protection, secure access, data governance, cloud security, privacy, Zero Trust, and more.

Participate in conversations on the future of cybersecurity

While at RSAC, consider participating in other events that will connect you with cybersecurity professionals and spark interesting conversation about the future of cybersecurity and AI.

  • CSA AI Summit​: Monday, May 6, 2024, 12:10 PM PT to 12:30 PM PT. Get a front-row seat to Microsoft Security for AI innovations as part of the summit. Led by Microsoft Senior Product Marketing Manager Tina Ying, our session will focus on Security for AI. The CSA AI Summit, from 8:00 AM to 3:00 PM PT on Level 3 of Moscone Center South, will explore the intersection of AI and cloud and offer best practices on how to make the most of the AI revolution. More than 1,100 cybersecurity leaders and professionals are expected to attend the summit.
  • Women in Cybersecurity (WiCyS) Meetup: ​Tuesday, May 7, 2024, 6:30 PM PT to 7:30 PM PT. Learn how WiCyS is introducing more women to cybersecurity—and how you can support these endeavors. The meetup will spotlight the achievements of WiCyS, established in 2012 to increase the number of women in cybersecurity roles by giving them mentorships, networking opportunities, and access to training and resources.

Microsoft Partners: Networking opportunity and Security Excellence Awards celebration

The Microsoft Intelligent Security Association (MISA), made up of independent software vendors (ISVs) and managed security service providers (MSSPs) that have integrated their solutions with Microsoft’s security products, will be back at RSAC 2024. MISA will again have a demo station at Microsoft Booth #6044N in Moscone North Expo, among other events, including the fifth annual Microsoft Security Excellence Awards (presented by MISA).

MISA’s RSAC 2024 presence will include:

  • MISA Demo Station: Stop by Microsoft Booth #6044N Monday, May 6, 2024, to Thursday, May 9, 2024, for demonstrations of Microsoft products.
  • Theater sessions: Join one or more of our five theater sessions, led by MISA members, for valuable insights on how members work together with Microsoft to protect customers from cyberthreats. The sessions will feature expertise from partners Bulletproof, ContraForce, Darktrace, Avanade, Kovrr, and glueckkanja AG.
  • Hub sessions: Join MISA members for a one-hour session on top-of-mind security topics in the Microsoft Security Hub.
  • Partner awards: MISA members are invited to attend the Microsoft Security Excellence Awards on Monday, May 6, 2024, where winners will be announced in nine security award categories.

Congratulations to the finalists of the 2024 Excellence Awards!

Connect with Microsoft at RSAC

Register today for the Microsoft Security RSAC Pre-Day on May 5, 2024, from 4:00 PM PT to 6:00 PM PT. Explore our sessions, receptions, and other events, and use the opportunity to learn and connect. Stop by our booth #6044N to ask questions, enjoy conversation, or simply say hello. We look forward to seeing you at RSAC!

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.

New research, tooling, and partnerships for more secure AI and machine learning http://approjects.co.za/?big=en-us/security/blog/2023/03/02/new-research-tooling-and-partnerships-for-more-secure-ai-and-machine-learning/ Thu, 02 Mar 2023 16:00:00 +0000

Today we’re on the verge of a monumental shift in the technology landscape that will forever change the security community. AI and machine learning may embody the most consequential technology advances of our lifetime, bringing huge opportunities to build, discover, and create a better world.

Brad Smith recently pointed out that 2023 will likely mark the inflection point for AI going mainstream, the same way we think of 1995 for browsing the internet or 2007 for the smartphone revolution. And while Brad outlines some major opportunities for AI across industries, he also calls out the deep responsibility involved for those who develop these technologies. One of the biggest opportunities is also a core responsibility for us at Microsoft – building a more secure digital future. AI has the incredible potential to reshape our security landscape and protect organizations and people in ways we have been unable to do in the past.

With all of AI’s potential to empower people and organizations, it also comes with risks that the security community must address. It is imperative that we as an industry and global technology community get this journey right, and that means looking at AI with diverse perspectives, taking it slowly, working with our partners across government and industry – and sharing what we’re learning.

At Microsoft, we’ve been working on the challenges and opportunities of AI for years. Today we’re sharing some recent developments so that the community can be better informed and better equipped for a new world of AI exploration:

  • New research: A dedicated AI Security Red Team within Microsoft Threat Intelligence explored how traditional software threats affect AI and how security professionals, developers, and machine learning engineers should think about securing and monitoring AI and machine learning models. This team will continue to research and test security in AI and machine learning as we learn more as a company and as an industry.
  • New tools for defenders: Microsoft recently released an open-source automation tool for security testing of AI systems called Counterfit. The tool is designed to help organizations conduct AI security risk assessments and help ensure that the algorithms used in their businesses are robust, reliable, and trustworthy. As of today, Counterfit is part of MITRE’s new Arsenal plug-in.
  • Industry collaboration to help secure the AI supply chain: We worked with Hugging Face, one of the most popular machine learning model repositories, to mitigate threats to AI and machine learning frameworks by collaborating on an AI-specific security scanner. This tool will help the security community to better secure their software supply chain when it comes to AI and machine learning.

AI brings new capabilities – and familiar risks

AI and machine learning can provide remarkable efficiency gains for organizations and lift the burden from a workforce overwhelmed by data.

As an example, these capabilities can be particularly helpful in cybersecurity. There are more than 1,200 brute-force password attacks per second, and according to McKinsey, many organizations have more than 100 security tools in place, each with its own portal and alerting system to be checked daily. AI will change the way we defend against threats by improving our ability to protect and respond at the speed of an attack.

This is why AI is popular right now across industries: it provides a way to solve sophisticated problems that involve complex data relationships using little more than human-labeled examples of inputs and outputs. It uses the inherent advantages of computing to lift the burden of massive data and speed our path to insights and discoveries.

Diagram comparing traditional programming and the AI paradigm

But with its capabilities, AI also brings some risks that organizations may not be considering. Many businesses are pulling existing models from public AI and machine learning repositories as they work to apply AI models to their own operations. But often, either the software used to build AI systems or the AI models housed in the repositories have not been moderated. This creates the risk that anyone can put up a tampered model for consumption, which can poison any system that uses the model.

There is a misconception in the security community that attacking AI and machine learning systems involves exotic algorithms and advanced knowledge of machine learning. But while machine learning may seem like math and magic, at the core it runs on bits and bytes, and like all software, it can be vulnerable to security issues.

Within the Microsoft Threat Intelligence team, we have a group that focuses on understanding these risks. The AI Security Red Team is an interdisciplinary group of security researchers, machine learning engineers, and software engineers whose goal is to proactively identify failure points in AI systems and help remediate them. The AI Security Red Team works to see how attackers approach AI and how they might be able to compromise an AI or machine learning model, so we can understand those attacks and how to get ahead of them.

The research: Old threats take on new life with AI

Recently the AI Security Red Team investigated how easy it would be for an attacker to inject malicious code into AI and machine learning model repositories. Their central question was, how can an adversary with current-day, traditional hacking skills cause harm to AI systems? This question led us to prove that traditional software attack vectors can indeed be a threat.

The security community has long known about Python serialization threats, but not in the context of AI systems. Academic researchers have warned about the lack of security practices in machine learning software. Recently, there has been a wave of research looking at serialization threats specifically in the context of machine learning. MITRE ATLAS, the ATT&CK-style framework for adversarial machine learning, specifically calls out machine learning supply chain compromise. Even AI frameworks’ security documentation explicitly points out that machine learning model files are designed to store generic programs.

What has been less clear is how far attackers could take this, which is what the Microsoft AI Security Red Team explored. The AI Security Red Team routinely emulates a range of adversaries, from script kiddies to advanced attackers, to understand attack vectors against AI and machine learning systems. To answer our question, we assumed the role of an adversary whose goal is to compromise machine learning systems using only traditional hacking tools and methodology. In other words, our adversary knew nothing about specifically hacking AI.

Our exercise allowed us to assess the impact of poor encryption in machine learning endpoints, improperly configured machine learning workspaces and environments, and overly broad permissions in the storage accounts containing the machine learning model and training data – all of which can be thought of as traditional software threats.

The team found that these traditional software threats can be particularly impactful in the context of AI systems. We looked at two of the AI frameworks most widely used by machine learning engineers and data scientists. These frameworks provide a convenient way to write mathematical expressions to transform data into the required format before running it through an algorithm. The team was able to repurpose one such function, the Keras Lambda layer, to inject arbitrary code.
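To make the mechanism concrete, here is a minimal, benign sketch of the idea, assuming TensorFlow 2.x; a real attack would hide a malicious payload rather than a print statement, and would typically arrive inside a serialized model file rather than inline code.

```python
# A Lambda layer wraps arbitrary Python, so whatever that function does runs
# whenever the model executes. The print call stands in for attacker logic.
import tensorflow as tf

def looks_like_math(x):
    print("side effect: arbitrary code ran inside the model")
    return x * 2.0

model = tf.keras.Sequential([tf.keras.layers.Lambda(looks_like_math)])
model(tf.constant([[1.0]]))  # invoking the model triggers the embedded code
```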

The security community is aware of how Python’s pickle module, which is used for serialization and deserialization of a Python object, can be abused by adversaries. Our work, however, shows that machine learning model file formats, which may not use the pickle format, are still flexible enough to store generic programs and can be abused. This also reduces the number of steps the adversary needs to include a backdoor in a model released to the internet or a popular repository.
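The underlying pickle behavior is easy to demonstrate. Below is a standard, benign illustration of the class of problem (not the team’s exploit): unpickling runs code chosen by whoever produced the bytes.

```python
# Minimal sketch of why pickle-based model files are risky: loading executes.
import pickle

class NotAModel:
    def __reduce__(self):
        # Called during unpickling; returns a callable plus arguments that
        # pickle.loads will invoke. os.system stands in for a real payload.
        import os
        return (os.system, ("echo code ran on load",))

blob = pickle.dumps(NotAModel())
pickle.loads(blob)  # "loading the model" runs the embedded command
```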

In our proof of concept, we were able to repurpose the mathematical expression processing function to load malware. An added advantage to the adversary: the attack is self-contained and stealthy; it does not require loading extra custom code prior to loading the model itself.

New tools with Counterfit, CALDERA, and ATLAS

In security, we are constantly investing and innovating to learn about attacker behaviors and bring that human-led intelligence to our products. Our mission is to combine the diversity of thinking and experience from our threat hunters and companies we’ve integrated with (like RiskIQ and CyberX), so our customers can benefit from both hyper-scale threat intelligence as well as AI.

With our announcement today that Microsoft Counterfit is integrated into MITRE CALDERA, security professionals can now build threat profiles to probe how an adversary can attack AI systems both via traditional methods and through novel machine learning techniques.

This new tool integration brings together Microsoft Counterfit, MITRE CALDERA (the de facto tool for adversary emulation), and MITRE ATLAS to help security practitioners better understand threats to ML systems. This will enable security teams to proactively look for weaknesses in AI and machine learning models and fix them before an attacker can take advantage. Now security professionals can get a holistic and automated security assessment of their AI systems using a tool that they are already familiar with.

“With the rise in real world attacks on machine learning systems that we’ve seen through the MITRE ATLAS collaboration, it’s more important than ever to create actionable tools for security professionals to prepare for these growing threats across the globe. We are thrilled to release a new adversary emulation tool, Arsenal, in partnership with Microsoft and their Counterfit team. These open-sourced tools will enhance the ability of security professionals and ML engineers across the community to test the vulnerability of their ML models through the MITRE CALDERA tools they already know and love.”

—Doug Robbins, VP Engineering & Prototyping, MITRE

Investment and innovation with partners

In theory, once a machine learning model is embedded with malware, it can be posted in popular ML hosting repositories for anyone to download. An unsuspecting ML engineer could then download the backdoored ML model, which could lead to the adversary gaining a foothold in the organization’s environment.

To help prevent this, we worked with Hugging Face, one of the most popular ML model repositories, to mitigate such threats by collaborating on an AI-specific security scanner.

We also recommend a Software Bill of Materials (SBOM) for AI systems. We have amended the package URL (purl) specification to include Hugging Face, as well as MLflow. Software Package Data Exchange (SPDX) and CycloneDX, the leading SBOM standards that leverage the purl spec, allow tracking of ML models. Now any Azure ML, Databricks, or Hugging Face user leveraging Microsoft’s recommended SBOM will have the option to track ML models as part of supply chain security.
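For illustration, model identifiers follow the usual purl shape. The entries below are hypothetical examples of the format for the Hugging Face and MLflow purl types, not records from a real SBOM.

```
pkg:huggingface/distilbert-base-uncased@43235d6088ecd3dd5fb5ca3592b6913fd516027
pkg:mlflow/creditfraud@3?repository_url=https://adb-1234567890123456.7.azuredatabricks.net/api/2.0/mlflow
```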

Threat Intelligence in this space will continue to be a team sport, which is why we have partnered with MITRE and 11 other organizations to empower security professionals to track these novel forms of attack via the MITRE ATLAS initiative.

“Given we distribute hundreds of millions of ML models every month, corrupted artifacts can cause great harm as well as damage the trust in the open-source community. This is why we at Hugging Face actively develop tools to empower users of our platform to secure their artifacts, and greatly appreciate Microsoft’s community contributions in advancing the security of ML models.”

—Luc Georges, ML Engineer, Hugging Face

It’s imperative that we as an industry and global technology community are thoughtful and diligent in our approach to securing AI and machine learning systems. At Microsoft, this is core to our focus on AI and our security culture. Because emerging technology is exactly that – emerging – there are many unknowns, and we are constantly investing and innovating to learn about attacker behaviors and bring that human-led intelligence to our products.

The reason we invest in research, tools and industry partnerships like those we’re announcing today is so we can understand the nature of what those attacks would entail, do our best to get ahead of them, and help others in the security community do the same. There is still so much to learn about AI, and we are continuously investing across our platforms and in red-team like research to learn about this technology and to help inform how it will be integrated into our platform and products.

Recommendations and resources

The following recommendations for security professionals can help minimize the risks for AI and ML systems:

  1. Encourage ML engineers to inventory, track, and update ML models by leveraging model registries. This will help keep track of the models in an organization and their software dependencies.
  2. Apply existing security best practices to AI systems. This includes sandboxing the environment running ML models via containers and machine virtualization, network monitoring, and firewalls; we have outlined guidance to get started, and a minimal container sketch follows this list. By doing this, we treat AI assets as yet another crown jewel that security teams should protect from adversaries.
  3. Leverage MITRE ATLAS to understand threats to AI systems, and emulate them using Microsoft Counterfit via MITRE CALDERA. This will help security analysts ground their effort in a realistic, numbers-driven approach to protecting AI systems.
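As a flavor of recommendation 2, here is a minimal container sketch for isolating an untrusted model loader. The Dockerfile and the load_model.py script it copies are hypothetical; real deployments would add network policy, resource limits, and read-only mounts on top.

```dockerfile
# Minimal sketch: run an untrusted model loader as an unprivileged user in
# its own container, so a booby-trapped model can't touch the host directly.
FROM python:3.11-slim
RUN useradd --create-home runner
USER runner
WORKDIR /home/runner
COPY load_model.py .
CMD ["python", "load_model.py"]
```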

This proof of concept that we pursued is part of broader investment at Microsoft to empower the wide range of stakeholders who play an important role to securely develop and deploy AI systems:

  • For security analysts to orient themselves with threats against AI systems, Microsoft, in collaboration with MITRE, released an ATT&CK-style framework Adversarial ML Threat Matrix, complete with case studies of attacks on production machine learning systems, which has evolved into MITRE ATLAS.
  • For security professionals, Microsoft open-sourced Counterfit to help with assessing the posture of AI systems.
  • For security incident responders, we released a bug bar to systematically triage attacks on ML systems.
  • For ML engineers, we released a checklist to complete AI risk assessment.
  • For developers, we released threat modeling guidance specifically for ML systems.
  • For engineers and policymakers, Microsoft, in collaboration with Berkman Klein Center at Harvard University, released a taxonomy documenting various machine learning failure modes.
  • For the broader security community, Microsoft hosted the annual Machine Learning Evasion Competition.
  • For Azure machine learning customers, we provided guidance on enterprise security and governance.

Contributors: Ram Shankar Siva Kumar with Gary Lopez Munoz, Matthieu Maitre, Amanda Minnich, Shiven Chawla, Raja Sekhar Rao Dheekonda, Lu Zhang, Charlotte Siska, Sudipto Rakshit.

The post New research, tooling, and partnerships for more secure AI and machine learning appeared first on Microsoft Security Blog.

]]>
Join us at InfoSec Jupyterthon 2022 http://approjects.co.za/?big=en-us/security/blog/2022/11/22/join-us-at-infosec-jupyterthon-2022/ Tue, 22 Nov 2022 18:00:00 +0000

Notebooks are gaining popularity in InfoSec. Used interactively for investigations and hunting or as scheduled processing jobs, notebooks offer plenty of advantages over traditional security operations center (SOC) tools. Sitting somewhere between scripting/macros and a full-blown development environment, they offer easy entry to data analyses and visualizations that are key to modern SOC engagements.
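As a flavor of that easy entry, here is a minimal notebook-style sketch in Python, assuming a hypothetical signins.csv export with Timestamp, Account, SourceIp, and Result columns; any SOC log source with similar fields would work the same way.

```python
# Minimal notebook-style triage: surface the source IPs failing against the
# most distinct accounts. Column names match the hypothetical CSV above.
import pandas as pd

df = pd.read_csv("signins.csv", parse_dates=["Timestamp"])
failures = df[df["Result"] == "Failure"]
per_ip = (failures.groupby("SourceIp")["Account"]
                  .nunique()
                  .sort_values(ascending=False))
per_ip.head(10)  # in a notebook, the final expression renders inline
```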

Join our community of analysts and engineers at the third annual InfoSec Jupyterthon 2022, where you’ll meet and engage with security practitioners using notebooks in their daily work. This is an online event taking place on December 2 and 3, 2022. It is organized by our friends at Open Threat Research, together with folks from Microsoft Security research teams and the Microsoft Threat Intelligence Center (MSTIC).


Although this is not a Microsoft event, our Microsoft Security teams are delighted to be involved in helping organize it and deliver talks. Registration is free and it will be streamed on YouTube Live both days from 10:30 AM to 5:00 PM Eastern Time. We’ll also have a dedicated Discord channel for discussions and session Q&A.

Do you have a cool notebook or some interesting techniques or technology to talk about? There are still openings for talks and mini talks (30-minute, 15-minute, and 5-minute sessions). 

For more information, visit the InfoSec Jupyterthon page at: https://infosecjupyterthon.com

We’re looking forward to seeing you there!

Collaborative innovation on display in Microsoft’s insider risk management strategy http://approjects.co.za/?big=en-us/security/blog/2020/12/17/collaborative-innovation-on-display-in-microsofts-insider-risk-management-strategy/ Thu, 17 Dec 2020 22:00:04 +0000

The disrupted work environment, in which enterprises were forced to find new ways to enable their workforce to work remotely, changed the landscape for operations as well as security. One of the top areas of concern is managing insider risks, a complex undertaking even before the pandemic, and even more so in the new remote or hybrid work environment.

Because its scope goes beyond security, insider risk management necessitates diverse perspectives and thus inherently requires collaboration among key stakeholders in the organization. At Microsoft, our insider risk management strategy was built on insights from legal, privacy, and HR teams, as well as security experts and data scientists, who use AI and machine learning to sift through massive amounts of signals to identify possible insider risks.

It was also important for us to extend this collaboration beyond Microsoft. For example, for the past few years, Microsoft has partnered with Carnegie Mellon University to bring in their expertise and experience in insider risks and provide insights about the nature of the broader landscape. (Read: Using Endpoint Signals for Insider Threat Detection [PDF].)

Our partnership with Carnegie Mellon University has helped shape our mindset and influenced our Insider Risk Management product, a Microsoft 365 solution that enables organizations to leverage machine learning to detect, investigate, and act on malicious and unintentional activities. Partnering with organizations like Carnegie Mellon University allows us to bring their rich research and insights to our products and services, so customers can fully benefit from our breadth of signals.

This research partnership with Carnegie Mellon University experiments with innovative ways to identify indicators of insider risk. The outputs of these experiments become inputs to our research-informed product roadmap. For example, our data scientists and researchers have been looking into using threat data from Microsoft 365 Defender to gain insights that can be used for managing insider risks. Today, we’d like to share our progress on this research in the form of Microsoft 365 Defender advanced hunting queries, now available in a GitHub repo:

  1. Detecting exfiltration to competitor organization: This query helps enterprises detect instances of a malicious insider creating a file archive and then emailing that archive to an external “competitor” organization. Effective query use requires prior knowledge of email addresses that may pose a risk to the organization if data is sent to those addresses (a sketch of this pattern appears after this list).
  2. Detecting exfiltration after termination: This query explores instances in which a terminated individual (that is, one who has an impending termination date but has not yet left the company) downloads many files from a non-domain network address.
  3. Detecting steganography exfiltration: This query detects instances of malicious users who attempt to create steganographic images and then immediately browse to a webmail URL. It requires additional investigation to determine indication of a malicious event through the co-occurrence of a) generating a steganographic image and b) browsing to a webmail URL.
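To give a flavor of the approach, here is a minimal sketch in the spirit of the first query, not the repository’s exact code. It assumes the EmailEvents and EmailAttachmentInfo advanced hunting tables in Microsoft 365 Defender; the watched domain is a placeholder for addresses you already consider risky.

```kusto
// Illustrative sketch: mail to watched external domains carrying an archive.
// Replace the placeholder domain with known-risk addresses for your org.
let WatchedDomains = dynamic(["competitor.example.com"]);
EmailEvents
| where RecipientEmailAddress has_any (WatchedDomains)
| join kind=inner (
    EmailAttachmentInfo
    | where FileName endswith ".zip" or FileName endswith ".7z"
  ) on NetworkMessageId
| project Timestamp, SenderFromAddress, RecipientEmailAddress, FileName
```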

As these queries demonstrate, industry partnerships allow us to enrich our own intelligence with other organizations’ depth of knowledge, helping us address some of the bigger challenges of insider risks through the product, while bringing scientifically proven solutions to our customers more quickly through this open-source library.

Microsoft will continue investing in partnerships like the one with Carnegie Mellon University to learn from experts and deliver best-in-class intelligence to our customers. Follow our insider risk podcast and join us on our Insider Risk Management journey!

The post Collaborative innovation on display in Microsoft’s insider risk management strategy appeared first on Microsoft Security Blog.

]]>
TLS version enforcement capabilities now available per certificate binding on Windows Server 2019 http://approjects.co.za/?big=en-us/security/blog/2019/09/30/tls-version-enforcement-capabilities-now-available-certificate-binding-windows-server-2019/ Mon, 30 Sep 2019 16:00:00 +0000 Microsoft is pleased to announce a powerful new feature in Windows to make your transition to a TLS 1.2+ world easier.

The post TLS version enforcement capabilities now available per certificate binding on Windows Server 2019 appeared first on Microsoft Security Blog.

]]>
At Microsoft, we often develop new security features to meet the specific needs of our own products and online services. This is a story about how we solved a very important problem and are sharing the solution with customers. As engineers worldwide work to eliminate their own dependencies on TLS 1.0, they run into the complex challenge of balancing their own security needs with the migration readiness of their customers. Microsoft faced this as well.

To date, we’ve helped customers address these issues by adding TLS 1.2 support to older operating systems, shipping new logging formats in IIS for detecting weak TLS usage by clients, and providing the latest technical guidance for eliminating TLS 1.0 dependencies.

Now Microsoft is pleased to announce a powerful new feature in Windows to make your transition to a TLS 1.2+ world easier. Beginning with KB4490481, Windows Server 2019 now allows you to block weak TLS versions from being used with individual certificates you designate. We call this feature “Disable Legacy TLS” and it effectively enforces a TLS version and cipher suite floor on any certificate you select.

Disable Legacy TLS also allows an online or on-premises web service to offer two distinct groupings of endpoints on the same hardware: one that allows only TLS 1.2+ traffic and another that accommodates legacy TLS 1.0 traffic. The changes are implemented in HTTP.sys and, in conjunction with the issuance of additional certificates, allow traffic to be routed to the new endpoint with the appropriate TLS version. Prior to this change, deploying such capabilities would have required an additional hardware investment, because these settings were only configurable system-wide via the registry.
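Once the feature is enabled on a binding, you can verify the enforcement from any client by probing the endpoint with each TLS version pinned. Below is a minimal Python sketch under stated assumptions: the host name is hypothetical, and some modern OpenSSL builds refuse to negotiate TLS 1.0 on the client side regardless of what the server allows.

```python
import socket
import ssl

HOST, PORT = "legacy-free.contoso.com", 443  # hypothetical endpoint with Disable Legacy TLS set

for version in (ssl.TLSVersion.TLSv1, ssl.TLSVersion.TLSv1_2):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE      # probe only; never disable verification in production
    ctx.minimum_version = version        # pin the handshake to exactly one protocol version
    ctx.maximum_version = version
    try:
        with socket.create_connection((HOST, PORT), timeout=5) as sock:
            with ctx.wrap_socket(sock, server_hostname=HOST) as tls:
                print(f"{version.name}: accepted ({tls.version()})")
    except (ssl.SSLError, OSError) as err:
        print(f"{version.name}: rejected ({err})")
```

Against a binding with Disable Legacy TLS set, the TLS 1.0 probe should fail while the TLS 1.2 probe succeeds.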

For a deep dive into this important new feature, including implementation details and scenarios, please see Technical Guidance for Disabling Legacy TLS. Microsoft will also look to make this feature available in its own online services based on customer demand.

Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post TLS version enforcement capabilities now available per certificate binding on Windows Server 2019 appeared first on Microsoft Security Blog.

]]>
Deep learning rises: New methods for detecting malicious PowerShell http://approjects.co.za/?big=en-us/security/blog/2019/09/03/deep-learning-rises-new-methods-for-detecting-malicious-powershell/ Tue, 03 Sep 2019 16:00:03 +0000 http://approjects.co.za/?big=en-us/security/blog//?p=89808 We adopted a deep learning technique that was initially developed for natural language processing and applied to expand Microsoft Defender ATP's coverage of detecting malicious PowerShell scripts, which continue to be a critical attack vector.

The post Deep learning rises: New methods for detecting malicious PowerShell appeared first on Microsoft Security Blog.

]]>
Scientific and technological advancements in deep learning, a category of algorithms within the larger framework of machine learning, provide new opportunities for the development of state-of-the-art protection technologies. Deep learning methods impressively outperform traditional methods on tasks such as image and text classification. With these developments, there’s great potential for building novel threat detection methods using deep learning.

Machine learning algorithms work with numbers, so objects like images, documents, or emails are converted into numerical form through a step called feature engineering, which, in traditional machine learning methods, requires a significant amount of human effort. With deep learning, algorithms can operate on relatively raw data and extract features without human intervention.

At Microsoft, we make significant investments in pioneering machine learning that inform our security solutions with actionable knowledge through data, helping deliver intelligent, accurate, and real-time protection against a wide range of threats. In this blog, we present an example of a deep learning technique that was initially developed for natural language processing (NLP) and has now been adopted and applied to expand our coverage of detecting malicious PowerShell scripts, which continue to be a critical attack vector. These deep learning-based detections add to the industry-leading endpoint detection and response capabilities in Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP).

Word embedding in natural language processing

Keeping in mind that our goal is to classify PowerShell scripts, we briefly look at how text classification is approached in the domain of natural language processing. An important step is to convert words to vectors (tuples of numbers) that can be consumed by machine learning algorithms. A basic approach, known as one-hot encoding, first assigns a unique integer to each word in the vocabulary, then represents each word as a vector of 0s, with a 1 at the integer index corresponding to that word. Although useful in many cases, one-hot encoding has significant flaws. A major issue is that all words are equidistant from each other, so semantic relations between words are not reflected in geometric relations between the corresponding vectors.
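To make the equidistance flaw concrete, here is a tiny sketch with an invented four-word vocabulary; every pair of distinct one-hot vectors is exactly the same distance apart:

```python
import numpy as np

vocab = ["invoke", "download", "bypass", "hello"]
one_hot = np.eye(len(vocab))  # row i is the one-hot vector for vocab[i]

# Distances between any two distinct words are identical (sqrt(2) ≈ 1.414),
# so the encoding carries no information about semantic similarity.
print(np.linalg.norm(one_hot[0] - one_hot[1]))  # invoke vs. download
print(np.linalg.norm(one_hot[0] - one_hot[3]))  # invoke vs. hello
```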

Contextual embedding is a more recent approach that overcomes these limitations by learning compact representations of words from data under the assumption that words that frequently appear in similar context tend to bear similar meaning. The embedding is trained on large textual datasets like Wikipedia. The Word2vec algorithm, an implementation of this technique, is famous not only for translating semantic similarity of words to geometric similarity of vectors, but also for preserving polarity relations between words. For example, in Word2vec representation:

Madrid – Spain + Italy ≈ Rome
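This analogy can be checked directly with the gensim library and the publicly available GoogleNews vectors (a multi-gigabyte download on first use); Rome should appear at or near the top of the results:

```python
import gensim.downloader as api

# Loads pretrained Word2vec vectors trained on Google News via gensim's downloader.
wv = api.load("word2vec-google-news-300")

# Madrid – Spain + Italy: negative terms are subtracted, positive terms added.
print(wv.most_similar(positive=["Madrid", "Italy"], negative=["Spain"], topn=3))
```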

Embedding of PowerShell scripts

Since training a good embedding requires a significant amount of data, we used a large and diverse corpus of 386K distinct unlabeled PowerShell scripts. The Word2vec algorithm, which is typically used with human languages, provides similarly meaningful results when applied to the PowerShell language. To accomplish this, we split the PowerShell scripts into tokens, which then allowed us to use the Word2vec algorithm to assign a vectorial representation to each token.
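As a rough sketch of this step, the snippet below trains a gensim Word2vec model on a toy corpus of pre-tokenized scripts; the real corpus is the 386K scripts mentioned above, and the tokenization and hyperparameters here are illustrative assumptions:

```python
from gensim.models import Word2Vec

# One list of tokens per PowerShell script, as produced by a custom tokenizer (toy examples).
token_lists = [
    ["Invoke-Expression", "(", "New-Object", "Net.WebClient", ")", ".DownloadString"],
    ["Set-ExecutionPolicy", "bypass", "-Scope", "Process"],
    ["Get-ChildItem", "-Path", "C:\\", "-Recurse"],
]

model = Word2Vec(sentences=token_lists, vector_size=100, window=5, min_count=1, sg=1)
print(model.wv["bypass"][:5])  # first few components of one token's vector
```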

Figure 1 shows a 2-dimensional visualization of the vector representations of 5,000 randomly selected tokens, with some tokens of interest highlighted. Note how semantically similar tokens are placed near each other. For example, the vectors representing -eq, -ne and -gt, which in PowerShell are aliases for “equal”, “not-equal” and “greater-than”, respectively, are clustered together. Similarly, the vectors representing the allSigned, remoteSigned, bypass, and unrestricted tokens, all of which are valid values for the execution policy setting in PowerShell, are clustered together.

Figure 1. 2D visualization of 5,000 tokens using Word2vec

Examining the vector representations of the tokens, we found a few additional interesting relationships.

Token similarity: Using the Word2vec representation of tokens, we can identify commands in PowerShell that have an alias. In many cases, the token closest to a given command is its alias. For example, the representations of the token Invoke-Expression and its alias IEX are closest to each other. Two additional examples of this phenomenon are Invoke-WebRequest and its alias IWR, and the Get-ChildItem command and its alias GCI.

We also measured distances within sets of several tokens. Consider, for example, the four tokens $i, $j, $k and $true (see the right side of Figure 2). The first three are usually used to represent numeric variables, while the last naturally represents a Boolean constant. As expected, the $true token stood apart from the others: it was the farthest (by Euclidean distance) from the center of mass of the group.

More specific to the semantics of PowerShell in cybersecurity, we checked the representations of the tokens bypass, normal, minimized, maximized, and hidden (see the left side of Figure 2). While the first token is a legal value for the ExecutionPolicy flag in PowerShell, the rest are legal values for the WindowStyle flag. As expected, the vector representation of bypass was the farthest from the center of mass of the vectors representing the other four tokens.

Figure 2. 3D visualization of selected tokens
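The “farthest from the center of mass” test takes only a few lines of numpy. The vectors below are random stand-ins; with the trained model they would come from model.wv[token]:

```python
import numpy as np

# Stand-in vectors for illustration; real ones would be the learned Word2vec embeddings.
rng = np.random.default_rng(0)
vectors = {tok: rng.normal(size=100) for tok in ["$i", "$j", "$k", "$true"]}

center = np.mean(list(vectors.values()), axis=0)
distances = {tok: float(np.linalg.norm(v - center)) for tok, v in vectors.items()}
print(max(distances, key=distances.get))  # with real embeddings, "$true" is the outlier
```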

Linear relationships: Since Word2vec preserves linear relationships, computing linear combinations of the vectorial representations yields semantically meaningful results. Below are a few interesting relationships we found:

high – $false + $true ≈ low
-eq – $false + $true ≈ -neq
DownloadFile – $destfile + $str ≈ DownloadString
Export-CSV – $csv + $html ≈ ConvertTo-html
Get-Process – $processes + $services ≈ Get-Service

In each of the above expressions, the sign ≈ signifies that the vector on the right side is the closest (among all the vectors representing tokens in the vocabulary) to the vector that is the result of the computation on the left side.

Detection of malicious PowerShell scripts with deep learning

We used the Word2vec embedding of the PowerShell language presented in the previous section to train deep learning models capable of detecting malicious PowerShell scripts. The classification model is trained and validated using a large dataset of PowerShell scripts that are labeled “clean” or “malicious,” while the embeddings are trained on unlabeled data. The flow is presented in Figure 3.

Figure 3. High-level overview of our model generation process

Using GPU computing in Microsoft Azure, we experimented with a variety of deep learning and traditional ML models. The best performing deep learning model increases the coverage (for a fixed low FP rate of 0.1%) by 22 percentage points compared to traditional ML models. This model, presented in Figure 4, combines several deep learning building blocks such as Convolutional Neural Networks (CNNs) and Long Short-Term Memory Recurrent Neural Networks (LSTM-RNN). Neural networks are ML algorithms inspired by biological neural systems like the human brain. In addition to the pretrained embedding described here, the model is provided with character-level embedding of the script.

Figure 4. Network architecture of the best performing model
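For readers who want a feel for how these building blocks fit together, here is a minimal Keras sketch combining a token embedding, a CNN, and an LSTM. The layer sizes are illustrative assumptions, and the actual model additionally consumes a character-level embedding of the script:

```python
import tensorflow as tf
from tensorflow.keras import layers

vocab_size, embed_dim, max_tokens = 50_000, 100, 2_000  # assumed sizes

inputs = tf.keras.Input(shape=(max_tokens,), dtype="int32")
x = layers.Embedding(vocab_size, embed_dim)(inputs)  # seeded from the pretrained Word2vec in practice
x = layers.Conv1D(128, kernel_size=5, activation="relu")(x)
x = layers.MaxPooling1D(pool_size=4)(x)
x = layers.LSTM(64)(x)
outputs = layers.Dense(1, activation="sigmoid")(x)   # probability the script is malicious

model = tf.keras.Model(inputs, outputs)
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])
model.summary()
```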

Real-world application of deep learning to detecting malicious PowerShell

The best performing deep learning model is applied at scale to the PowerShell scripts observed by Microsoft Defender ATP through the Antimalware Scan Interface (AMSI), using Microsoft ML.NET technology and the ONNX format for deep neural networks. This model augments the suite of ML models and heuristics used by Microsoft Defender ATP to protect against malicious usage of scripting languages.
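In Python, scoring an exported ONNX model looks roughly like the sketch below; our production path uses ML.NET, and the file name, input shape, and single-output assumption here are hypothetical:

```python
import numpy as np
import onnxruntime as ort

# Hypothetical exported model; assumes one int64 token-id input and one output.
session = ort.InferenceSession("powershell_classifier.onnx")
input_name = session.get_inputs()[0].name

token_ids = np.zeros((1, 2_000), dtype=np.int64)  # placeholder for an encoded script
(scores,) = session.run(None, {input_name: token_ids})
print("malicious probability:", scores)
```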

Since its first deployment, this deep learning model has detected many cases of malicious and red team PowerShell activity with high precision, including some undiscovered by other methods. The signal obtained through PowerShell is combined with the wide range of ML models and signals in Microsoft Defender ATP to detect cyberattacks.

The following are examples of malicious PowerShell scripts that deep learning can confidently detect but can be challenging for other detection methods:

Figure 5. Heavily obfuscated malicious script

Figure 6. Obfuscated script that downloads and runs payload

Figure 7. Script that decrypts and executes malicious code

Enhancing Microsoft Defender ATP with deep learning

Deep learning methods significantly improve the detection of threats. In this blog, we discussed a concrete application of deep learning to a particularly evasive class of threats: malicious PowerShell scripts. We have developed, and will continue to develop, deep learning-based protections across multiple capabilities in Microsoft Defender ATP.

Development and productization of deep learning systems for cyber defense require large volumes of data, computations, resources, and engineering effort. Microsoft Defender ATP combines data collected from millions of endpoints with Microsoft computational resources and algorithms to provide industry-leading protection against attacks.

Stronger detection of malicious PowerShell scripts and other threats on endpoints using deep learning means richer and better-informed security through Microsoft Threat Protection, which provides comprehensive security for identities, endpoints, email and data, apps, and infrastructure.

 

Shay Kels and Amir Rubin
Microsoft Defender ATP team

 


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Read all Microsoft security intelligence blog posts.

Follow us on Twitter @MsftSecIntel.

The post Deep learning rises: New methods for detecting malicious PowerShell appeared first on Microsoft Security Blog.

]]>
From unstructured data to actionable intelligence: Using machine learning for threat intelligence http://approjects.co.za/?big=en-us/security/blog/2019/08/08/from-unstructured-data-to-actionable-intelligence-using-machine-learning-for-threat-intelligence/ Thu, 08 Aug 2019 16:30:12 +0000 Machine learning and natural language processing can automate the processing of unstructured text for insightful, actionable threat intelligence.

The post From unstructured data to actionable intelligence: Using machine learning for threat intelligence appeared first on Microsoft Security Blog.

]]>
The security community has become proficient in using indicators of compromise (IoC) feeds for threat intelligence. Automated feeds have simplified the task of extracting and sharing IoCs. However, IoCs like IP addresses, domain names, and file hashes sit at the lowest levels of the threat intelligence pyramid; they are relatively easy to access and consume, but they’re also easy for attackers to change to evade detection. IoCs are not enough.

Tactics, techniques, and procedures (TTPs) can enable organizations to extract valuable insights like patterns of attack on an enterprise or industry vertical, or trends of attacker techniques in the overall ecosystem. However, TTPs sit at the highest level of the threat intelligence pyramid; this information often comes in the form of unstructured text like blogs, research papers, and incident response (IR) reports, and the process of gathering and sharing these high-level indicators has remained largely manual.

Automating the processing of unstructured text for threat intelligence can benefit threat analysts and customers alike. At my Black Hat session “Death to the IOC: What’s Next in Threat Intelligence,” I presented a system that automates this process using machine learning and natural language processing (NLP) to identify and extract high-level patterns of attack from unstructured text.

Figure 1. Basic structure of system

Trained on documentation of known threats, this system takes unstructured text as input and extracts threat actors, attack techniques, malware families, and relationships to create attacker graphs and timelines.

Data extraction and machine learning

In natural language processing, named entity extraction is a task that aims to classify phrases into predefined categories. This is usually a preprocessing step for other, more complex tasks like identifying aliases and extracting relationships between actors and TTPs. In our use case, the categories we want to identify are threat actors, malware families, attack techniques, and relationships between entities.

To train our model, we assembled a corpus of about 2,700 publicly available documents that describe the actions, behaviors, and tools of various threat actors. On average, each document in this corpus contained about two thousand tokens.

Figure 2. Training data distributions

We also see that the proportion of tokens that fall into one of our predefined categories is very low: on average, only 1% of the tokens are relevant entities. This tells us that we have class imbalance in our data.

Therefore, in addition to using traditional features that are common to natural language processing tasks (for example, lemma, part of speech, orthographic features), we experimented with using custom word embeddings, which allow the identification of relationships between two words that mean the same thing or are used in similar contexts.

Word embeddings are vector representations of words such that the semantic context in which a word appears is captured in the numeric vector. If two words mean the same thing, or are used in the same context frequently, then we would expect the cosine similarity of their word embedding vectors to be high. In other words, in a graphical representation, datapoints for words that mean the same thing or are used in the same context frequently would be relatively close together.
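Cosine similarity itself is a one-line computation over the embedding vectors; the toy vectors below are only for illustration:

```python
import numpy as np

def cosine_similarity(u: np.ndarray, v: np.ndarray) -> float:
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

# a and b point in similar directions; c does not.
a = np.array([1.0, 2.0, 3.0])
b = np.array([1.1, 1.9, 3.2])
c = np.array([-3.0, 0.5, -1.0])
print(cosine_similarity(a, b))  # close to 1.0
print(cosine_similarity(a, c))  # much lower
```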

For example, we looked at some clusters of points formed around APT28 and found that the four closest points to it were either aliases (Sofacy, TG-4127) of the threat or were related by attribution (APT29, Dymalloy).

Figure 3. Tensorboard visualization of custom trained embeddings

We experimented with several models suited for a sequence labeling problem and measured performance in two ways: on the full test dataset and on only the unseen tokens in the test dataset. We found that models using conditional random fields (CRFs) trained on traditional and word embedding features performed best in both scenarios.

Figure 4. Architecture of training pipeline for extractor system
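As a hedged sketch of the best performing configuration, the snippet below trains a CRF with the sklearn-crfsuite package on one toy labeled sentence. The label scheme and features are invented for illustration; the real feature set also includes lemma, part of speech, orthographic features, and embedding components:

```python
import sklearn_crfsuite

def token_features(tokens, i):
    """A few traditional features; embedding components can be added as numeric values."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_title": tok.istitle(),
        "is_upper": tok.isupper(),
        "prev": tokens[i - 1].lower() if i > 0 else "<s>",
        "next": tokens[i + 1].lower() if i < len(tokens) - 1 else "</s>",
    }

sentences = [["APT28", "used", "spear-phishing", "against", "targets"]]
labels = [["B-ACTOR", "O", "B-TECHNIQUE", "O", "O"]]  # invented tagging scheme

X = [[token_features(s, i) for i in range(len(s))] for s in sentences]
crf = sklearn_crfsuite.CRF(algorithm="lbfgs", max_iterations=100)
crf.fit(X, labels)
print(crf.predict(X))
```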

Machine learning for insightful, actionable intelligence

Using the system we developed, we automatically extracted the techniques known to be used by Emotet, a prominent commodity malware family, as well as by a spread of APT actors that public documents refer to as Saffron Rose, Snake, and Muddy Water. We then generated the following graph, which shows that there is significant overlap between some techniques used by commodity malware and those used by APTs.

Figure 5. Overlaps in techniques used by commodity malware and APTs

In this graph, we can see that techniques like obfuscated PowerShell, spear-phishing, and process hollowing are not restricted to APTs but are prevalent in commodity malware. Insights like this can be used by organizations to guide security investments. Organizations can place defensive choke points to detect or prevent these attacker techniques so that they can stop not only annoying commodity malware, but also high-profile targeted attacks.
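Once the extractor’s actor-to-technique relationships are loaded into a graph, finding the shared techniques is straightforward. This sketch uses networkx with a toy edge list drawn from the names mentioned above:

```python
import networkx as nx

# Actor-to-technique edges as the extractor might emit them (toy subset).
edges = [
    ("Emotet", "obfuscated PowerShell"), ("Emotet", "spear-phishing"),
    ("Emotet", "process hollowing"), ("Saffron Rose", "spear-phishing"),
    ("Snake", "obfuscated PowerShell"), ("Muddy Water", "process hollowing"),
]
G = nx.Graph(edges)

techniques = {t for _, t in edges}
shared = sorted(t for t in techniques if G.degree(t) > 1)
print("techniques used by more than one actor:", shared)
```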

At Microsoft, we are continuing to push the boundaries on how machine learning can improve the security posture of our customers. The output of machine learning-backed threat intelligence will show up in the effectiveness of the protection we deliver through Microsoft Defender Advanced Threat Protection (Microsoft Defender ATP) and the broader Microsoft Threat Protection.

In recent months, we have extensively discussed how we’re using machine learning to continuously innovate protections in Microsoft Defender ATP, particularly in hardening against evasion and adversarial attacks. In this blog we showed another application of machine learning: processing the vast amounts of threat intelligence that organizations receive and identifying high-level patterns. More importantly, we’re sharing our approaches so organizations can be inspired to explore more applications of machine learning to improve overall security.

 

Bhavna Soman (@bsoman3)
Microsoft Defender ATP Research

 

 


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft Defender ATP community.

Read all Microsoft security intelligence blog posts.

Follow us on Twitter @MsftSecIntel.

The post From unstructured data to actionable intelligence: Using machine learning for threat intelligence appeared first on Microsoft Security Blog.

]]>
DART: the Microsoft cybersecurity team we hope you never meet http://approjects.co.za/?big=en-us/security/blog/2019/03/25/dart-the-microsoft-cybersecurity-team-we-hope-you-never-meet/ Tue, 26 Mar 2019 00:12:11 +0000 http://approjects.co.za/?big=en-us/security/blog//?p=89193 Meet Microsoft’s Detection and Response Team (DART) and read their advice that may help you avoid working with them in future.

The post DART: the Microsoft cybersecurity team we hope you never meet appeared first on Microsoft Security Blog.

]]>
If you spent 270 days away from home, not on vacation, you’d want it to be for a good reason. When we board a plane, sometimes having been pulled out of bed to leave family for weeks on end, it’s because one of our customers is in need. It means there is a security compromise, and they may be dealing with a live cyberattack.

As the Microsoft Detection and Response Team (DART), our job is to respond to compromises and help our customers become cyber-resilient. This is our team’s mission, and one we take very seriously. It’s why we are passionate about what we do for our customers.

Our unique focus within the Microsoft Cybersecurity Solutions Group allows DART to provide onsite reactive incident response and remote proactive investigations. DART leverages Microsoft’s strategic partnerships with security organizations around the world and with internal Microsoft product groups to provide the most complete and thorough investigation possible. Our response expertise has been leveraged by government and commercial entities around the world to help secure their most sensitive, critical environments.

How DART works with Microsoft customers

Our team works with customers globally to identify risks and provide reactive incident response and proactive security investigation services to help our customers manage their cyber-risk, especially in today’s dynamic threat environment.

In one recent example, our experts were called in to help several financial services organizations deal with attacks launched by an advanced threat actor group that had gained administrative access and executed fraudulent transactions, transferring large sums of cash into foreign bank accounts.

When the attackers realized they had been detected, they rapidly deployed destructive malware that crippled the customers’ operations for three weeks. Our team was on site within hours, working around the clock, side-by-side with the customers’ security teams to restore normal business operations.

Incidents like these are a reminder that trust remains one of the most valuable assets in cybersecurity and the role of technology is to empower defenders to stay a step ahead of well-funded and well-organized adversaries.

Overlooking a single security threat can create a serious event that severely erodes community and consumer confidence, tarnishes reputation and brand, negatively impacts corporate valuations, hands competitors an advantage, and invites unwanted scrutiny.

That’s why our DART team also offers the Security Crisis and Response Exercise: a hands-on, two-day, customized interactive experience on understanding security crisis situations and how to respond in the event of a cybersecurity incident. We examine our customers’ security posture and implement proactive readiness training, with the objective of helping customers prepare for incident response through practice exercises.

The simulation is based on real-life scenarios from recent cybersecurity incident response engagements. The exercise focuses on topics such as ransomware, Office 365 compromises, and compromises via industry-specific malware with complex backdoor software. Each scenario focuses on the key areas of cybersecurity (Identify, Protect, Detect, Respond, and Recover) and covers a broad ecosystem, including supply chain vulnerabilities such as software vendors, IT service vendors, and hardware vendors.

DART basic recommendations

To help you become more cyber-resilient, below are a few recommendations from our team based on our experiences of what customers can be doing now to help harden their security posture.

Standardize—The cost of security increases as the complexity of the environment increases. To reduce the total cost of ownership (TCO), standardization is key. It also reduces the number of secure configurations the organization must maintain.

  • Domain controllers should be nearly identical to each other in both the operating system (OS) level and the apps running on them.
  • Member server groups should be standardized based on other similar or same functions.
    • File servers on the same OS with the same apps.
    • SQL servers on the same OS with the same apps.
    • Exchange servers on the same OS with the same apps.
  • Reduce the number of disjoined security products.
    • It is not possible to manage the security of an enterprise from 15 different security consoles that are not integrated.
    • Find a partner that covers multiple layers of security with integrated products.

Modernize—Consider this analogy: In WWII, the battleship was a fearsome ship bristling with guns, big and small, and built to take a hit. Today, a single missile cruiser could sink an entire fleet of WWII battleships. Technology evolves quickly. If you put off modernizing your environment, you could be missing critical technologies that protect your organization.

  • Accelerate adoption plans for Windows Server 2016 and Windows 10.
    • Start with Domain Controllers and workstations of admins/VIPs.
    • Follow on with line of business (LOB) member servers and easy win upgrades like file servers.
    • Finalize with all other member servers and workstations.
  • Accelerate cloud adoption plans, while understanding the shared-risk model between you and your cloud vendors and the retained risk you must continue to manage.
  • Evaluate security tools based on their ability to succeed in the modern threat landscape. Cloud-enabled security solutions need to base capability on four key pillars:
    • Endpoint telemetry—Windows, Android, iOS, Linux, etc. are the initial points from which data is collected.
    • Compute—Datacenter power. This is the compute power needed to organize all the endpoint telemetry.
    • Machine learning and artificial intelligence (AI)—Once we have all this endpoint telemetry organized, we use machine learning and AI to make sense of it.
    • Threat intelligence—Generated from the combination of the three previously mentioned pillars plus a human interaction and feedback loop (the DART team), threat intelligence makes this data actionable and helps product groups course-correct the machine learning and AI algorithms when needed.

Develop a comprehensive patching strategy

  • Update both Microsoft and all third-party apps.
  • Employ a software inventory solution like System Center Configuration Manager (SCCM).
  • Reboot after patching.
  • Where possible, avoid granting business units policy exceptions that exempt them from patching.
    • Short term: Enforce vulnerable machine/application isolation.
    • Long term: Adjust the acquisitions process to include a new vendor for the needed functionality.

Develop a comprehensive backup strategy

  • Always have a backup policy in place.
  • Test to ensure backups work.
  • Check to see if successful backups are online. If so, ensure they are not vulnerable to online threats.

Credential hygiene

  • Most modern attacks are identity based.
  • Read the Pass-the-Hash white papers, which explain the exposure of privileged credentials on lower-trusted-tier systems.
  • Run a Security Development Lifecycle (SDL) review on internally developed apps to look for vulnerabilities and/or hard-coded credentials.
  • Look for privileged accounts that are being used as service accounts.
    • At the very least, change them manually on a regular basis.
    • If you upgrade to Windows Server 2012 R2 or higher, you can use managed service accounts (MSAs) where supported.

As the DART team, we have engaged with the most well-run IT environments in the world. Yet, even these networks get penetrated from time to time. The challenge of cybersecurity is one we must face together. While we hope you never have to call on our DART team, we are a trusted partner ready to help.

Learn more

To learn more about DART, our engagements, and how they are delivered by experienced cybersecurity professionals who devote 100 percent of their time to providing cybersecurity solutions to customers worldwide, please contact your account executive. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post DART: the Microsoft cybersecurity team we hope you never meet appeared first on Microsoft Security Blog.

]]>
Microsoft AI competition explores the next evolution of predictive technologies in security http://approjects.co.za/?big=en-us/security/blog/2018/12/13/microsoft-ai-competition-explores-the-next-evolution-of-predictive-technologies-in-security/ Thu, 13 Dec 2018 19:00:54 +0000 https://cloudblogs.microsoft.com/microsoftsecure/?p=87244 Predictive technologies are already effective at detecting and blocking malware at first sight. A new malware prediction competition on Kaggle will challenge the data science community to push these technologies even further—to stop malware before it is even seen.

The post Microsoft AI competition explores the next evolution of predictive technologies in security appeared first on Microsoft Security Blog.

]]>
Predictive technologies are already effective at detecting and blocking malware at first sight. A new malware prediction competition on Kaggle will challenge the data science community to push these technologies even further—to stop malware before it is even seen.

The Microsoft-sponsored competition calls for participants to predict if a device is likely to encounter malware given the current machine state. Participants will build models using 9.4GB of anonymized data from 16.8M devices, and the resulting models will be scored by their ability to make correct predictions. Winning teams get $25,000 in total prizes.

The competition provides academics and researchers with varied backgrounds a new opportunity to work on a real-world problem using a fresh set of data from Microsoft. Results from the contest will help us identify opportunities to further improve Microsoft’s layered defenses, focusing on preventative protection. Not all machines are equally likely to get malware; competitors will help build models for identifying devices that have a higher risk of getting malware so that preemptive action can be taken.
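For intuition only (this is not competition code), a first baseline for this kind of tabular prediction task might look like the sketch below. The file name and label column are assumptions based on the competition page, and only numeric columns are used for brevity:

```python
import pandas as pd
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv", nrows=500_000)  # sample rows; the full file is ~9.4 GB
y = df["HasDetections"]                       # assumed binary label column
X = df.drop(columns=["HasDetections"]).select_dtypes(include="number")

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2, random_state=0)
clf = HistGradientBoostingClassifier()        # tolerates missing values natively
clf.fit(X_tr, y_tr)
print("validation AUC:", roc_auc_score(y_va, clf.predict_proba(X_va)[:, 1]))
```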

Cybersecurity is the central challenge of our digital age. Today, Windows Defender Advanced Threat Protection (Windows Defender ATP) uses intelligent systems to protect millions of devices against cyberattacks every day. Machine learning and artificial intelligence drive cloud-delivered protections that catch and predict new and emerging threats.

We also believe in the power of working with the broader research community to stay ahead of threats. Microsoft’s 2015 malware classification competition on Kaggle was a huge success, with the dataset provided by Microsoft cited in more than 50 research papers in multiple languages. To this day, the 0.5TB dataset from that competition is still used for research and continues to produce value for Microsoft and the data science community. This new competition is organized by the Windows Defender ATP Research team, in cooperation with Northeastern University and Georgia Institute of Technology as academic partners, with the goal of bringing new ideas to the fight against malware attacks and breaches.

Kaggle is a platform for data scientists to create data science projects, download datasets, and participate in contests. Microsoft is happy to use the Kaggle platform to engage a rich community of amazing thinkers. We think this collaboration will result in better protection for Microsoft customers and the Internet at large. Stay tuned for the results; we can’t wait to see what the data science community comes up with!

Click here to join the competition.

 

Chase Thomas and Robert McCann
Windows Defender Research team


Talk to us

Questions, concerns, or insights on this story? Join discussions at the Microsoft community and Windows Defender Security Intelligence.

Follow us on Twitter @WDSecurity and Facebook Windows Defender Security Intelligence.

The post Microsoft AI competition explores the next evolution of predictive technologies in security appeared first on Microsoft Security Blog.

]]>