Natalia Godyla, Author at Microsoft Security Blog
http://approjects.co.za/?big=en-us/security/blog
Expert coverage of cybersecurity topics

Build a stronger cybersecurity team through diversity and training
http://approjects.co.za/?big=en-us/security/blog/2022/01/20/build-a-stronger-cybersecurity-team-through-diversity-and-training/
Thu, 20 Jan 2022 17:00:00 +0000

Heath Adams, Chief Executive Officer at TCM Security, offers practical advice on how to build a security team.

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest post of our Voice of the Community blog series, Microsoft Security Product Marketing Manager Natalia Godyla talks with Heath Adams, Chief Executive Officer (CEO) at TCM Security, about being a mentor, hiring new security talent, certifications, upskilling, the future of cybersecurity training, and lots more.

Natalia: What do you recommend to security leaders concerned with the talent shortfall?

Heath: There needs to be more openness and getting away from gatekeeping. In this industry, there’s a lot of, “I went through this path, so you need to go through this path.” Or “I did these certifications, so you need to do these certifications.” Everybody wants this perfect candidate—somebody who has 10 years of experience—even when they don’t necessarily need it. We need to be able to take somebody that’s more junior, who we can help train. Or take someone with a clean slate.

As a manager, be open to more than just what’s on the Human Resources job description. And be open to new people with different backgrounds. People are coming from all walks of life and age groups. So, if you put those biases aside and just consider the person that’s in front of you, that will help with the job shortage and help close the talent gap.

Natalia: And how has the pandemic and the shift to hybrid work changed cybersecurity skilling?

Heath: I think it’s been a positive. In our field, the ability to work remotely was always there. But the pandemic shifted things, so more companies are starting to realize that fact. I’ve worked jobs as a penetration tester where I had to relocate, even though I was working out of my home 95 percent of the time. Now, more companies are opening their eyes to talent that isn’t local. You no longer have to look in big markets; you can look at somebody on the other side of the country who’s studying cybersecurity, and they can be an asset to your team.

I was doing a lot of Twitch streaming during the shutdown, and I noticed our streams were way bigger than before. We had more people watching, more people interested. There’s a lot of people who took advantage of the shutdown to say, “Hey, this is my time to get focused. I want a new career.” There are high-paying jobs and there’s remote work. And as I mentioned, you don’t need a specific background or degree to get into this field. People can come from all walks of life. I think the pandemic helped shine a light on that.

Natalia: You’re well known as The Cyber Mentor™. How has mentoring impacted your career?

Heath: It keeps me on top of my game. I have to be able to give people direction and I don’t want to give out bad information, so, I’m making sure that I stay on top of what the industry changes are, where the jobs are heading, and how to interview properly—all of which seem to change from year to year. It helps me stay in touch with the next generation that’s coming into the security field as well.

Natalia: Do you have your own mentors that help you progress in your career?

Heath: I came up with what I call “community mentorship.” I have a Discord community, and we use that to encourage other people to give back. You want to be able to help people when they need it or get help when you need it while learning from each other. When it’s time for networking or needing a job, that goes a long way. For me, it’s more about being where there are groups of like-minded people. I’ve got a lot of friends that own penetration test companies, and we’ll get together, have lunch, talk strategies. What are you doing? What am I doing? That’s the kind of mentorship that we have with each other; just making sure we’re keeping each other in check, thinking about new things.

Natalia: What are the biggest struggles for early career mentees who are trying to grow their skills? And how can leaders address those challenges?

Heath: For a person looking to get a role, there are a few things to remember. One is to make sure you’re crawling before you walk, walking before you run. I’ll use hacking as an example. A lot of people get excited about hacking and think it sounds awesome. “You can get paid money to hack something? I want to do that!” And they try to jump right into it without building foundational skill sets, learning the parts of a computer, or learning how to do computer networking or basic troubleshooting. What I tell people is to break and fix computers. Understand basic hardware, basic computer networking, what IP addresses are, what a subnet is. Understand some coding, like Python. You don’t need a computer science background but having those foundational skills will go a long way.
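To make one of those foundations concrete, here is a tiny illustrative exercise of the kind a beginner might try: a few lines of Python using the standard-library ipaddress module to see what an IP address and a subnet actually are (the addresses are placeholders, not anything Heath specifies).

```python
import ipaddress

# A /24 network: the first 24 bits identify the network, the last 8 identify hosts.
network = ipaddress.ip_network("192.168.1.0/24")

print(network.netmask)        # 255.255.255.0
print(network.num_addresses)  # 256 addresses in this subnet

# Check whether a specific IP address belongs to the subnet.
host = ipaddress.ip_address("192.168.1.42")
print(host in network)        # True
```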

If you don’t put a foundation under a house, it’s going to collapse. So, you need to think about your career in the same way. You must make sure you’re building a foundation. People don’t realize the amount of effort that goes into getting into the field. Do your due diligence beforehand.

There’s also a lot of imposter syndrome in cybersecurity. I tell people not to concern themselves with others, especially on social media. They say comparison is the thief of joy, and I truly believe that. You have to make sure you’re running your own race. Even if you run the same mile as somebody else, and they finish it in 5 minutes, and you finish it in 10; you still finish the same mile. What matters is that you got there. As long as you’re trying to be better than you were yesterday, you’re going to make it a lot farther than you think.

Finally, cybersecurity is a field that’s constantly changing. For somebody who is complacent—who wants to get a degree, get a job, and then is set—cybersecurity is not the right fit. Cybersecurity is for somebody who’s interested in constantly learning because there are always new vulnerabilities. There was just the Log4J vulnerability that caused everyone concern. I had a meeting today with a client, and if I’m not prepared, I’m letting them down. I’m letting their security down as well. I spent the weekend studying because I had to. That’s the business we’re in.

You must stay on top of this from an employer side as well—being able to train people and keep them up to date. TCM Security has a base foundation where we want our employees to be, and then we encourage them to gain knowledge where they’re most interested. I’ve been sent to a training that I had no interest in whatsoever and wanted to pull my hair out. As a manager, I ask, “What do you want to learn?” When I send an employee to a cybersecurity training that they’re interested in, they’re going to retain that information a lot better. They can then bring that information back to us, and we can use that in real-world scenarios.

Natalia: How can security leaders recruit security professionals to their teams better? What should they look out for? For example, how important are certifications?

Heath: For an entry-level role, certifications are important. Their importance diminishes once you get into the field. But I’m an advocate for them; they help prove some knowledge—so does having a blog, attending a conference, building a home lab, speaking at a conference, speaking at a local community group—anything that says, “I’m passionate about security.”

I have seen some entry-level roles where the interviewers have you code something, or have you fix broken code, just to make sure you logically understand what’s going on. You don’t have to be a developer or be able to code, but you must be able to understand what’s in front of you. Having some coding challenges during the hiring process can be beneficial—but it should be open book. For a security professional, using search is 90 percent of our job, honestly. If you’re limiting somebody from searching online, you’re setting false expectations.

I go back and re-watch videos and re-read blogs all the time, because there are so many different commands, and there’s no way of memorizing all of them. But you need to understand the concepts. If you understand the tool they might need to run or the concept of it, then you can search that, find the tool, and run it. That’s more important.

Natalia: We’ve all read the statistics about burnout in the security industry. What do you recommend for leaders who want to better retain their talent?

Heath: You must be pro-mental health. Make sure there’s ample paid time off (PTO) and encourage employees to use it. Also, make sure that your employees can take time off beyond PTO. If they’re sick, they shouldn’t feel like they’re letting people down. That’s why we have flexible schedules; we run on a 32-hour workweek. We try to give people as much time back and have a work-life balance. We also pay for training, so people can go and focus on topics they’re interested in. We make sure that we’re investing in our employees. It’s so much more expensive to rehire and retrain. I’d rather invest in an employee and keep their mental health at a high level, and make sure I’m giving them all the tools and training they need to perform successfully.

Natalia: What trends have you seen in cybersecurity skilling? What do you think is coming next in terms of how security professionals are trained up, recruited, and retained?

Heath: There are more people interested in the field, and that’s great. We’re starting to see a lot more training providers and training options. Back when I started, a lot of it was just reading blog posts, and there were maybe one or two training providers. Now, there are 10 or 15.

Misinformation can be out there, or outdated information. If you search online for certification companies—or even look at an online post from a year ago—that information could be outdated. So again, this comes back to due diligence and making sure that you’re doing your research, not just relying on one source. If I was going to look for certifications to get into this field, I’d look at 20 or 30 different resources, get a consensus of what polls the highest, then do my own research on those organizations. It’s great job skills practice to research and make sure you understand where you need to go.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Disclaimer: The views expressed here are solely those of the author and do not represent the views of Microsoft Corporation.

Align your security and network teams to Zero Trust security demands
http://approjects.co.za/?big=en-us/security/blog/2022/01/10/align-your-security-and-network-teams-to-zero-trust-security-demands/
Mon, 10 Jan 2022 18:00:00 +0000

Get expert advice on how to bridge gaps between your SOC and NOC and enable Zero Trust security in today’s rapidly evolving threat landscape.

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Jennifer Minella, Founder and Principal Advisor on Network Security at Viszen Security, about strategies for aligning the security operations center (SOC) and network operations center (NOC) to meet the demands of Zero Trust and protect your enterprise.

Natalia: In your experience, why are there challenges bringing together networking and security teams?

Jennifer: Ultimately, it’s about trust. As someone who’s worked on complex network-based security projects, I’ve had plenty of experience sitting between those two teams. Often the security teams have an objective, which gets translated into specific technical mandates, or even a specific product. As in, we need to achieve X, Y, and Z level security; therefore, the networking team should just go make this product work. That causes friction because sometimes the networking team didn’t get a voice in that.

Sometimes it’s not even the right product or technology for what the actual goal was, but it’s too late at that point because the money is spent. Then it’s the networking team that looks bad when they don’t get it working right. It’s much better to bring people together to collaborate, instead of one team picking a solution.

Natalia: How does misalignment between the SOC and NOC impact the business?

Jennifer: When there’s an erosion of trust and greater friction, it makes everything harder. Projects take longer. Decisions take longer. That lack of collaboration can also introduce security gaps. I have several examples, but I’m going to pick healthcare here. Say the Chief Information Security Officer’s (CISO) team believes that their bio-medical devices are secured a certain way from a network perspective, but that’s not how they’re secured. Meaning, they’re secured at a lower level that would not be sufficient based on how the CISO and the compliance teams were tracking it. So, there’s this misalignment, miscommunication. Not that it’s malicious; nobody is doing it on purpose, but requirements aren’t communicated well. Sometimes there’s a lack of clarity about whose responsibility it is, and what those requirements are. Even within larger organizations, it might not be clear what the actual standards and processes are that support that policy from the perspective of governance, risk, and compliance (GRC).

Natalia: So, what are a few effective ways to align the SOC and NOC?

Jennifer: If you can find somebody that can be a third party—somebody that’s going to come in and help the teams collaborate and build trust—it’s invaluable. It can be someone who specializes in organizational health or a technical third party; somebody like me sitting in the middle who says, “I understand what the networking team is saying. I hear you. And I understand what the security requirements are. I get it.” Then you can figure out how to bridge that gap and get both teams collaborating with bi-directional communication, instead of security just mandating that this thing gets done.

It’s also about the culturethe interpersonal relationships involved. It can be a problem if one team is picked (to be in charge) instead of another. Maybe it’s the SOC team versus the NOC team, and the SOC team is put in charge; therefore, the NOC team just gives up. It might be better to go with a neutral internal person instead, like a program manager or a digital-transformation leadersomebody who owns a program or a project but isn’t tied to the specifics of security or network architecture. Building that kind of cross-functional team between departments is a good way to solve problems.

There isn’t a wrong way to do it if everybody is being heard. Emails are not a great way to accomplish communication among teams. But getting people together, outlining what the goal is, and working towards it, that’s preferable to just having discrete decision points and mandates. Here’s the big goal—what are some ideas to get from point A to point B? That’s something we must do moving into Zero Trust strategies.

Natalia: Speaking of Zero Trust, how does Zero Trust figure into an overarching strategy for a business?

Jennifer: I describe Zero Trust as a concept. It’s more of a mindset, like “defense in depth,” “layered defense,” or “concepts of least privilege.” Trying to put it into a fixed model or framework is what’s leading to a lot of the misconceptions around the Zero Trust strategy. For me, getting from point A to point B with organizations means taking baby steps—identifying gaps, use cases, and then finding the right solutions.

A lot of people assume Zero Trust is this granular one-to-one relationship of every element on the network. Meaning, every user, every endpoint, every service, and application data set is going to have a granular “allow or deny” policy. That’s not what we’re doing right now. Zero Trust is just a mindset of removing inherent trust. That could mean different things, for example, it could be remote access for employees on a virtual private network (VPN), or it could be dealing with employees with bring your own device (BYOD). It could mean giving contractors or people with elevated privileges access to certain data sets or applications, or we could apply Zero Trust principles to secure workloads from each other.

Natalia: And how does Secure Access Service Edge (SASE) differ from Zero Trust?

Jennifer: Zero Trust is not a product. SASE, on the other hand, is a suite of products and services put together to help meet Zero Trust architecture objectives. SASE is a service-based product offering that has a feature set. It varies depending on the manufacturer, meaning, some will give you these three features and some will give you another five or eight. Some are based on endpoint technology, some are based on software-defined wide area network (SD-WAN) solutions, while some are cloud routed.

Natalia: How does the Zero Trust approach fit with the network access control (NAC) strategy?

Jennifer: I jokingly refer to Zero Trust as “NAC 4.0.” I’ve worked in the NAC space for over 15 years, and it’s just a few new variables. But they’re significant variables. Working with cloud-hosted resources in cloud-routed data paths is fundamentally different than what we’ve been doing in local area network (LAN) based systems. But if you abstract that—the concepts of privilege, authentication, authorization, and data paths—it’s all the same. I lump the vendors and types of solutions into two different categories: cloud-routed versus traditional on-premises (for a campus environment). The technologies are drastically different between those two use cases. For that reason, the enforcement models are different and will vary with the products.

Natalia: How do you approach securing remote access with a Zero Trust mindset? Do you have any guidelines or best practices?

Jennifer: It’s alarming how many organizations set up VPN remote access so that users are added onto the network as if they were sitting in their office. For a long time that was accepted because, before the pandemic, there was a limited number of remote users. Now, remote access, in addition to the cloud, is more prevalent. There are many people with personal devices or some type of blended, corporate-managed device. It’s a recipe for disaster.

The threat surface has increased exponentially, so you need to be able to go back in and use a Zero Trust product in a kind of enclave model, which works a lot like a VPN. You set up access at a point (wherever the VPN is) and the users come into that. That’s a great way to start and you can tweak it from there. Your users access an agent or a platform that will stay with them through that process of tweaking and tuning. It’s impactful because users are switching from a VPN client to a kind of a Zero Trust agent. But they don’t know the difference because, on the back end, the access is going to be restricted. They’re not going to miss anything. And there’s lots of modeling engines and discovery that products do to map out who’s accessing what, and what’s anomalous. So, that’s a good starting point for organizations.

Natalia: How should businesses think about telemetry? How can security and networking teams best use it to continue to keep the network secure?

Jennifer: You need to consider the capabilities of visibility, telemetry, and discovery on endpoints. You’re not just looking at what’s on the endpoint—we’ve been doing that—but what is the endpoint talking to on the internet when it’s not behind the traditional perimeter. Things like secure web gateways, or solutions like a cloud access security broker (CASB), which further extends that from an authentication standpoint, data pathing with SD-WAN routing—all of that plays in.

Natalia: What is a common misconception about Zero Trust?

Jennifer: You don’t have to boil the ocean with this. We know from industry reports, analysts, and the National Institute of Standards and Technology (NIST) that there’s not one product that’s going to meet all the Zero Trust requirements. So, it makes sense to chunk things into discrete programs and projects that have boundaries, then find a solution that works for each. Zero Trust is not about rip and replace.

The first step is overcoming that mental hurdle of feeling like you must pick one product that will do everything. If you can aggregate that a bit and find a product that works for two or three, that’s awesome, but it’s not a requirement. A lot of organizations are trying to research everything ad nauseum before they commit to anything. But this is a volatile industry, and it’s likely that with any product’s features, the implementation is going to change drastically over the next 18 months. So, if you’re spending nine months researching something, you’re not going to get the full benefit in longevity. Just start with something small that’s palatable from a resource and cost standpoint.

Natalia: What types of products work best in helping companies take a Zero Trust approach?

Jennifer: A lot of requirements stem from the organization’s technological culture. Meaning, is it on-premises or a cloud environment? I have a friend that was a CISO at a large hospital system, which required having everything on-premises. He’s now a CISO at an organization that has zero on-premises infrastructure; they’re completely in the cloud. It’s a night-and-day change for security. So, you’ve got that, combined with trying to integrate with what’s in the environment currently. Because typically these systems are not greenfield, they’re brownfield—we’ve got users and a little bit of infrastructure and applications, and it’s a matter of upfitting those things. So, it just depends on the organization. One may have a set of requirements and applications that are newer and based on microservices. Another organization might have more on-premises legacy infrastructure architectures, and those aren’t supported in a lot of cloud-native and cloud-routed platforms.

Natalia: So, what do you see as the future for the SOC and NOC?

Jennifer: I think the message moving forward is—we must come together. And it’s not just networking and security; there are application teams to consider as well. It’s the same with IoT. These are transformative technologies. Whether it’s the combination of operational technology (OT) and IT, or the prevalence of IoT in the environment, or Zero Trust initiatives, all of these demand cross-functional teams for trust building and collaboration. That’s the big message.

Learn more

Get key resources from Microsoft Zero Trust strategy decision makers and deployment teams. To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

What you need to know about how cryptography impacts your security strategy
http://approjects.co.za/?big=en-us/security/blog/2022/01/04/what-you-need-to-know-about-how-cryptography-impacts-your-security-strategy/
Tue, 04 Jan 2022 17:00:00 +0000

Taurus SA Co-founder and Chief Security Officer Jean-Philippe “JP” Aumasson shares how the discipline of cryptography will impact cybersecurity strategy.

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest post of our Voice of the Community blog series, Microsoft Security Product Marketing Manager Natalia Godyla talks with Taurus SA Co-founder and Chief Security Officer Jean-Philippe “JP” Aumasson, author of “Serious Cryptography.” In this blog post, JP shares insights on learning and applying cryptography knowledge to strengthen your cybersecurity strategy.

Natalia: What drew you to the discipline of cryptography?

JP: People often associate cryptography with mathematics. In my case, I was not good at math when I was a student, but I was fascinated by the applications of cryptography and everything that has to do with secrecy. Cryptography is sometimes called the science of secrets. I was also interested in hacking techniques. At the beginning of the internet, I liked reading online documentation magazines and playing with hacking tools, and cryptography was part of this world.

Natalia: In an organization, who should be knowledgeable about the fundamentals of cryptography?

JP: If you had asked me 10 to 15 years ago, I might have said all you need is an in-house cryptographer who specializes in crypto, and other people can ask them questions. Today, however, cryptography has become substantially more integrated into the components that we work with and that engineers must develop.

The good news is that crypto is far more approachable than it used to be, and is better documented. The software libraries and APIs are much easier to work with for non-specialists. So, I believe that all the engineers who work with software—from a development perspective, a development operations (DevOps) perspective, or even quality testing—need to know some basics of what crypto can and cannot do and the main crypto concepts and tools.

Natalia: Who is responsible for educating engineering on cryptography concepts?

JP: It typically falls on the security team—for example, through security awareness training. Before starting development, you create the functional requirements driven by business needs. You also define the security goals and security requirements, such as requiring that personal data be encrypted at rest and in transit with a given level of security. It’s truly a part of security engineering and security architecture. I advocate for teaching people fundamentals, such as confidentiality, integrity, authentication, and authenticated encryption.

As a second step, you can think of how to achieve security goals thanks to cryptography. Concretely, you have to protect some data, and you might think, “What does it mean to encrypt the data?” It means choosing a cipher with the right parameters, like the right key size. You may be restricted by the capability of the underlying hardware and software libraries, and in some contexts, you may have to use Federal Information Processing Standard (FIPS) certified algorithms.

Also, encryption may not be enough. Most of the time, you also need to protect the integrity of the data, which means using an authentication mechanism. The modern way to realize this is by using an algorithm called an authenticated cipher, which protects confidentiality and authenticity at the same time, whereas the traditional way to achieve this is to combine a cipher and a message authentication code (MAC).

Natalia: What are common mistakes practitioners tend to make?

JP: People often get password protection wrong. First, you need to hash passwords, not encrypt them—except in some niche cases. Second, to hash passwords you should not use a general-purpose hash function such as SHA-256 or BLAKE2. Instead, you should use a password hashing function, which is a specific kind of hashing algorithm designed to be slow and sometimes use a lot of memory, to make password cracking harder.
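As a rough sketch of that advice in Python, the standard library’s hashlib.scrypt is one example of a deliberately slow, memory-hard password hashing function; the cost parameters and helper names below are illustrative, not a recommendation from JP. Purpose-built alternatives such as Argon2 or bcrypt, available through third-party packages, serve the same role.

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    """Hash a password with a per-user random salt using scrypt (slow and memory-hard)."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt, digest

def verify_password(password: str, salt: bytes, expected: bytes) -> bool:
    """Recompute the hash with the stored salt and compare in constant time."""
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(digest, expected)

# By contrast, hashlib.sha256(password.encode()) is a fast general-purpose hash,
# which is exactly what makes it unsuitable for storing passwords.
```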

A second thing people tend to get wrong is authenticating data using a MAC algorithm. A common MAC construction is the hash-based message authentication code (HMAC) standard. However, people tend to believe that HMAC means the same thing as MAC. It’s only one possible way to create a MAC, among several others. Anyway, as previously discussed, today you often won’t need a MAC because you’ll be using an authenticated cipher, such as AES-GCM.
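For example, with the widely used third-party cryptography package in Python, an authenticated cipher such as AES-GCM provides confidentiality and integrity in a single call. This is a minimal sketch under that assumption, not a drop-in design:

```python
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

key = AESGCM.generate_key(bit_length=256)   # key management remains the hard part
aesgcm = AESGCM(key)

nonce = os.urandom(12)                      # 96-bit nonce; must never repeat for the same key
plaintext = b"personal data to protect"
associated_data = b"record-id:42"           # authenticated but not encrypted

ciphertext = aesgcm.encrypt(nonce, plaintext, associated_data)

# Decryption checks the authentication tag and raises InvalidTag if anything was tampered with.
recovered = aesgcm.decrypt(nonce, ciphertext, associated_data)
assert recovered == plaintext
```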

Natalia: How does knowledge of cryptography impact security strategy?

JP: Knowledge of cryptography can help you protect the information more cost-effectively. People can be tempted to put encryption layers everywhere but throwing crypto at a problem does not necessarily solve it. Even worse, once you choose to encrypt something, you have a second problem—key management, which is always the hardest part of any cryptographic architecture. So, knowing when and how to use cryptography will help you achieve sound risk management and minimize the complexity of your systems. In the long run, it pays off to do the right thing.

For example, if you generate random data or bytes, you must use a random generator. Auditors and clients might be impressed if you tell them that you use a “true” hardware generator or even a quantum generator. These might sound impressive, but from a risk management perspective, you’re often better off using an established open-source generator, such as that of the OpenSSL toolkit.
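In Python, for instance, that usually means reaching for an established generator the platform already exposes, such as the secrets module backed by the operating system’s CSPRNG; this is shown here as one option alongside the OpenSSL generator JP mentions:

```python
import secrets

api_token = secrets.token_urlsafe(32)   # random URL-safe token from the OS-backed CSPRNG
key_material = secrets.token_bytes(32)  # 256 bits of randomness for keys, salts, or nonces

# The `random` module is fine for simulations, but it is predictable and
# must not be used for anything security-sensitive.
```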

Natalia: What are the biggest trends in cryptography?

JP: One trend is post-quantum cryptography, which is about designing cryptographic algorithms that would not be compromised by a quantum computer. We don’t have quantum computers yet, and the big question is when, if ever, they will arrive. Post-quantum cryptography, consequently, can be seen as insurance.

Two other major trends are zero-knowledge proofs and multi-party computation. These are advanced techniques that have a lot of potential to scale decentralized applications. For example, zero-knowledge proofs can allow you to verify that the output of a program is correct without re-computing the program by verifying a short cryptographic proof, which takes less memory and computation. Multi-party computation, on the other hand, allows a set of parties to compute the output of a function without knowing the input values. It can be loosely described as executing programs on encrypted data. Multi-party computation is proposed as a key technology in managed services and cloud applications to protect sensitive data and avoid single points of failure.

One big driver of innovation is the blockchain space, where zero-knowledge proofs and multi-party computation are being deployed to solve very real problems. For example, the Ethereum blockchain uses zero-knowledge proofs to improve the scalability of the network, while multi-party computation can be used to distribute the control of cryptocurrency wallets. I believe we will see a lot of evolution in zero-knowledge proofs and multi-party computation in the next 10 to 20 years, be it in the core technology or the type of application.

It would be difficult to train all engineers in these complex cryptographic concepts. So, we must design systems that are easy to use but can securely do complex and sophisticated operations. This might be an even bigger challenge than developing the underlying cryptographic algorithms.

Natalia: What’s your advice when evaluating new cryptographic solutions?

JP: As in any decision-making process, you need reliable information. Sources can be online magazines, blogs, or scientific journals. I recommend involving cryptography specialists to:

  1. Gain a clear understanding of the problem and the solution needed.
  2. Perform an in-depth evaluation of the third-party solutions offered.

For example, if a vendor tells you that they use a secret algorithm, it’s usually a major red flag. What you want to hear is something like, “We use the advanced encryption standard with a key of 256 bits and an implementation protected against side-channel attacks.” Indeed, your evaluation should not be about the algorithms, but how they are implemented. You can use the safest algorithm on paper, but if your implementation is not secure, then you have a problem.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Your guide to mobile digital forensics
http://approjects.co.za/?big=en-us/security/blog/2021/12/14/your-guide-to-mobile-digital-forensics/
Tue, 14 Dec 2021 17:00:08 +0000

Cellebrite Senior Director of Digital Intelligence Heather Mahalik offers an inside look at mobile forensics and shares how to think about the function in your organization.

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Cellebrite Senior Director of Digital Intelligence Heather Mahalik. In this blog post, Heather talks about digital forensics, from technical guidance to hiring best practices, with a special focus on mobile forensics. 

Natalia: What is digital forensics and why is it important?

Heather: Cybersecurity is more about prevention, protection, and defense. Digital forensics is the response and is typically triggered by an incident. There are some people who say, “Oh no, we do things proactively.” For example, someone might be traveling to a foreign country, and they want to know if something is going to land on their mobile device. You may proactively scan or carry out forensics on that device before and then see what changed after. That would be a rare situation, but usually, it’s when an incident occurs and you need someone to come in and clean it up.

Natalia: How does mobile forensics differ from other forms of forensics?

Heather: Mobile forensics is fast-moving. Mobile device companies update devices and operating systems all the time. The applications we rely upon are updating. When I did digital forensics as a whole—computers, PC, and macOS—the updates weren’t the same as on mobile. There are also levels and encryption that keep us out, and they are different on every mobile device.

When I learned forensics in 2002, it was: “Here’s a hard drive. This is how the data is laid out. This is what you can expect every single time.” You can never expect the same thing every single time with mobile forensics. In every single case you work on, there will be a variance that requires you to learn something new. I love it because I can’t get bored, but it’s also frustrating. It’s so hard to say, “OK, I’m now a master.” You’re never a master of mobile forensics.

Natalia: What does the workflow for mobile forensics look like?

Heather: I always use the terminology cradle-to-grave forensics—you get it when it first starts, and you put it to rest with your report. If you are doing beginning to end, you’re starting with the mobile device in front of you. One thing to consider is remote access, which can be good and bad. Some of the third-party applications require that a device connects to a network to extract information, but that goes against everything you’ll read about forensics. Isolate from a network. Make sure it’s protected. No connections to the device.

The next phase is to acquire the data from the device, and there are many different tools and methods to do that. You need as much access to that file system as you can get because we need all the logs in the background to do a thorough analysis.

After that, I recommend triage. Consider how you’re going to solve the who, what, where, when, why, and how. Are there any clues that you can get immediately from that device? Then dive in deeper with your forensics and analytical tools.

Natalia: What’s the best way to approach an investigation?

Heather: There was a study where they had people work on the same case in different ways. One person was given the whole case scenario—“This is what we think happened”—and another person was just asked specific questions—“Please find these things.” In the middle is the best—“We are trying to solve for X. These are the questions that I think will help us get to X. Can you answer them?”

If other people start shooting holes in your report, you need additional evidence, and that’s usually what will force validation. If someone sees that report and they’re not fighting it, it’s because they know that it’s the truth.

Natalia: What common mistakes do forensics investigators make?

Heather: The biggest mistake I see is trusting what a forensics tool reports without validating the evidence. Think about your phone. Did the artifact sync from a computer that your roommate is using and now it’s on your phone? Is it a suggestion, like when you’re typing into a search browser and it makes recommendations? Is it a shared document that you didn’t edit? There are all these considerations of how the evidence got there. You should not go from extracting a phone to reporting. There is a big piece in between. Verify and validate with more than one method and tool before you put it in your report.

Natalia: Are forensics investigation teams typically in-house or consultants?

Heather: There could be both. It depends on how frequently you need someone. I’ve been a consultant to big companies that offer incident response services. They don’t typically see mobile incidents, so they wanted me there just in case. If you do hire one person, don’t expect them to be a master of mobile, macOS, PC, and network security.

If you’re doing incident response investigations, you want someone with incident response, memory forensics, and network forensics experience. In the environments I’ve been in, we need dead disk forensics experience, so we need people who are masters of PC, macOS, and mobile because it’s usually data at rest that’s collected. It’s more terrorism and crime versus ransomware and hacking. You must weigh what you’re investigating, and if it’s all those things—terrorism/crime and ransomware/hacking—you need a forensics team because it’s rare that people are on both sides of that spectrum and really good at both.

Natalia: What advice would you give a security leader looking to hire and manage a forensics team?

Heather: When hiring people, question what they know. I’ve worked at many places where I was on the hiring team, and someone would say, “If they have X certification, they can skip to the next level.” Just because I don’t have a certification doesn’t mean I don’t know it. You also don’t know how someone scored. Make sure it’s a good cultural fit as well because with what we do in forensics, you need to rely on your teammates to get you through some of the things you come across.

When it comes to skill-building, I recommend encouraging your team to play in any free Capture the Flag provided by vendors, like SANS Institute. An employer could even put people together and say, “I want you three to work together and see how you do.” Letting your employees take training that inspires them and makes them want to keep learning is important.

Natalia: I appreciate you mentioning the difficulties of the role. It’s important to openly discuss the mental health challenges of being an investigator. How do you handle what you find in your investigations? And how do tools, like DFIR review, help?

Heather: I lean on my coworkers a lot. Especially if it’s a big case—like a missing person, someone going to trial, or someone losing their job—it’s a lot of pressure on you. You need people who understand that pressure and help you leave it behind because if it’s constantly going through your mind, it’s not healthy.

Digital Forensics and Incident Response (DFIR) review came out about two years ago. I have put many of my whitepapers and research through the deeper review process because it’s a group of other experts that validate your work. That makes a lot of organizations feel comfortable. “I know this device was wiped on X date and someone tried to cover their tracks because Heather wrote a paper, and it was peer-reviewed, and it got the gold seal.” That relieves a lot of pressure.

Learn more

Explore Microsoft’s technical guidance to help build and implement cybersecurity strategy and architecture.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

How to assess and improve the security culture of your business
http://approjects.co.za/?big=en-us/security/blog/2021/11/11/how-to-assess-and-improve-the-security-culture-of-your-business/
Thu, 11 Nov 2021 18:00:49 +0000

Cygenta Co-Founder and Co-Chief Executive Officer Dr. Jessica Barker talks about the importance of security culture and how to build a solid one in your organization.

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Security Product Marketing Manager Natalia Godyla talks with Cygenta Co-founder and Co-Chief Executive Officer Dr. Jessica Barker, author of “Confident Cyber Security: How to Get Started in Cyber Security and Futureproof Your Career” and co-author of “Cybersecurity ABCs: Delivering awareness, behaviors and culture change.” In this blog post, Jessica talks about how to build a security culture.

Natalia: How are most organizations doing? What is the state of cybersecurity culture?

Jessica: It varies—a lot of it comes down to resourcing and the emphasis placed on security from the leadership level. It can also come down to the experiences of the security team, security leadership, and the organization in terms of security incidents or near-misses. We’ve seen a lot of improvement in recent years, and that’s largely because there’s more awareness among leaders that security culture is important. Just 5 years ago, but particularly 10 years ago, there was very little discussion around culture and security culture.

Every year, we ask ClubCISO, a private group for senior information security professionals and security leaders, about security. For the last three years, they said security culture is their number one hot topic for the year ahead. They even said it in March 2021, tying with cloud. When I think about the year that cloud has had and the forced digital transformation many organizations have been through, it speaks volumes that security culture is as important of a priority as securely moving to the cloud.

Natalia: What does a cybersecurity culture assessment entail?

Jessica: In a cybersecurity culture assessment, we listen to the organization and the people who work there and understand security assumptions. When I speak to people about security culture, there’s often this idea that it is about how people behave, and that if we collect metrics around phishing, for example, it will tell us about the security culture. However, that will tell us something superficial. It’ll tell us what people are doing, not why they’re doing it.

Understanding the “why” is absolutely crucial because that’s your point of influence to change behavior. The “why” helps in understanding underlying assumptions and determining what you can do if there are gaps between what the security team wants and what people are doing.

The first stage is to understand the organizational culture, mission, and values and review the cultural symbols in the organization, including the branding, training, and messaging. Then, we run surveys, focus groups, and one-on-one interviews to encourage conversation, facilitate discussion, and understand what’s happening on a day-to-day basis, and most importantly, why.

Natalia: What are the indicators that a company needs a cybersecurity culture assessment?

Jessica: One prompt for most of our clients is that they feel like they need to do more to manage human risk, but they don’t know what. There may be incidents or near-misses. There may be indications around phishing or how people are managing passwords. There may be behavioral indicators—what they want from the people in the organization doesn’t match reality. Another key prompt is not understanding why their current culture isn’t developing in the way that they would want. Often, the organizations will have tried to deal with this in one way or another through awareness-raising, and there’s frustration because they’re telling people what to do, and they’re still not doing it. It takes a level of maturity, and it often takes organizations that aspire to be people-centric, to help their workforce be more security-conscious.

We measure security culture by gathering a lot of qualitative data to understand why people are doing what they’re doing. It goes back to the classic “start with why,” and then crunching numbers from surveys. We use grounded theory to qualify the data we get back. We immerse ourselves in that data and identify patterns. We also use anonymous quotes, comments, and keywords from workshops, focus groups, and one-on-one interviews to bring that story to life.

Natalia: What are typical challenges to establishing a positive security culture?

Jessica: I’m working with a financial services client that has a very positive organizational culture and lives by their values. But there have been challenges around security culture in this organization for many reasons, including fast digital transformation and growth. It’s taken them until this year to understand what a security culture means for their organization.

Because the people who work there felt loyalty to the organization, they wanted to behave in a secure way. They understood the importance of it, but there were blockers, including a lack of communication on why certain security controls were in place. It’s an entrepreneurial organization that moves quickly, so there were underlying cultural influences encouraging people to behave in less secure ways while prioritizing productivity. We’ve been undertaking a program to help the security team better communicate the “why,” and the organization has been receptive to it.

It’s also very hard to change behavior if the security leadership or organizational leadership team is not on board. Another consideration is the perception of a just culture. If somebody clicks a malicious link or makes a mistake, do they feel that they can put their hand up and report it without being unduly blamed? If people have a perception that the culture is about retribution and “pointing the finger,” that’s damaging to security culture.

Natalia: What’s the biggest mistake organizations make when trying to build and foster a security culture?

Jessica: To try to build a security culture that is not aligned with the business culture. One organization I worked with a few years ago was a very positive and people-centric healthcare organization. They were always seeking to say, “Yes,” to people in their wider organizational culture, but the security team was pushing a security culture that said, “No,” and was perceived as the “Department of No,” like many security teams. That’s a really common problem because the organizational culture will always win out, and if you try and bolt on a security culture that runs against the wider organization, it won’t work.

Often, the organizational culture of a company is not prepared to build a positive cybersecurity culture, and change requires patience. It’s a slow journey. That kind of client isn’t ready for a security culture assessment, so the work focuses on influencing the senior leadership to show them the importance of security culture. When organizations want a security culture assessment, that’s when they’re ready for it.

Natalia: How does the psychological well-being of the security team impact the security culture?

Jessica: At one organization, there was a lack of communication around security. The security team was so stressed, burnt-out, busy, and overworked that they didn’t have time to engage with their colleagues in the rest of the business. It led to the impression that the security team was not friendly or approachable, and it created a barrier to a positive security culture. Taking care of the well-being of the security function is fundamental.

To immediately improve the well-being of their team, managers can talk about the issues. If you’re comfortable doing so, this can include talking about your own mental well-being or acknowledging burnout stress and impostor syndrome. These are real issues in the industry, and it can be a relief for people to hear that they’re not alone and to have this safe space. It makes everyone feel more comfortable saying, “Hey, I need a day off for my mental health.” Mental health days are crucial in organizations, but leadership must show that they’re a priority.

Natalia: Besides an assessment, how can security teams improve their understanding of human risk?

Jessica: Behavioral economics, neuroscience, and psychology are all disciplines that can teach us about the human side of security and security culture. I’d recommend books like “Nudge” and “Thinking, Fast and Slow,” and the work of Tali Sharot, a neuroscientist whose work on the optimism bias is very relevant to security. There’s also a lot of great work being done in academia on security culture—papers and research that are advancing the field. It was interesting as well to see this year that Verizon did a shout-out to security culture for the first time in their data breach investigation report. Security culture is going more mainstream and is now higher up on the agenda in the security profession.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

Get career advice from 7 inspiring leaders in cybersecurity
http://approjects.co.za/?big=en-us/security/blog/2021/10/18/get-career-advice-from-7-inspiring-leaders-in-cybersecurity/
Mon, 18 Oct 2021 16:00:46 +0000

In this blog, top experts in the industry will share insights on their careers in cybersecurity.

Are you currently studying information security? Or are you considering transitioning to a career in cybersecurity? According to the US Bureau of Labor Statistics, cybersecurity jobs will grow 31 percent from 2019 to 2029—more than six times the national average job growth.1 Cybersecurity skills are clearly in high demand. But more than that, cybersecurity is a rewarding career attracting many bright, passionate practitioners and leaders who are invested in making the world a better, more secure place.

As part of Cybersecurity Awareness Month and this week’s theme on cybersecurity careers, we are focusing this blog on top experts in the industry who will share insights on their careers in cybersecurity. In this post, we’ve asked seven cybersecurity leaders six questions about their career experiences to help you navigate and grow your career in the industry and foster new talent. Be sure to also check out our career guidance and educational resources to help you navigate your cybersecurity career.

What’s one thing you know now that you wish you knew when you first started your career in cybersecurity?

“The diversity of backgrounds that drive the best teams of cybersecurity professionals; not everyone needs to have a computer science background as there are so many prisms of skills needed for cybersecurity.”—Valecia Maclin, Partner General Manager, Sovereign Cloud, Microsoft

“When I started in cybersecurity, the internet was not what it is now with such ubiquitous access across so many diverse layers from culture, geography, access to connectivity, language, geopolitics, and more. What I know is that I would have better embraced the different layers of diversity by pausing and taking time to learn about different cultures to better understand motivations and challenges. Also, I would have spent time learning languages as there are local thoughts that cannot be translated. But by living in a community, we learn to understand differences.”—Peter Anaman, Principal Investigator, Digital Crimes Unit, Microsoft

“It’s not the technical issues that are the biggest hurdles; rather motivating, inspiring, and rallying people is and always will be our greatest challenge.”—Edna Conway, Vice President, Chief Security and Risk Officer, Microsoft

“I began my career as a national security policy analyst shortly after September 11, 2001, amid the rise of pressing cybersecurity challenges and related privacy and civil liberties concerns. Much of my learning about cybersecurity was grounded in national security law. When I reflect back on that time, one of the overarching challenges I remember observing as a green college graduate surrounded by amazing security leaders was around how the United States can advance a cybersecurity strategy that responds to emerging threats while also upholding the Constitution. I wish I had a clearer sense then that cybersecurity law was a pathway! I still find these crucial tensions to be the most compelling.”—Lauren Bean Buitta, Founder and Chief Executive Officer, Girl Security

What has been the most enjoyable aspect of your career in cybersecurity?

“I love teaching things to people. I love discovering something new, and then sharing that knowledge with as many people as possible. I was a musical performer for most of my life, and somehow it never occurred to me to do public speaking. It had never occurred to me that even though I was a songwriter for 17 years, that I could also write blog articles and share knowledge that way. And that I could be good at it. It’s kind of funny because it seems so obvious in retrospect, but I love writing and I love speaking. Sharing knowledge is my favorite part of cybersecurity.”—Tanya Janca, Founder and Chief Executive Officer, We Hack Purple Academy

“Breaking stuff. I think that’s the same answer for many people in this industry because there’s a natural curiosity to see if they can make things behave in ways that they weren’t designed to do.”—Troy Hunt, Founder of Have I Been Pwned, information security author, and instructor at Pluralsight

“It has been a privilege to contribute to the safety of internet users and growth of business and socioeconomic opportunities across many jurisdictions. I have also enjoyed learning new technologies and working with many amazing colleagues and friends as we collaborate to solve new online security challenges.”—Peter Anaman

The most surprising?

“That a former homicide prosecutor with an undergraduate degree in Medieval and Renaissance Literature could bring a unique and valuable perspective to the cyber arena.”—Edna Conway

“The most surprising thing to me about cybersecurity is that time and again, our adversaries underestimate the power of the human spirit.”—Ann Johnson, Corporate Vice President, Security, Compliance, and Identity, Microsoft

“The fact that we keep finding the same things. There’s so much work to do, you know. Often, I get asked, ‘Are we winning the battle against the hackers?’ And I say, ‘It’s a little like looking at your fingernails and going, ‘I’m losing the battle.’’ No, you’re just maintaining equilibrium.”—Troy Hunt

Cybersecurity is a vast industry. How do you decide your focus or specialization within cybersecurity?

“Well, I don’t think it has to be a premeditated decision. I think what’s great about the tech industry, in general, is that there are so many places to get started with nothing. It’s very much a meritocracy. People are interested in what you have done. What are your achievements? Start somewhere, and the path sort of reveals itself.”—Troy Hunt

“Ahh—that is the beauty of cybersecurity—I don’t have to. Rather, I can consistently drive a comprehensive approach that is customizable to the needs of the entire ecosystem. That lured me to the complexity of supply chain security and third-party risk.”—Edna Conway

“My career has always focused on delivering solutions that protect national security and critical infrastructure, so I tend to gravitate toward this area. I enjoy knowing my work is making a difference on a global scale.”—Valecia Maclin

“I think finding one’s focus is a balance of trial and error and trusting your instincts. I always recommend trying new learning and training, both on the technical skill sets (there are plenty of amazing organizations that offer those programs) and on the types of skills we teach at Girl Security, which include ethical decision making, critical thinking, risk management, strategy, and innovation. In addition, identify people whose careers appeal to you. Browse LinkedIn or your favorite companies and organizations, consider role models or inspiring individuals, and learn about their paths. You might be surprised at the twists and turns. Lastly, and most importantly, value your unique strengths. Everyone has some unique contribution to make in life and to cybersecurity. Don’t doubt your strengths and contributions, which may very well be potential pathways. My mom always reinforced the importance of active listening, which has benefitted me tremendously in my service and leadership roles.”—Lauren Bean Buitta

If you could recommend one skill to learn, what would it be?

“One skill is hard, and I would suggest two. As we live in the Fourth Industrial Revolution, we need to learn the skills associated with machine-enabled systems. I would encourage learning about databases, such as SQL or Kusto, and a programming language, such as C# or Python.”—Peter Anaman

“Be comfortable with change and the unknown.”—Valecia Maclin

“If there was one cybersecurity skill I’d recommend that people learn, it’s how to be empathetic to the end user. When we are empathetic, we innovate for people, not the end product.”—Ann Johnson

“The one skill I would suggest people learn is how to behave in a professional setting. Understand the basics, such as responding to emails in a timely manner, being precise and concise when you write or speak, and showing up on time and prepared.”—Tanya Janca

What’s the best career advice you’ve ever been given?

“Learn how to influence, set, and navigate policy, regulations, and legislation.”—Edna Conway

“If I could talk to my younger self, I’d say get out of your comfort zone. Whether it’s our professional or personal life, we learn more when we challenge ourselves to push our boundaries.”—Ann Johnson

“Have integrity. Life and professions can present challenging opportunities that test the bounds of our values and ethics. In addition, in competitive spaces, people might question one’s ‘credentials,’ their education, whether they know a particular person, or even how they run their business or do their job. If you maintain your integrity, you can weather the challenges and rise above the critics.”—Lauren Bean Buitta

“I was told to switch from software development to security. I was told I would make more money, the job security would be great, and I would always get to learn lots of new things. They were right!”—Tanya Janca

Learn more

Throughout the month of October, we will be sharing blog posts with in-depth information and helpful tips for each themed week of Cybersecurity Awareness Month 2021. Read our current posts:

To access training, certifications, and other resources that you can share with your organization, visit the Microsoft Security Cybersecurity Awareness education page.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

 


1Information Security Analysts, US Bureau of Labor Statistics. 8 September 2021.

The post Get career advice from 7 inspiring leaders in cybersecurity appeared first on Microsoft Security Blog.

]]>
Practical tips on how to use application security testing and testing standards http://approjects.co.za/?big=en-us/security/blog/2021/10/05/practical-tips-on-how-to-use-application-security-testing-and-testing-standards/ Tue, 05 Oct 2021 16:00:40 +0000 Banco Santander Global Head of Security Research Daniel Cuthbert talks with Microsoft about how to use application security testing and testing standards to increase application security.

The post Practical tips on how to use application security testing and testing standards appeared first on Microsoft Security Blog.

]]>
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Daniel Cuthbert, Global Head of Security Research at Banco Santander. Daniel discusses how to use application security testing and testing standards to improve security.

Natalia: What is an application security test and what does it entail?

Daniel: Let’s say I have a traditional legacy banking application. Users can sign in using their web browser to gain access to financial details or funds, move money around, and receive money. Normally, when you want an application assessment done for that type of application, you’re looking at the authentication and authorization processes, how the application architecture works, how it handles data, and how the user interacts with it. As applications have grown from a single application that interacts with a back-end database to microservices, all the ways that data is moved around and stored—and the processes—become more important.

Generally, an application test makes sure that at no point can somebody gain unauthorized access to data or somebody else’s money. And we want to make sure that an authorized user can’t impersonate another user, gain access to somebody else’s funds, or cause a system in the architecture to do something that the developers or engineers never expected to happen.

Natalia: What is the Open Web Application Security Project (OWASP) Application Security Verification Standard (ASVS), and how should organizations be using the standard?

Daniel: ASVS stands for Application Security Verification Standard1. The idea was to normalize how people conduct and receive application security tests. Prior to it, there was no methodology. There was a lot of ambiguity in the industry. You’d say, “I need an app test done,” and you’d hope that the company you chose had a methodology in place and the people doing the assessment were capable of following a methodology.

In reality, that wasn’t the case. It varied from one penetration test house to another. Those receiving consultancy for penetration tests and application tests didn’t have a structured idea of what should be tested or what constituted a secure, robust application. That’s where the ASVS comes in. Now you can say, “I need an application test done. I want a Level 2 assessment of this application.” The person receiving the test knows exactly what they’re expecting, and the person doing the test knows exactly what the client is expecting. It gets everybody on the same page, and that’s what we were missing before.

Natalia: How should companies prioritize and navigate the ASVS levels and controls?

Daniel: When they first look at the ASVS, many people get intimidated and overwhelmed. First, stay calm. The three levels are there as a guideline. Level 1 should be the absolute bare minimum. That’s the price of entry if you’re putting an application on the internet, and we designed Level 1 so that it can be automated. As far as tools to automate Level 1, OWASP Zed Attack Proxy (ZAP) is getting there. In 2021, an application should be at Level 2, especially if we take into consideration privacy. Level 3 is unique. Most people never need Level 3, which was designed for applications that are critical and have a strong need for security—where if it goes down, there’s a loss of life or massive impact. Level 3 is expensive and time-consuming, but you expect that if it’s, say, a power plant. You don’t want it to be quickly thrown together in a couple of hours.

With all the levels, you don’t have to go through every single control; this is where threat modeling comes in. If your application makes use of a back-end database, and you have microservices, you take the parts that you need from Level 2 and build your testing program. Many people think that you have to test every single control, but you don’t. You should customize it as much as you need.
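
To make the automation point concrete, here is a minimal sketch (not part of the ASVS itself) of driving OWASP ZAP from Python to run the kind of crawl-and-scan pass Daniel associates with Level 1. It assumes the zapv2 client library and a ZAP instance already listening on localhost:8080; the target URL and API key are placeholders.

```python
# Minimal sketch: automate a baseline crawl and active scan with OWASP ZAP.
# Assumes the zapv2 client (python-owasp-zap-v2.4) and ZAP running locally.
import time
from zapv2 import ZAPv2

TARGET = "https://staging.example.com"  # hypothetical application under test
zap = ZAPv2(apikey="changeme",
            proxies={"http": "http://127.0.0.1:8080",
                     "https": "http://127.0.0.1:8080"})

zap.urlopen(TARGET)                    # seed the site tree
spider_id = zap.spider.scan(TARGET)    # crawl the application
while int(zap.spider.status(spider_id)) < 100:
    time.sleep(2)

ascan_id = zap.ascan.scan(TARGET)      # run the active scan
while int(zap.ascan.status(ascan_id)) < 100:
    time.sleep(5)

# Print anything ZAP flagged so a pipeline could fail on high-risk findings.
for alert in zap.core.alerts(baseurl=TARGET):
    print(alert["risk"], alert["alert"], alert["url"])
```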

Natalia: What’s the right cadence for conducting application security tests?

Daniel: The way we build applications has changed drastically. Ten years ago, a lot of people were doing the waterfall approach using functional specifications like, “I want to build a widget that sells shoes.” Great. Somebody gives them money and time. Developers go develop, and toward the end, they start going through functional user acceptance testing (UAT) and get somebody to do a penetration test. Worst mistake ever. In my experience, we’d go live on Monday, and the penetration test would happen the week before.

What we’ve seen with the adoption of agile is the shifting left of the software development lifecycle (SDLC). We’re starting to see people think about security not only as an add-on at the end but as part of the function. We expect the app to be secure, usable, and robust. We’re adopting security standards. We’re adopting the guardrails for our continuous integration and continuous delivery pipeline. That means developers write a function, check the code into Git or whatever repository they use, and the code is checked to make sure it’s robust, formatted correctly, and secure. In the industry, we’re moving away from relying on that final application test to constantly looking during the entire lifecycle for bugs, misconfigurations, or incorrectly used encryption or encoding.

Natalia: What common mistakes do companies make that impact the results of an application security assessment?

Daniel: The first one is companies not embracing the lovely world of threat modeling. A threat model can save you time and give you direction. When people bypass the fundamental stage of threat modeling, they’re burning cycles. If you adopt the threat model and say, “This is every single way some bad person is going to break our favorite widget tool,” then you can build upon that.

The second mistake is not understanding what all the components do. We no longer build applications that are a single web server, such as Internet Information Services (IIS) or NGINX, backed by a single database. It’s rare to see that today. Today’s applications are complex. Because multiple teams are responsible for individual parts of that process, they don’t all work together to understand simple things like the data flow. Where’s the data held? How does this application process that data? Often, everyone assumes the other team is doing it. This is a problem. Either the scrum master or product owner should own full visibility of the application, especially if it’s a large project. But it varies depending on the organization. We’re not in a mature enough stage yet for it to be a defined role.

Also, the gap between security and development is still too wide. Security didn’t make many friends. We were constantly belittling developers. I was part of that, and we were wrong. At the moment, we’re trying to bridge the two teams. We want developers to see that security is trying to help them.

We should be building a way for developers to be as creative and cool as we expect them to be while setting guardrails to stop common mistakes from appearing in the code pipeline. It’s very hard to write secure code, but we can embrace the fourth generation of continuous integration and continuous delivery (CI/CD). Check your code in; then do a series of tests. Make sure that at that point and at that commit, the code is as robust, secure, and proper as it should be.
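
As an illustration of those guardrails, the following is a hypothetical pre-merge script of the kind that could run on every check-in. The specific tools invoked (bandit for static analysis, pip-audit for known-vulnerable dependencies) are common open-source examples, not tools Daniel prescribes; any CI system that fails a stage on a non-zero exit code could run something like it.

```python
# Hypothetical CI guardrail: run security checks on each commit and fail the
# stage if any check fails. Tool names are illustrative, not prescriptive.
import subprocess
import sys

CHECKS = [
    ["bandit", "-r", ".", "-ll"],  # static analysis for common Python security bugs
    ["pip-audit"],                 # audit installed dependencies for known CVEs
]

def main() -> int:
    failed = False
    for cmd in CHECKS:
        print(f"Running guardrail: {' '.join(cmd)}")
        result = subprocess.run(cmd)
        if result.returncode != 0:
            failed = True
    return 1 if failed else 0  # non-zero exit fails the CI stage

if __name__ == "__main__":
    sys.exit(main())
```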

Natalia: How should the security team work with developers to protect against vulnerabilities?

Daniel: I don’t expect developers to understand all the latest vulnerabilities. That’s the role of the security or security engineering team. As teams mature, the security engineering or security team acts as the go-to bridge; they understand the vulnerabilities and how they’re exploited, and they translate that into how people are building code for their organization. They’re also looking at the various tools or processes that could be leveraged to stop those vulnerabilities from becoming an issue.

One of the really cool things that I’m starting to see with GitHub is GitHub insights. Let’s say there’s a large organization that has thousands of repositories. You’d probably see a common pattern of vulnerabilities if you looked at all those repositories. We’re getting to the stage where we’re going to have a “Minority Report” style function for security.

On a monthly basis, I can say, “Show me the teams that are checking in bugs—let’s say deserialization.” I want to understand a problem before it becomes a major one and work with those teams to say, “Of the last 10 check-ins, 4 of them have been flagged as vulnerable to deserialization bugs. Let’s sit down and understand how you’re building, what you’re building toward, and what frameworks you’re trying to adopt. Can we make better tools for you to protect against the vulnerability? Do you need to understand the vulnerability itself?” The tools, pipelines, and education are out there. We can start being that bridge.
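
A rough sketch of that monthly view, assuming an organization uses GitHub code scanning: the documented org-level alerts endpoint can be aggregated by repository and rule so recurring bug classes, such as deserialization, stand out. The organization name and token below are placeholders, and the sketch reads only the first page of results.

```python
# Sketch: count open code scanning alerts by repository and rule across an org.
from collections import Counter
import requests

ORG = "example-org"    # hypothetical organization
TOKEN = "ghp_xxx"      # placeholder personal access token

resp = requests.get(
    f"https://api.github.com/orgs/{ORG}/code-scanning/alerts",
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Accept": "application/vnd.github+json",
    },
    params={"state": "open", "per_page": 100},
)
resp.raise_for_status()

counts = Counter(
    (alert["repository"]["full_name"], alert["rule"]["id"])
    for alert in resp.json()
)
for (repo, rule), n in counts.most_common(10):
    print(f"{repo}: {rule} flagged {n} times")
```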

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

 


1OWASP Application Security Verification Standard, OWASP.

The post Practical tips on how to use application security testing and testing standards appeared first on Microsoft Security Blog.

]]>
Cybersecurity’s next fight: How to protect employees from online harassment http://approjects.co.za/?big=en-us/security/blog/2021/08/25/cybersecuritys-next-fight-how-to-protect-employees-from-online-harassment/ Wed, 25 Aug 2021 16:00:12 +0000 Tall Poppy CEO and Co-founder Leigh Honeywell talks with Microsoft about how companies can support employees who have been targeted for online harassment.

The post Cybersecurity’s next fight: How to protect employees from online harassment appeared first on Microsoft Security Blog.

]]>
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Leigh Honeywell, CEO and Co-founder of Tall Poppy, which builds tools and services to help companies protect their employees from online harassment and abuse. In this blog, Leigh talks about company strategies for fighting online harassment.

Natalia: What are some examples of online harassment experienced in the workplace?

Leigh: Online harassment breaks down into two types. The first is harassment related to your job. One example of this would be that an ex-employee has a conflict with the company and is harassing former colleagues. In other cases, it has to do with a policy decision or a moderation decision that the company made, resulting in people within the organization experiencing harassment.

The other type of harassment has nothing to do with somebody’s day job. For instance, an employee had a bad breakup and their ex is bothering them at work. It’s not strictly related to the employee’s day-to-day work, but it’s going to impact their ability to be present at work and participate in work life. Many folks who are dealing with harassment—whether related to work or not—experience lost productivity, attrition, and burnout.

Discover how communication compliance in Microsoft 365 can help you detect harassing or threatening language and take action to protect your employees.

Natalia: How widespread of a problem is online harassment?

Leigh: Online harassment is a significant phenomenon. In 2020, 41 percent of Americans experienced it and 28 percent experienced the more severe kinds, like threats of violence, stalking, sexual harassment, and persistent harassment, according to the Pew Online Harassment Update1. That’s a huge number of people experiencing these issues. It has made us prioritize motivating people to improve their security hygiene around personal accounts.

Your employees’ personal accounts are part of the attack surface of the company. Social engineering attacks are when cybercriminals use psychological manipulation on their targets. If someone is being extorted based on their personal life, it has the potential to impact the company. In a classic CEO scam, somebody breaks into an executive’s personal email account, emails a person in accounting posing as the executive, and asks them to send a wire transfer to a bank account controlled by the scammer.

Natalia: What are recent trends in online harassment?

Leigh: According to the most recent Pew study, online harassment went up. Project Include just published a study2 on the internal company harassment landscape during COVID-19, and there has been a sharp uptick in workplace harassment.

Even though the numbers are stable in terms of how many people are experiencing online harassment, before COVID-19, if you were dealing with harassment from outside the company in the course of your work, you still got to go home and have that mental separation. When people work remotely, it’s a different experience, and it feels a lot more personal and vulnerable for those dealing with this kind of harassment.

Natalia: What should organizations understand about online harassment?

Leigh: It’s clear under US and Canadian law that organizations have a duty to ensure that employees don’t harass each other within the organization. When harassment in the workplace comes from outside the company, such as internet harassment, there isn’t a ton of clarity. I think it’s important to make sure that employees have clear policies and internal recourse.

In a typical harassment scenario, an employee says something controversial on Twitter, and people try to get them fired from their company. Sometimes, the things that people say that get them fired are racist or homophobic or biased in some way. When people talk about cancel culture, they are typically talking about consequences. You say something, and you get held to that word.

However, it’s hard to arbitrate. Is the controversial statement fireable, or is it controversial because they are members of an underrepresented group and are being targeted for standing up for themselves? That’s one of the lenses I use to unpack these situations.

Natalia: How can online harassment lead to hacking?

Leigh: After abuse on social channels and unwanted emails, online harassment sometimes gets more aggressive. You see password reset attempts that you have not requested. The next level is credential stuffing, where an attacker obtains a person’s email and password combo from old breaches and tries the credentials on different accounts. Another potential escalation is SIM swapping, which involves the attacker impersonating the victim to a phone company and porting their phone number away to a fresh SIM card. This attack usually targets folks who are high profile and is less common in stalking situations.
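
One small, defensive illustration of the credential-stuffing risk: checking whether an address appears in known breaches via the Have I Been Pwned API, the service Troy Hunt founded. This is a sketch only; the address is a placeholder, and the account search endpoint requires an API key.

```python
# Sketch: check an email address against known breaches with the HIBP v3 API.
import requests

HIBP_KEY = "your-api-key"         # placeholder; required by the HIBP API
ACCOUNT = "employee@example.com"  # hypothetical address to check

resp = requests.get(
    f"https://haveibeenpwned.com/api/v3/breachedaccount/{ACCOUNT}",
    headers={"hibp-api-key": HIBP_KEY, "user-agent": "breach-exposure-check"},
    params={"truncateResponse": "false"},
)

if resp.status_code == 404:
    print("No known breaches for this address.")
else:
    resp.raise_for_status()
    for breach in resp.json():
        print(f"Exposed in {breach['Name']} ({breach['BreachDate']})")
```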

Natalia: What does the incident response process look like when an employee is under attack?

Leigh: When dealing with an urgent incident in a workplace, such as somebody hacking into a printer at a branch office, there are known playbooks for responding to different attacks. Likewise, we have different playbooks based on the type of harassment situation an employee is dealing with, for example, harassment by an ex-employee or an employee being targeted due to a company policy decision.

We also pay a lot of attention to the adversaries. We’ll typically make sure the person has safe devices and ensure the adversary does not have access to their personal accounts. We’ll walk them through changing relevant passwords and checking authorized applications. From there, it’s about making sure that the person is OK, and that includes making sure they know about internal resources like an employee assistance program for counseling services.

Natalia: What are the best practices a company can institute to mitigate online harassment or assist those impacted by it?

Leigh: First, have clear internal policies and escalation points around acceptable social media use. There are some industries where it’s understandable that you don’t want employees having a social media presence, but those are rare these days. In general, it’s not realistic to tell employees not to exist online in public, so what’s important is to make boundaries, expectations, and guardrails clear via a written social media policy. Employees want to have long-lived careers and build their personal brands—trying to shut that down wholesale will end up with unfair enforcement and isn’t realistic.

The second best practice is to make sure people have tools and resources available to secure their personal lives, whether it’s a hardware security key such as a Yubikey or a quality password manager. All those day-to-day tools are as important in the workplace as they are in people’s personal lives. Online harassment training teaches employees how to keep attackers out of their personal accounts such as email, bank accounts, and social media. It can be overwhelming trying to understand all the information available about staying safe online. And there’s an argument to be made that you shouldn’t have to become an expert on personal cybersecurity to be able to live your life with an internet presence in the modern world.

The third one would be to ensure there are available resources within the organization that are clear and accessible, so it’s understood where the escalation paths are—whether it’s providing training to management and having management communicate to frontline staff or using internal communications tools to inform employees of resources.

Helping employees improve their personal cybersecurity can help them feel confident that their personal digital infrastructure is secure and helps ensure that online harassment isn’t going to escalate to an incident like an account takeover.

Learn more

Learn how communication compliance in Microsoft 365 can help you detect harassing or threatening language and take action to foster a culture of safety.

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

 


1The State of Online Harassment, Emily A. Vogels, Pew Research Center, 13 January 2021.

2Remote work since COVID-19 is exacerbating harm: What companies need to know and do, Yang Hong, McKensie Mack, Ellen Pao, Caroline Sinders, Project Include, March 2021.

The post Cybersecurity’s next fight: How to protect employees from online harassment appeared first on Microsoft Security Blog.

]]>
How security can keep media and sources safe http://approjects.co.za/?big=en-us/security/blog/2021/08/10/how-security-can-keep-media-and-sources-safe/ Tue, 10 Aug 2021 18:00:24 +0000 In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Runa Sandvik, an expert on journalistic security and the former Senior Director of Information Security at The New York Times. In this blog, Runa introduces the unique challenges and fundamentals of journalistic security.

The post How security can keep media and sources safe appeared first on Microsoft Security Blog.

]]>
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Runa Sandvik, an expert on journalistic security and the former Senior Director of Information Security at The New York Times. In this blog, Runa introduces the unique challenges and fundamentals of journalistic security.

Natalia: What is journalistic security? 

Runa: Being a reporter is not a 9-to-5 job. You’re not just a reporter when you step through the doors of The Washington Post or The Wall Street Journal or CNN. It becomes something that you do before work, at the office, at home, or after work at the bar. In some ways, you’re always on the job, so securing a journalist is about securing their life and identity. You’re not just securing the accounts and the systems that they’re using at work, which would fall under the enterprise; you’re securing the accounts and the systems that they use on a personal basis.

In addition, reporters travel. They cover protests and war zones. You will have to account for their physical and emotional safety. Journalistic security for me is effectively the umbrella term for digital security, physical security, and emotional safety.

Natalia: What is unique about securing a media organization?  

Runa: A media organization, whether it’s a smaller nonprofit newsroom or a larger enterprise, needs the same type of security tools and processes as any other organization. However, with a media organization, you must consider the impact. We’re not just talking about data belonging to the enterprise being encrypted or stolen and dumped online; we’re also talking about data from subscribers, readers, and sources. As a result, the potential ramifications of an attack against a media organization—whether it’s a targeted attack, like a nation-state actor looking for the sources of a story, or opportunistic ransomware—can be greater and involve far more people in a more sensitive context. Privacy-preserving monitoring is also important for newsrooms. I believe in helping the journalist understand what’s happening on their devices. If we aren’t teaching them to threat model and think about the digital security risks of their stories and communications with sources, we’re going to have a gap.

The other major difference is the pace. Newsrooms are incredibly deadline-driven, and security’s job is to enable journalists to do their job safely, not block their work. If a journalist tells their security team that they’re going to North Korea and need a secure setup, the team needs to shift their to-do list around to accommodate that—whether it means providing training or new hardware.

Natalia: What’s the biggest challenge to securing a media organization? 

Runa: The one thing that continues to be a challenge for media organizations is the lack of trust and collaboration between the internal IT and security teams and the newsroom. The newsroom doesn’t necessarily trust or go to those departments for help or tools to secure reporters, their material, and their work. If you’re building a defensive posture, you can’t secure what you don’t understand. If you don’t have a good relationship with the newsroom or know what kind of work they do, you’re going to have gaps. I’ve found it helpful to involve the newsroom when making decisions around tools and processes that impact their work. Involving the newsroom in discussions that affect it, even if they’re technical, will do a lot to build a trusting relationship.

Natalia: How do you build a process to evaluate and mitigate risk?  

Runa: If you’re writing about the best chocolate chip cookies, you’re probably fine. You’re probably not going to run into any issues with sources or harassment. If you decide to report on politics though, chances are you’ll face the risk of online threats and harassment that could escalate to physical threats and harassment. The context for a specific project and story becomes a set of risks that need to be accounted for.

Typically, the physical risk assessment process has already been established. Newsrooms have been sending reporters on risky assignments, such as to war zones, for a long time. In most newsrooms, a reporter will talk to the editor and assess the risk of any work-related travel. They get input from their physical security adviser, legal, and HR.

Building a similar process for the digital space becomes a challenge of education and awareness. In some cases, newsrooms have established and documented well-functioning processes, and security teams can become part of that decision tree. In other cases, you must start by introducing yourself to the newsroom and making sure people know you’re there to help. I’ve talked with news organizations in the United States, United Kingdom, and Norway that have cross-functional teams with representatives from the newsroom, IT, security, HR, communications, and legal to ensure no stories fall through the cracks.

Natalia: What processes, protocols, or technologies do you use to protect journalists and their investigations?

Runa: In a newsroom, you typically have “desks.” You have the investigations desk. You have style. You have sports. Different desks will have different needs from a technology and education perspective. Whenever I’m talking to a newsroom, I try to first cover security basics. We’re talking passwords, multifactor authentication, updates, and phishing. I cover the baseline and then look at the kind of work each desk is doing to drill in more. For investigations, this could involve setting up a tool to receive tips from the public, or air-gapped (offline) machines to securely review information.

For international travel, it could involve establishing an internal process with the IT team so a journalist can quickly request a new laptop or a new phone. In many cases, the tools that end up being used are popular and well-known. The journalist usually must use the same tools as the source.

Making the security team available to the newsroom also goes a long way. Reporters know how to ask questions—whether they’re doing an interview or trying to understand how a password manager works, or how to use a YubiKey. Give them an opportunity to ask questions through an internal chat channel or weekly meetings. It all goes back to relationship building and awareness.

Natalia: How has working in journalistic security shaped your perspective on security? 

Runa: When I first started working for The Tor Project, which develops free and open-source software for online anonymity, I was curious about how it’s possible to use lines of code to achieve that. I didn’t think much about the people who use it or what they use it for. But through that work, I learned a lot about the global impact The Tor Project has: from activists and journalists to security researchers and law enforcement. In interacting with reporters, I had to accept that there’s a difference between the ideal setup from a security standpoint and what’s going to get the job done. It would be great to give everyone a laptop with Tails or Qubes OS configured, but are they going to be able to use it for their work? At what point do we say that we’ve found a happy middle between securing the data or systems, enabling the reporter, and accepting risk?

Natalia: How can we continue to enhance security in the newsroom?  

Runa: We need more of a focus on security attacks that target and impact media organizations and reporters. Typically, when you read information about security attacks, it usually highlights the industries affected. You’ll see references to government, education, and healthcare, but what about media?

If you’re working at a media organization trying to understand what kind of digital threats you’re facing, where do you go to find information? I would love to see an organization or individual build a resource with a timeline of the kind of digital attacks we’ve seen against media organizations in the United States from 2015 to 2021. This would be a way to get a pulse on what’s happening to educate journalists of the risks, identify impact and risk to operations, and inform leadership.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post How security can keep media and sources safe appeared first on Microsoft Security Blog.

]]>
A guide to balancing external threats and insider risk http://approjects.co.za/?big=en-us/security/blog/2021/07/22/a-guide-to-balancing-external-threats-and-insider-risk/ Thu, 22 Jul 2021 17:00:40 +0000 Rockwell Automation Vice President and Chief Information Security Officer Dawn Cappelli talks about assessing, measuring, and protecting against insider risk.

The post A guide to balancing external threats and insider risk appeared first on Microsoft Security Blog.

]]>
The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Rockwell Automation Vice President and Chief Information Security Officer Dawn Cappelli. In this blog post, Dawn talks about the importance of including insider risk in your cybersecurity plan. 

Natalia: What is the biggest barrier that organizations face in addressing insider risk?

Dawn: The biggest barrier is drawing attention to insider risk. We heard about the ransomware group bringing down the Colonial Pipeline. We hear about ransomware attacks exposing organizations’ intellectual property (IP). We’re not hearing a lot about insider threats. We haven’t had a big insider threat case since Edward Snowden, so that sometimes makes it hard to get buy-in for an insider risk program. But I guarantee insider threats are happening. Intellectual property is being stolen, and systems are being sabotaged. The question is whether they are being detected—are companies looking?

Natalia: How do you assess the success of an insider risk program?

Dawn: First, we measure our success by significant cases. For instance, we have someone leaving the company to go to a competitor, we catch them copying confidential information that they clearly want to take with them, and we get it back.

Second, we measure success by looking at the team’s productivity. Everyone in the company has a risk score based on suspicious or anomalous activity as well as contextual data, for instance, they are leaving the company. Every day we start at the top of the dashboard with the highest risk and work our way down. We look at how many cases have no findings because that means we’re wasting time, and we need to adjust our risk models to eliminate false positives.
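
To illustrate the idea of a daily, score-ranked triage queue, here is a toy model, not Rockwell’s actual system, that combines anomalous-activity signals with contextual data such as a pending resignation into a per-person score and sorts the highest risk to the top. The signal names and weights are invented for the example.

```python
# Toy illustration of an insider risk score and top-down daily triage.
from dataclasses import dataclass, field

WEIGHTS = {
    "large_upload_to_personal_cloud": 40,
    "removable_media_copy": 30,
    "after_hours_access_spike": 15,
    "resignation_submitted": 50,     # contextual: leaving the company
    "access_to_crown_jewel_ip": 20,  # contextual: scope of access
}

@dataclass
class Person:
    name: str
    signals: list = field(default_factory=list)

    @property
    def risk_score(self) -> int:
        return sum(WEIGHTS.get(s, 0) for s in self.signals)

people = [
    Person("A. Analyst", ["after_hours_access_spike"]),
    Person("B. Engineer", ["resignation_submitted", "removable_media_copy",
                           "access_to_crown_jewel_ip"]),
]

# Daily triage: start with the highest-risk person and work down the list.
for person in sorted(people, key=lambda p: p.risk_score, reverse=True):
    print(f"{person.risk_score:>3}  {person.name}  {person.signals}")
```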

We also look at the reduction in cases because we focus a lot on deterrence, communication, and awareness, as well as cases by business unit and by region. We run targeted campaigns and training for specific business units or types of employees, regions, or countries, and then look at whether those were effective in reducing the number of cases.

Natalia: How does measuring internal threats differ from measuring external threats?

Dawn: From an external risk perspective, you need to do the same thing—see if your external controls are working and if they’re blocking significant threats. Our Computer Security Incident Response Team (CSIRT) also looks at the time to contain and the time to remediate. We should also measure how long it takes to respond and recover IP taken by insiders.

By the way, I like using the term “insider risk” instead of “insider threat” because we find that most suspicious insider activity we detect and respond to is not intentionally malicious. Especially during COVID-19, we see more employees who are concerned about backing up their computer, so they pull out their personal hard drive and use it to make a backup. They don’t have malicious intent, but we still must remediate the risk. Next week they could be recruited by a competitor, and we can’t take the chance that they happen to have a copy of our confidential information on a removable media device or in personal cloud storage.

Natalia: How do you balance protecting against external threats and managing insider risks?

Dawn: You need to consider both. You should be doing threat modeling for external threats and insider risks and prioritizing your security controls accordingly. An insider can do anything an external attacker can do. There was a case in the media recently where someone tried to bribe an insider to plug in an infected USB drive to get malware onto the company’s network or open an infected attachment in an email to spread the malware. An external attacker can get in and do what they want to do much more easily through an insider.

We use the National Institute of Standards and Technology (NIST) Cybersecurity Framework (CSF) for our security program, and we use it to design a holistic security program that encompasses both external and insider security risks. For example, we identify our critical assets and who should have access to them, including insiders and third parties. We protect those assets from unauthorized access—including insiders and outsiders. We detect anomalous or suspicious behavior from insiders and outsiders. We respond to all incidents and recover when necessary. We have different processes, teams, and technologies for our insider risk program, but we also use many of the same tools as the CSIRT, like our Security Information and Event Management (SIEM) and Microsoft Office 365 tools.

Natalia: What best practices would you recommend for data governance and information protection?

Dawn: Don’t think about insider threats only from an IP perspective. There’s also the threat of insider cyber sabotage, which means you need to detect and respond to activities like insiders downloading hacking tools or sabotaging your product source code.

Think about it: an external attacker has to get into the network, figure out where the development environment is, get the access they need to compromise the source code or development environment, plant the malicious code or backdoor into the product—all without being detected. It would be a lot easier for an insider to do that because they know where the development environment is, they have access to it, and they know how the development processes work.

When considering threat types, I wouldn’t say that you need to focus more on cyber sabotage than IP; you need to focus on them equally. The mitigations and detections are different for IP theft versus sabotage. For theft of IP, we’re not looking for people trying to download malware, but for sabotage, we are. The response processes are also different depending on the threat vector.

Natalia: Who needs to be involved in managing and reducing insider risk, and how?

Dawn: You need an owner for your insider risk program, and in my opinion, that should be the Chief Information Security Officer (CISO). HR is a key member of the virtual insider risk team because happy people don’t typically commit sabotage; it’s employees who are angry and upset, and they tend to come to the attention of HR. Every person in Rockwell HR takes mandatory insider risk training every year, so they know the behaviors to look for.

Legal is another critical member of the team. We can’t randomly take people’s computers and do forensics for no good reason, especially in light of all the privacy regulations around the world. The insider risk investigations team is in our legal department and works with legal, HR, and managers. For any case involving personal information and any case in Europe, we go to our Chief Privacy Officer and make sure that we’re adhering to all the privacy laws. In some countries, we also have to go to the Works Council and let them know we’re investigating an employee. The security team is responsible for all the controls—preventive and detective—as well as the technology and risk models.

Natalia: What’s next in the world of data regulation?

Dawn: Privacy is the biggest issue. The Works Councils in Europe are becoming stronger and more diligent. They are protecting the privacy of their fellow employees, and the privacy review processes make the deployment of monitoring technology more challenging.

In the current cyber threat environment, we must figure out how to get security and privacy to work together. My advice to companies operating in Europe is to go to the Works Councils as soon as you’re thinking about purchasing new technology. Make them part of the process and be totally transparent with them. Don’t wait until you’re ready to deploy.

Natalia: How will advancements like cloud computing and AI change the risk landscape?

Dawn: We have a cloud environment, and our employees are using it to develop products. From inception, the insider risk team worked to ensure that we’re always threat modeling the environment. We go through the entire NIST CSF for that cloud environment and look at it from both an external and insider risk perspective.

Companies use empirical, objective data to create and train AI models for their products. The question becomes, “Do you have controls to identify an insider who deliberately wants to bias your models or put something malicious into your AI models to make them go off course later?” With any type of threat, ask if an insider could facilitate this type of attack. An insider can do anything an outsider can do, and they can do it much more easily.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

The post A guide to balancing external threats and insider risk appeared first on Microsoft Security Blog.

]]>