Matthew Hickey, Author at Microsoft Security Blog
http://approjects.co.za/?big=en-us/security/blog

Identifying cyberthreats quickly with proactive security testing
http://approjects.co.za/?big=en-us/security/blog/2022/11/03/identifying-cyberthreats-quickly-with-proactive-security-testing/
Thu, 03 Nov 2022 16:00:00 +0000

Hacker House co-founder and Chief Executive Officer Matthew Hickey offers recommendations for how organizations can build security controls and budget.

The security community is continuously changing, growing, and learning from each other to better position the world against cyberthreats. In the latest post of our Community Voices blog series, Microsoft Security Senior Product Marketing Manager Brooke Lynn Weenig talks with Matthew Hickey, Co-founder, Chief Executive Officer (CEO), and hacker of Hacker House. The thoughts below reflect Matthew’s views, not the views of Matthew’s employer, and are not legal advice. In this blog post, Matthew talks about proactive security testing and how organizations can build security controls and budget.

Brooke: How did you get into cybersecurity?

Matthew: If your dad is a car mechanic, you grow up learning about cars. During the 1980s, my dad was super into computers. He used to go to my grandma’s school and bring home the computers prior to anyone really understanding what they were. These were the filing cabinet days and the days of carbon paper. Only very academic people and fringe technologists were interested in cybersecurity. When I was in high school, I had networks in my house with networked games. I started picking apart how the phone network worked and how internet access worked. My dad was supportive. He said, “If a 13-year-old kid can break into it, maybe we should not be using it.”

I pushed hard to get myself in front of as many people as I could and ended up working for a group from the National Computing Center. They had begun selling cybersecurity assurance services and penetration testing. I built a portfolio of my work publishing papers and showing people how computer systems were broken and how you could hack into them. At the time, you could not go to college and do cybersecurity. I dealt with a lot of rejection letters and a lot of people saying no and then I got my first job—that was 20 years ago. Now, I run my own company and I have written a book on the subject.

Brooke: What is most fascinating to you about cybersecurity?

Matthew: For me, it is the exciting element of offensive security testing. I take a low-privileged user on the system and say, “I want to make this user become a high-privileged user without authorization” and I will poke and probe my way through the system, testing all the boundaries and controls in place until I find ways to break it.

My journey began with an interest in things like state machines, where a computer goes through the lifecycle of a connection. When you connect your system to a server in the office, the computer keeps track of different states. For example, “Did you enter the right password?” and “Should it give you access?” I find these kinds of problems intellectually challenging and quite enjoyable.

Brooke: How do you help clients define and set goals for security control?

Matthew: There is a saying that this industry is run on fear, uncertainty, and doubt. I often ask clients: “If a hacker broke in tomorrow and had free rein of all your systems, what are you most concerned about?” We identify all the assets in the environment and their sensitive data and then review controls based on their concerns. Usually, they are most concerned about payment information and commercially sensitive information, or they are storing things that they perhaps should not have been storing, including credit card data and anything that could cause brand reputational damage.

It’s important to get board buy-in and foster a culture of cybersecurity in the organization and make it something that everybody in the company talks about regularly, like with phishing awareness.

Another key thing is to never punish the user. If they are at work and opening emails, that is what you are asking that person to do. Even the best cybersecurity professionals will click on a phishing link eventually. It’s human nature. These psychological lures are designed to get people to click on them. One of the most effective is a fake FedEx or UPS notification. Nine times out of 10, people will click on the link to track that parcel because they want to know. The attackers know our psychology and our natural human behaviors and how to get attacks through our radar in a way that does not alert us that we are being attacked. Proper cybersecurity in an organization takes human error into account.

Brooke: How do you reduce assessment times and identify threats faster?

Matthew: The MITRE ATT&CK® Framework has been massively advantageous. It is a spreadsheet-based approach to understanding how an attacker behaves in an environment and it stems back to a paper written by Lockheed Martin. Lockheed Martin and the defense sector obviously were big targets for advanced persistent threats and cyber-enabled economic espionage, where nation-state actors break into their systems to steal information for espionage purposes.

Lockheed Martin came up with what they call the cyber kill chain, a timeline of an attack that runs from the very point the attacker starts their breach into the network to the end, where they have exfiltrated and stolen the information. They modeled this and identified that the earlier you stop the attacker along this kill chain, the better, because they must start over again. The further along the chain they are when you stop them, the more the attack costs them in terms of time and exploits used.
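To make that cost argument concrete, here is a minimal sketch in Python. The phase names are Lockheed Martin’s published Cyber Kill Chain stages; treating “phases the attacker must redo” as the cost is an illustrative simplification, not part of the model itself.

```python
# Minimal sketch: the later an intrusion is stopped along the kill chain,
# the more effort the attacker has already sunk in and must repeat.
# Phase names follow Lockheed Martin's published Cyber Kill Chain.

KILL_CHAIN = [
    "Reconnaissance",
    "Weaponization",
    "Delivery",
    "Exploitation",
    "Installation",
    "Command and Control",
    "Actions on Objectives",
]

def phases_to_redo(stopped_at: str) -> list[str]:
    """Phases the attacker has already worked through (and must repeat)
    if the intrusion is detected and stopped at `stopped_at`."""
    return KILL_CHAIN[: KILL_CHAIN.index(stopped_at) + 1]

if __name__ == "__main__":
    for phase in ("Delivery", "Actions on Objectives"):
        wasted = len(phases_to_redo(phase))
        print(f"Stopped at {phase}: {wasted} phase(s) of attacker effort wasted")
```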

MITRE then came up with a catalog of tactics, techniques, and procedures (TTPs). You can look at the threats in your industry and the known behaviors of the threats targeting your sector and begin unit testing those individual items. Instead of running a six-month engagement where we break into the client’s environment and do all the stealthy work, like monitoring the network, we test against the actual threats and against these component items. That narrows the time involved in assessment activities, and clients get the result quicker.

Brooke: At what stage do clients bring your organization into the process?

Matthew: We work with a whole range of different clients, including people who have already built their product and people who have started to build their product. These kinds of strategies are usually very effective against large organizations—multinational corporations and Fortune 500 companies.

If you want to be effective in cybersecurity, the costs need to be on the attackers. We encourage organizations to move away from this longstanding engagement model and instead focus on doing unit tests against the actual situations they face. We call them cyber preparedness drills. We mimic the attacker’s behavior utilizing tools we’ve built, like the User Account Control (UAC) bypass testing tools we have published on GitHub.

These types of common attacker behaviors should be well detected, and Microsoft Defender detects them even better than it did previously. Simply scripting standard individual tests for specific items in the ATT&CK® framework, whether in the PowerShell command shell or on the .NET developer platform, and running those as simulations gives you better results.
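Below is a minimal sketch of what one of those scripted, per-technique tests could look like, written in Python purely for illustration (the same idea works in PowerShell or .NET). The technique IDs are genuine MITRE ATT&CK identifiers; the test commands, the alert-export file, and the detection check are hypothetical placeholders you would replace with your own simulations and telemetry queries.

```python
"""Illustrative sketch: run a benign command mapped to an ATT&CK technique,
then check whether your tooling raised an alert for it. The technique IDs are
genuine MITRE ATT&CK identifiers; the commands (Windows-oriented), the alert
export file, and the detection check are placeholders for your environment."""
import json
import subprocess
from pathlib import Path

# Hypothetical mapping: ATT&CK technique ID -> harmless command that exercises
# the behavior your detections should catch.
SIMULATIONS = {
    "T1059.001": ["powershell.exe", "-Command", "Get-Process"],  # Command and Scripting Interpreter: PowerShell
    "T1082": ["systeminfo"],                                     # System Information Discovery
}

ALERT_EXPORT = Path("alerts_export.json")  # hypothetical export from your SIEM/EDR


def alert_raised(technique_id: str) -> bool:
    """Placeholder check: did any exported alert reference this technique ID?"""
    if not ALERT_EXPORT.exists():
        return False
    alerts = json.loads(ALERT_EXPORT.read_text())
    return any(technique_id in alert.get("techniques", []) for alert in alerts)


def run_simulations() -> None:
    for technique_id, command in SIMULATIONS.items():
        try:
            subprocess.run(command, capture_output=True, check=False)
        except FileNotFoundError:
            print(f"{technique_id}: test command not available on this host, skipped")
            continue
        status = "detected" if alert_raised(technique_id) else "NOT detected"
        print(f"{technique_id}: {status}")


if __name__ == "__main__":
    run_simulations()
```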

Brooke: What advice would you give to cybersecurity leaders on how to manage their budgets?

Matthew: There is a big push in the industry to do what is most interesting. Clients will say, “I want you to simulate a real attacker. I want the best hackers to throw everything you have at the system.” They want to spend a ton of money simulating a real attacker and I usually discover they have not covered any of the basics, like telemetry, alerting, or network defense.

It is easy to bring people on board, but if you have not looked at your environment and the basics, there is no point hiring a team to mimic your attacker and do a full six-month red team engagement. Your attacker is going to break into your network for free anyway, so you might as well focus on how you can use that budget to build better defenses to alert your team. So many companies do not know how many systems or databases they have, for instance. They do not have an accurate picture of what is happening in their environment. They look to the penetration testers who end up telling them more than they know about their network. 

Leaders should always ask: Do you have an accurate picture of the patch levels in your environment? If someone opens malware, can you see the events? Do you get the telemetry?
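One way to turn those questions into something repeatable is a small script that compares your asset inventory against recent telemetry and flags hosts that have gone quiet. This is only a sketch: the two JSON files, their field names, and the 24-hour threshold are assumptions standing in for whatever inventory and SIEM or EDR export you actually have.

```python
"""Sketch: flag hosts in the asset inventory that have not produced security
telemetry recently. The file names, JSON shapes, and 24-hour threshold are
illustrative assumptions, not a real product export format."""
import json
from datetime import datetime, timedelta, timezone
from pathlib import Path

INVENTORY = Path("asset_inventory.json")    # assumed: ["host1", "host2", ...]
TELEMETRY = Path("telemetry_export.json")   # assumed: [{"host": "...", "last_event": "2024-01-01T00:00:00+00:00"}, ...]
MAX_SILENCE = timedelta(hours=24)           # assumption: quieter than this means no usable telemetry


def silent_hosts() -> list[str]:
    hosts = set(json.loads(INVENTORY.read_text()))
    now = datetime.now(timezone.utc)
    last_seen = {
        record["host"]: datetime.fromisoformat(record["last_event"])  # timestamps assumed timezone-aware ISO 8601
        for record in json.loads(TELEMETRY.read_text())
    }
    return sorted(
        host for host in hosts
        if host not in last_seen or now - last_seen[host] > MAX_SILENCE
    )


if __name__ == "__main__":
    for host in silent_hosts():
        print(f"No recent security telemetry from: {host}")
```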

You could buy the best security system around, but if it is getting 150 alerts a day and nobody is paying attention, it is useless because no one is ever going to act. When looking at your budget and how to spend it effectively, focus on granular engagement. When you hire a firm, hire one with a strong background and understanding, one that can make effective use of that budget.

There are three approaches. There is a black box assessment methodology, where we know nothing about the environment, the target, or the target network. Then, you have a gray box methodology, where a client might share a little bit of information, such as what is given to a new staff member starting in an area with a high employee turnover rate. And third, there is a white box assessment, where they give us anything we want to know and we can see what they see. In our experience, you get the best results from white box assessments and from doing bite-sized exercises, as your security provider is better informed and not reliant on the guesswork inherent in the other two methodologies.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.

How purple teams can embrace hacker culture to improve security
http://approjects.co.za/?big=en-us/security/blog/2021/06/10/how-purple-teams-can-embrace-hacker-culture-to-improve-security/
Thu, 10 Jun 2021 16:00:02 +0000

Hacker House co-founder and CEO Matthew Hickey introduces the concept of purple teaming and how a purple team can benefit an organization.

The security community is continuously changing, growing, and learning from each other to better position the world against cyber threats. In the latest Voice of the Community blog series post, Microsoft Product Marketing Manager Natalia Godyla talks with Matthew Hickey, co-founder, CEO, and writer for Hacker House. In this blog post, Matthew talks about the benefits of a purple team and offers best practices for building a successful one.

Natalia: What is a purple team, and how does it bridge red and blue teams?

Matthew: The traditional roles involve a blue team that acts as your defenders and a red team that acts as your attackers. The blue team wants to protect the network. The red team works to breach the network. They want to highlight the security shortcomings of the blue team’s defenses. The two teams aren’t always working on the same objective to secure information assets and eliminate information risk, as each is focused on the objective of their respective team—one to prevent breaches, the other to succeed in a breach.

Purple teaming is an amalgamation of the blue and red teams into a single team to provide value to the business. With a successful purple team, two groups of people normally working on opposite ends of the table are collaborating on a unified goal—improving cybersecurity together. It can remove a lot of competitiveness from security testing processes. Purple teams can replace red and blue teams, and they’re more cost-effective for smaller organizations. If you’re a big conglomerate, you might want to consider having a blue team, a red team, and a purple team. Purple teams work on both improving knowledge of the attacks an organization faces and building better defenses to defeat them.

Natalia: Why do companies need purple teams?

Matthew: Computer hacking has become much more accessible. If one clever person on the internet writes and shares an exploit or tool, everyone else can download the program and use it. It doesn’t have a high barrier to entry. There are high school kids exploiting SQL injection vulnerabilities and wiping millions off a company’s valuation. Because hacking information is more widely disseminated, it’s also more accessible to the people defending systems. There have also been significant improvements in how we understand attacker behavior and model those behaviors. The MITRE ATT&CK framework, for instance, is leveraged by most red teams to simulate attackers’ behavior and how they operate.

When red and blue teams work together as a purple team, they can perform assessments in a fashion similar to unit tests against frameworks like MITRE ATT&CK and use those insights on attacker behavior to identify gaps in the network and build better defenses around critical assets. By adopting the attackers’ techniques and working with the system to build more comprehensive assessments, you gain advantages your attacker does not have. Those advantages come from your business intelligence and people.
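As a rough sketch of that unit-test mindset from the coverage side, the snippet below compares the ATT&CK techniques relevant to a threat model against the detections a team has already verified and prints the gaps. The technique IDs and names are real ATT&CK entries; the two sets themselves are illustrative assumptions, not data from any particular environment.

```python
"""Sketch: compare ATT&CK techniques relevant to your threat model against the
techniques your purple team has verified detections for, and report the gaps.
The technique IDs and names are real ATT&CK entries; both sets are illustrative."""

# Assumption: techniques observed in threats targeting your sector.
RELEVANT_TECHNIQUES = {
    "T1566": "Phishing",
    "T1059": "Command and Scripting Interpreter",
    "T1003": "OS Credential Dumping",
    "T1021": "Remote Services",
    "T1048": "Exfiltration Over Alternative Protocol",
}

# Assumption: techniques with detections the team has already tested and confirmed.
VERIFIED_DETECTIONS = {"T1566", "T1059"}


def coverage_gaps() -> list[tuple[str, str]]:
    """(technique ID, name) pairs with no verified detection."""
    return sorted(
        (tid, name)
        for tid, name in RELEVANT_TECHNIQUES.items()
        if tid not in VERIFIED_DETECTIONS
    )


if __name__ == "__main__":
    for tid, name in coverage_gaps():
        print(f"Gap: {tid} ({name}) has no verified detection")
```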

Natalia: What are the benefits of bringing everything under one team?

Matthew: The benefits of a purple team include speed and cost reduction. Purple teams are typically constructed as an internal resource, which can reduce the need to reach out to external experts for advice. If they get alerts in their email, purple teams can wade through them and say, “Oh, this is a priority because attackers are going to exploit this quickly since there’s public exploit code available. We need to fix this.” Unit testing specific attacker behaviors and capabilities against frameworks on an ongoing basis, as opposed to performing periodic, full-blown simulated engagements that last several weeks to several months, is also a huge time reduction for many companies.

Red teams can often be sidetracked by wanting to build the best phishing attack. Blue teams want to make sure their controls are working correctly. They’ll achieve much more in a shorter timeframe as a purple team because they are more transparent with one another, sharing their expertise and understanding of the threats. You’ll still need to occasionally delve into the world of a simulated, scenario-driven exercise where one team is kept in the dark to ensure processes and practices are effective.

Natalia: How do purple teams provide security assurance?

Matthew: Cybersecurity assurance is the process of understanding what the information risk is to a business—its servers, applications, or any supporting IT infrastructure. Assurance work is essentially demonstrating whether a system has a level of security or risk management that an organization is comfortable with. No system in the world is 100 percent infallible. There’s always going to be an attack you weren’t expecting. The assurance process is meant to make attacks more complex and costly for an attacker to pull off. Many attackers are opportunistic and will frequently move on to an easier target when they encounter resistance, and strong resistance comes from purple teams. Purple teams are used to provide a level of assurance that what you’ve built is resilient enough to withstand modern network threats by increasing the visibility and insights shared among typically siloed teams.

Natalia: What are best practices for building a successful purple team?

Matthew: You don’t need to be an expert on zero-day exploitation or the world’s best programmer, but you should have a core competency in cybersecurity and an understanding of foundational basics like how an attacker behaves, how a penetration test is structured, which tools are used for what, and how to review a firewall or event log. Purple teams should be able to review malware and understand its objectives, review exploits to understand their impact, and make use of tools like nmap and mitmproxy to scan for vulnerabilities. They also should understand how to interpret event logs and translate the attack side of hacking into defenses like firewall rules and policy enforcement. People come to me and say, “I didn’t know why we were building firewalls around these critical information assets until I saw somebody exploit a PostgreSQL server and get a root shell on it, and suddenly, it all made sense why I might need to block outgoing Internet Control Message Protocol (ICMP) traffic.”
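For the scanning side of that skill set, a thin wrapper around nmap’s machine-readable output is often enough to get started. The sketch below assumes nmap is installed and on the PATH and that you are authorized to scan the target; the target address is a placeholder from the TEST-NET-1 documentation range.

```python
"""Sketch: run an nmap service scan against an authorized target and list the
open ports from its XML output. Assumes nmap is installed and on the PATH and
that you have permission to scan the target; the address below is a placeholder."""
import subprocess
import xml.etree.ElementTree as ET

TARGET = "192.0.2.10"  # placeholder address from the TEST-NET-1 documentation range


def open_ports(target: str) -> list[tuple[int, str]]:
    """Return (port, service name) pairs that nmap -sV reports as open."""
    result = subprocess.run(
        ["nmap", "-sV", "-oX", "-", target],
        capture_output=True, text=True, check=True,
    )
    root = ET.fromstring(result.stdout)
    ports = []
    for port in root.iter("port"):
        state = port.find("state")
        service = port.find("service")
        if state is not None and state.get("state") == "open":
            name = service.get("name", "unknown") if service is not None else "unknown"
            ports.append((int(port.get("portid")), name))
    return ports


if __name__ == "__main__":
    for portid, name in open_ports(TARGET):
        print(f"{portid}/tcp open  {name}")
```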

Hiring hackers to join your purple team used to be taboo, yet hackers often make excellent defenders. Embrace hacking because it’s a problem-solving mentality. The information is out there, and your attackers already know it. You might as well know it too, so hire hackers. I’ve heard people say hackers are the immune system for the internet when describing how their behavior can be beneficial. Hackers are following what’s going on out there and are going to be the people who see an attack and say, “We use Jenkins for our production build. We better get that patched because this new 9.8 CVSS scoring vulnerability came out two hours ago. Attackers are going to be on this really quickly.” Breaking into computers is done step by step; it’s a logical process. Attackers find a weakness in the armor. They find another weakness in the armor. They combine those two. They get access to some source code. They get some credentials from a system. They hop onto the next system. Once you understand the workflow of what your attacker is doing, you get better at knowing which systems will need host intrusion detection, enhanced monitoring, and the reasons why. Hackers are the ones who have a handle on your risks as an organization and can provide insight as to what threats your teams should be focused on addressing.

Natalia: How should managers support the training and education needs of their purple team?

Matthew: Making sure people have the right training and the right tooling for their job can be hard. You walk through any expo floor, and there are hundreds of boxes with fancy lights and a million product portfolios. You could buy every single box off that expo floor, and none of it is going to do you any good unless you’ve got the right person who understands how that box works and can interpret that data. Your people are more important in some respects than the technology because they’re your eyes and ears on what’s happening on the network. If you’ve got a system that sends 50 high-risk alerts, and no one is picking up and reacting to those alerts, you’ve just got an expensive box with flashing lights.

If you’re hiring someone onto a purple team, make sure they are supported to attend conferences or network with industry peers and invest in their training and education. That is always going to give you better results as they learn and are exposed to more insights, and your people will feel more valued as well. If you want to learn about adversarial behavior and how you can use computer hacking to provide assurance outputs to businesses, read Hands-on Hacking: What can you expect? by Hacker House.

Learn more

To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us at @MSFTSecurity for the latest news and updates on cybersecurity.
