Considerations for Securing your Applications
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2022/06/07/considerations-for-securing-your-applications/
Tue, 07 Jun 2022 14:42:23 +0000

This article is based on my Microsoft Build 2022 session ‘Securing Applications’. I only had 15 minutes allocated to speak – this is what I squeezed in and delivered!

As organisations navigate digital transformation – there is no topic more important than defending yourself from attack.

Security is a complex topic because it’s wide-ranging and has many facets, and therefore needs different skills and roles to be involved.

To be successful, organisations must eliminate the silos between the different teams and embed a security first culture into their processes and tooling.

Information Security


Security is about protecting an organisation to ensure the resilience and continuity of its business operations.

A subset of security is Information Security (infosec). Infosec is about the protection of data and associated applications, and it’s so critical for the ongoing existence and success of an organisation.

There are three areas at the core of infosec – these are:

  • Confidentiality – making sure that data is protected from unauthorised access.
  • Integrity – making sure that data is kept accurate/consistent, and protected from
    unauthorised modification.
  • Availability – making sure that data is available when and where it is needed.

Whilst infosec is often associated with defending against malicious attackers, it also needs to consider other types of events that can cause loss, such as ‘acts of god’. For example, a lightning storm might cause a power outage and bring down systems or corrupt data.

It’s also about making sure everyone does the right things and that those things are right. For example, there is no point having a manual backup policy if people aren’t performing backups regularly in accordance with the defined schedule.

Cybercrime


Cybercrime is a global threat and a huge industry. Bad actors will attack organisations of any size, whatever their purpose. In most cases the motivation is financial gain, but some have political objectives and others just enjoy the challenge of causing mayhem and getting publicity.

Exploits are traded on the dark web at low cost, enabling less skilled people to be involved in malicious activity and swelling the number of attacks.

It’s often the case that exploiting just one vulnerability can open the door and provide a stepping stone into a network. An attacker can then move laterally through the network and systems to unleash further wide-ranging hostile actions, ultimately impacting the confidentiality, integrity, and availability of data.

Bad actors do bad things:

  • In many cases it will cause severe financial impact (for example – loss of customer trust, loss of intellectual property, severe compliance fines, corrupted data).
  • In the worst cases, it will cause business ruin.
  • In the most catastrophic case, a malicious cyber-attack can cause loss of life.

The following is the Attacker’s Advantage and the Defender’s Dilemma:

  • Defender must defend all points – Attacker only needs one weak point.
  • Defender must defend against known attacks – Attacker can probe for unknown vulnerabilities.
  • Defender must be constantly vigilant – Attacker can strike at will.
  • Defender must play by the rules – Attacker can play dirty.

This problem can be compounded when there are huge numbers of attackers targeting the defender.

Risk Management


A key part of information security is the practice of protecting systems/data by mitigating risks. The risk management process identifies risks, and for each risk the following is assessed:

  • The likelihood of that risk being exercised.
  • The impact that it will cause.

This process will result in a register of risks with a wide spectrum of ‘level of concern’. It’s then a business decision, based on the organisation’s appetite for risk, how to address each risk – the options are:

  • Avoid – resolve the risk so as to eliminate it completely.
  • Accept – acknowledge the risk and choose not to avoid, transfer or mitigate it. You might do this if the assessed impact is small or the likelihood of it happening is remote.
  • Transfer – move the risk to a third party, perhaps by taking out insurance.
  • Mitigate – do something to reduce the likelihood or impact of the risk.

This will be an iterative process, so that the results of ongoing monitoring are fed back into subsequent cycles of the process.
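To make the assessment step concrete, here’s a minimal sketch of a risk register in Python – the risks, the 1-5 likelihood/impact scales and the chosen treatments are illustrative assumptions rather than a prescribed methodology:

```python
from dataclasses import dataclass

# Illustrative 1-5 scales; real frameworks define their own scoring.
@dataclass
class Risk:
    name: str
    likelihood: int   # 1 = remote, 5 = almost certain
    impact: int       # 1 = negligible, 5 = severe
    treatment: str    # avoid | accept | transfer | mitigate

    @property
    def score(self) -> int:
        return self.likelihood * self.impact

register = [
    Risk("Leaked API key in source code", likelihood=4, impact=5, treatment="mitigate"),
    Risk("Power outage in primary region", likelihood=2, impact=4, treatment="transfer"),
    Risk("Typo in marketing banner", likelihood=3, impact=1, treatment="accept"),
]

# Review the highest-concern risks first on each iteration of the process.
for risk in sorted(register, key=lambda r: r.score, reverse=True):
    print(f"{risk.score:>2}  {risk.name}  ->  {risk.treatment}")
```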

We can reduce risk by doing the right things, and this breaks down into four distinct categories:

  • Secure by Design.
  • Secure the Code.
  • Secure the Environment.
  • Secure the Operations.

Secure by Design


Security starts with the design before any code is written.

The design will transform user requirements into an architecture containing the platform components and software modules to be developed, and will define how they interact.

Threat modelling is done by people with a mindset that will think like an attacker rather than a defender. They will use a threat modelling methodology to tease out flaws in the design – which can then be addressed early, when they are relatively easy and cost-effective to resolve.

Zero Trust is a response to the way networks have changed. We used to have an internal network and an external network with a firewall between to keep the bad guys out. Today, your trusted people are working on untrusted networks, and most likely there will be untrusted people on your trusted networks. Today you have to protect resources and not network segments.

The core principle is that everything is a threat – don’t trust anything or anyone at any time until identity has been fully verified and you have a high level of assurance that it is what it claims to be.

  • Verify explicitly – Always authenticate and authorise based on all available data points, including user identity, location, device health, service or workload, data classification, and anomalies.
  • Use least privileged access – Limit user access with just-in-time and just-enough-access (JIT/JEA), risk-based adaptive policies, and data protection to help secure both data and productivity.
  • Assume breach – Minimise blast radius and segment access. Verify end-to-end encryption and use analytics to get visibility, drive threat detection, and improve defences.
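As a toy illustration of how these principles might combine in code, the sketch below evaluates a request against several signals before granting narrowly scoped access – the signal names and rules are hypothetical stand-ins for a real conditional access engine:

```python
from dataclasses import dataclass

@dataclass
class AccessRequest:
    user_authenticated: bool
    mfa_passed: bool
    device_compliant: bool
    location_trusted: bool
    requested_role: str

def authorise(req: AccessRequest) -> str:
    """Return the access level to grant; deny unless every signal checks out."""
    if not (req.user_authenticated and req.mfa_passed):
        return "deny"                      # verify explicitly
    if not req.device_compliant:
        return "deny"                      # assume breach: an unhealthy device gets nothing
    if not req.location_trusted:
        return "read-only"                 # risk-based adaptive decision
    if req.requested_role == "admin":
        return "admin (time-limited)"      # just-in-time elevation, not standing access
    return "standard"

print(authorise(AccessRequest(True, True, True, False, "admin")))  # prints: read-only
```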

Identity is key and has become a popular attack vector. Do not build your own identity solution – instead use industry standards and products/services from specialised organisations that have invested substantially in this area and have battle-proven offerings.

Data Classification is knowing your data – is it Highly Confidential, Confidential, Public, Business or Non-Business? Protect personally identifiable information (PII), because the data protection laws and regulations around it can impose heavy fines for any breach.

Confidential data should always be protected using encryption when sending over the wire or when stored at rest.
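As one hedged example of protection at rest, the widely used Python cryptography library provides authenticated symmetric encryption; in a real system the key would be held in a managed key service rather than generated alongside the data:

```python
from cryptography.fernet import Fernet

# In a real system the key would come from a key management service, not be
# generated and held next to the data it protects.
key = Fernet.generate_key()
f = Fernet(key)

ciphertext = f.encrypt(b"Highly Confidential: customer PII record")
print(ciphertext)                 # safe to store at rest
print(f.decrypt(ciphertext))      # original bytes, recovered only with the key
```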

Secure The Code


This is about your code and external code – namely open source.

Use code scanning tools to ensure submitted code is high quality, safe and reliable – and conforms to best practice. You can do this with automated code reviews that previously peers would have performed manually.

Static code analysis tools help identify areas of the code under analysis that are suspect and may be compromised.

Secure your secrets and don’t put them in the source code. Once application source code is loaded into source control, it can spread widely and potentially be read by many. Secrets – like API keys, security tokens, certificates and passwords – are extremely sensitive as they open doors, so they must not be embedded into source code or they will leak. Put such secrets in a safe storage service and have the application retrieve them at run time.
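A minimal sketch of that pattern using the Azure Key Vault SDK for Python is shown below – the vault URL and secret name are placeholders you would substitute:

```python
from azure.identity import DefaultAzureCredential
from azure.keyvault.secrets import SecretClient

# Placeholder vault URL and secret name - substitute your own.
client = SecretClient(
    vault_url="https://<your-vault-name>.vault.azure.net",
    credential=DefaultAzureCredential(),  # uses managed identity, CLI login, etc.
)

# The application fetches the secret at run time; nothing is committed to source control.
api_key = client.get_secret("third-party-api-key").value
```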

Software Composition Analysis tools should be employed to analyse the dependency graph and keep an inventory of third-party components being used to build applications. More on this later when we discuss the software supply chain.

For private development, store source code in well-secured code repositories. Fully understand your branch management so you know what code is in dev/test/production, and have processes for hotfixes.

Secure The Environment


When using cloud there are shared responsibilities for securing the environment. The cloud vendor will handle the physical security, but users must secure their own environments and resources. The analogy is a castle: it doesn’t matter how high the walls are or how many crocodiles are in the moat – if you leave the drawbridge down then an intruder can easily walk in.

Access controls are used to impose rules on who can access what and what level of access they have. Dev environments may be more relaxed, but access to production environments should be strictly controlled and limited. Access controls need to combine with a strong identity foundation and use features such as conditional access, multi-factor authentication, just-in-time and just-enough-access (JIT/JEA).

Policy services are used to centrally set guardrails throughout your resources to help ensure cloud compliance, avoid misconfigurations, and practice consistent resource governance.

‘Infrastructure as Code’ is the practice of specifying infrastructure topology in scripts/templates that are stored in version control – in a similar fashion to the way developers manage code and deploy solutions. This enables consistency, quality, repeatability and accountability of the configuration.

Network security controls and devices may be implemented to mitigate against various known attack vectors. Application security often involves discussion around networking such as firewalls, gateways and load balancers – ensuring the infrastructure is locked down from certain types of network attacks.

Patching ensures all known vulnerabilities in virtual machines and operating system instances are resolved. If you’re using PaaS then it’s not an issue, as it’s handled automatically/transparently by the underlying system. But it’s still a relevant topic in some cloud-native services – such as if using Kubernetes.

Secure The Operations

Operations are responsible for managing the live environments – their duties can be summarised as Protect | Detect | Respond. It’s important that response actions are scripted and so can be triggered as needed, as opposed to having to think and act on the fly/under the pressure of a live incident.

They must monitor everything that is happening, looking for unexpected events or failures, and should something happen, invoke incident response protocols to take the appropriate preventative measures or contain any damage.
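As a toy illustration of scripted detect-and-respond, the sketch below flags an unexpected spike in failed sign-ins and triggers a pre-agreed response – the threshold, the log format and the response itself are hypothetical stand-ins for whatever your monitoring platform provides:

```python
from collections import Counter

FAILED_SIGNIN_THRESHOLD = 20   # hypothetical per-account threshold per time window

def respond(account: str) -> None:
    # Pre-scripted response: in reality this might disable the account,
    # require MFA re-registration, or open an incident ticket.
    print(f"ALERT: locking {account} and opening an incident")

def scan(events: list[dict]) -> None:
    failures = Counter(e["account"] for e in events if e["outcome"] == "failure")
    for account, count in failures.items():
        if count >= FAILED_SIGNIN_THRESHOLD:
            respond(account)

scan([{"account": "svc-deploy", "outcome": "failure"}] * 25)   # triggers the response
```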

Threat intelligence is knowing the latest security landscape and possible threats, which helps in planning in advance how to respond – the aim is to avoid surprises and the unexpected.

After any incident, forensics and root cause analysis should be done – in particular to determine if there is any compromise to the confidentiality, integrity and availability of the data and associated applications.

And finally there’s business continuity – having processes in place to keep the business running during major disruption or disaster, such as earthquake, power outage, fire, cyber attack, etc.

Automation


There’s a lot here and this wasn’t an exhaustive list of all things to do, but it highlights the variety of skills needed. The only way to be effective and achieve success is to automate as much as possible.

Embracing ‘Everything as Code’ changes the focus from manual, repetitive tasks to workflows based on end-goals and desired states. Store things like configuration rules in version control – enabling the consistency, quality, and accountability that DevOps offers.

Software Supply Chain


Open source is software made available with source code that anyone can inspect, modify, and enhance. It’s provided with a license that dictates how the software can be used, for example it might impose commercial restrictions or mandate that any modifications must also be shared back with the community.

It’s important that organisations understand and mitigate against the risks of open source. When an open source library is imported/used, all the dependencies that library uses are also included, and there could be many levels of dependencies, resulting in the use of considerable amounts of software from unknown sources.

Infecting popular open source libraries with malware and vulnerabilities is on the rise – this is known as a software supply chain attack. It can wreak maximum havoc as the malware will be further distributed to all users of the software that includes the library code.

Software Composition Analysis tools should be employed to analyse the dependency graph and keep an inventory of third-party components being used to build applications. These can then provide ongoing monitoring to:

  • Report on known security vulnerabilities and software bugs.
  • Alert when updated versions are available.
  • Accurately track the open source licensing conditions to fulfil all the legal requirements, helping to avoid unfortunate surprises such as jeopardising exclusive ownership of proprietary code.

Such tools can help software vendors document their Software Bill of Materials (SBOM) which lists any components, libraries and tools used. There has been discussion that future legislation may force software companies to make SBOM declarations public.
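A very small step in that direction is possible with the Python standard library alone – the sketch below inventories the installed third-party distributions and their licence metadata; a real SCA/SBOM tool adds the full dependency graph, hashes and a standard output format such as SPDX or CycloneDX:

```python
from importlib.metadata import distributions

# Build a minimal component inventory for the current Python environment.
inventory = []
for dist in distributions():
    inventory.append({
        "name": dist.metadata["Name"],
        "version": dist.version,
        "license": dist.metadata.get("License", "unknown"),
    })

for component in sorted(inventory, key=lambda c: (c["name"] or "").lower()):
    print(f'{component["name"]} {component["version"]} ({component["license"]})')
```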

DevOps


DevOps is the engine that drives innovation – and reduces the time to deliver value.

A DevOps approach enables organisations to develop, deploy and improve products at a faster pace than they can with traditional software development approaches.

But DevOps is not just a product – it requires a culture of collaboration that is critical for DevOps to be successful.

In DevOps, we often discuss the inner and outer loop. The inner loop is the iterative process that a developer performs when they write, build and debug code. The outer loop is building, deploying and monitoring, and then driving the plan for subsequent development.

DevSecOps


DevSecOps is the evolution of DevOps. It focuses on integrating security practices within the DevOps inner and outer loops to create a security first culture.

Furthermore, it mandates a shift left mentality – that is, addressing security at the earliest stages of the development lifecycle. So not only is the development team thinking about building a high quality product efficiently, but they are also implementing security as they go.

Addressing security earlier improves robustness, saves costs and accelerates delivery.

Learn more

Sustainability and Green Software Engineering
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2021/08/19/sustainability-and-green-software-engineering/
Thu, 19 Aug 2021 13:49:01 +0000

Mark Harrison takes a look at what Sustainability and Green Software Engineering means for Software Engineers and Application Developers.


I was recently joined by my colleague Paola Annis to present ‘Green Software Engineering’ at a community event. A couple of days later the IPCC (Intergovernmental Panel on Climate Change) released a study highlighting that climate change is widespread, rapid, and intensifying, and that some trends are now irreversible. The United Nations called it a “code red for humanity”, warning that a catastrophe can only be avoided if the world acts fast.

 

Sustainability

Sustainability is about meeting the needs of the present – without compromising the ability of future generations to meet their needs. It’s often broken into three pillars: social, economic, and environmental.

  • Social sustainability is about the betterment of society.
  • Economic sustainability is about creating value out of whatever you are undertaking.
  • Environmental sustainability is about looking after the planet.

The three categories are intertwined, and weakness in any one pillar will have a negative impact on the other pillars. Experts say that sustainability can only be truly achieved if all three pillars are looked after.

Microsoft supports all of the UN goals around sustainability, but today it’s primarily focused on environmental sustainability in the four key areas of carbon, water, waste, and ecosystems. Back in January 2020 we announced a bold commitment and a detailed plan to operate with 100% renewable energy by 2025, and be carbon negative by 2030. We built on that pledge with a series of industry-leading commitments to be a water positive, zero waste company by 2030, and provide support for a number of biodiversity projects and conservation ecosystems to ensure we protect more land than we use. Things don’t happen overnight, and this is a journey, but we are now well on the road to delivering on these commitments.

Climate change is one of the greatest challenges for mankind in our lifetime, and as Software Engineers and Application Developers we can play our part here. We all need to be on board to encourage and help everyone to build and deploy sustainable software applications.

There are three things we can do:

  • Move to the cloud.
  • Think smarter.
  • Adopt the philosophies and competencies of green software engineering.

 

Move to the Cloud

Any hyper-scale cloud vendor will have substantial R&D budgets to ensure they reduce carbon emissions and are more energy efficient than any enterprise organisation could achieve with its own on-premise datacentres for the same workload.

We found that transitioning workloads to Microsoft cloud services could be up to 98% more carbon efficient and up to 93% more energy efficient than on-premise options, depending on a number of factors. These are documented in an independent study by WSP.

When businesses choose Azure, they are taking positive action to reduce carbon emissions. It’s a compelling way to contribute to the climate goals of any company.

 

Think Smarter

Technology can contribute to addressing environmental issues. As organisations use technology to drive business value, they also need to think about enabling smarter growth and transforming sustainably – this can even create new business models.

Innovation in data acquisition and analytics, coupled with artificial intelligence and advanced robotics, as well as cloud computing, satellite and mapping imagery, has opened the door to many solutions. This is enabling optimal decision making and driving efficiencies, ultimately saving carbon and energy. An example of this is smart infrastructure like smart cities, smart transport and smart buildings using technology to measure and minimise carbon emissions, water consumption, and waste.

Digital Twins is a technology where you can create rich models of anything physical or logical, then build applications that can use them. Such applications could help with the detection of water leaks, which is obviously key in helping protect water. Or you could combine Digital Twins with data sensors and predictive analytics to reduce energy use.

Robotic process automation (RPA) can improve process accuracy, reducing defective products and rework and cutting down on wasted materials or the need for extra energy and inputs to rework a process or re-make a product.

 

Green Software Engineering

Green Software Engineering is an emerging discipline with principles, philosophies, and competencies to define, develop, and run sustainable software applications.

You need to be making code changes, architectural changes and choices that actually reduce the carbon emitted and the energy consumed by the application.

Green applications are normally cheaper to run, more performant, more resilient and more optimised – but that’s just a welcome addition. The key thing is that developing applications in such a manner will have a positive impact on the planet.
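One common green software engineering technique is carbon-aware scheduling: shifting flexible work to times when the grid is cleaner. The sketch below is purely illustrative – the hourly carbon intensity figures are made up, and a real implementation would query a grid carbon intensity service:

```python
# Hypothetical forecast of grid carbon intensity (gCO2/kWh) per hour of the day.
forecast = {0: 110, 1: 95, 2: 90, 3: 92, 12: 260, 13: 270, 18: 310, 19: 305}

def best_window(hours_needed: int) -> int:
    """Pick the start hour with the lowest average intensity for a flexible batch job."""
    best_start, best_avg = None, float("inf")
    for start in sorted(forecast):
        window = [forecast.get(start + h) for h in range(hours_needed)]
        if None in window:
            continue   # forecast doesn't cover the whole window
        avg = sum(window) / hours_needed
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start

print(f"Run the nightly batch job starting at {best_window(3):02d}:00")
```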

For the full story – please view the video below:

Useful Links

Mark Harrison is an experienced Microsoft sales specialist with a wide and diverse range of technical skills, expertise and a wealth of customer facing experience. He has accomplished twenty-one years in Microsoft as a solution sales/technical specialist, and prior to that worked for seventeen years for systems integrators in all areas of the software lifecycle. You can find him on LinkedIn, GitHub and YouTube.

Azure AppDev Trends in 2021
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2021/02/16/azure-appdev-trends-in-2021/
Tue, 16 Feb 2021 15:00:09 +0000

Mark Harrison takes a look at the AppDev trends he's been seeing so far this year, as well as touching on the themes he's frequently asked to cover as an Azure AppDev Specialist.


Hello! I’m a Microsoft UK Azure AppDev Specialist and below I’ll be sharing what I do and why I do it, as well as touch on the current AppDev trends I see and the themes that I’m frequently asked to cover.

Using the Microsoft mission statement as our pole star, with an AppDev lens this leads to the following vision and objectives:

  • Empower every organisation to achieve more by exploiting application development to innovate, differentiate and disrupt.
  • Empower every application developer to achieve more by unleashing their capabilities with best-in-class tools and education/guidance.

I’m privileged to be part of the Microsoft UK Azure AppDev team, and we’re all passionate about talking App Innovation, development and helping others. If you want an AppDev conversation or just need some guidance, do reach out in the comments below.

 

Application Innovation

For organisations to survive and thrive in this new era, they must embrace digital transformation. The challenge for all is that the rate of innovation is getting faster; business competition can be fierce, so organisations must exploit software technology to innovate, differentiate and disrupt.

It’s investment in applications that will enable innovation and provide a business advantage over competitors and their offerings, which will ensure resilience and drive growth.

Resources:

 

Developer Velocity

We frequently talk about Digital Transformation being fundamental for business success, but it’s software innovation that is at the core of DT – and for that you need application development.

The catalyst for application development is software developers, who have an increasingly vital role in business value creation and will be at the heart of innovation endeavours.

Successful companies will be those that empower software developers to achieve more. They need best-in-class tools and platforms to unleash their capabilities along with a culture of collaboration and sharing.

The Visual Studio portfolio of development tools and services enables application development to be more productive for any developer, any platform, and any language. Visual Studio Live Share combined with GitHub and Teams enables frictionless team collaboration.

Resources:

 

Secure Infrastructure on Tap – aka the Cloud

The cloud is a great enabler for businesses by providing secure IT infrastructure on tap. Much like any other utility, such as water or electricity, you turn it on, pay for what you consume and expect it to be there when needed. It enables organisations to be agile, provides scale and resilience, and the cloud economics make it very attractive.

Around the globe Microsoft leads with its Azure presence – currently in over 60 regions – and each region itself comprises multiple datacentres. Microsoft has also invested heavily in networks to join up the Azure regions and to interconnect the world.

Azure itself comprises many services – and it’s hard to appreciate the huge breadth of what’s available on the Azure platform. The link below takes you to a graphic tool that I use to show the numerous Azure services available, grouped together in categories such as Compute, Networking, Storage, and so on.

Resources:


 

DevOps

A DevOps approach enables organisations to develop, deploy and improve products at a faster pace than they can with traditional software development approaches. But DevOps is not just a product – it requires a culture of collaboration that’s critical for DevOps to be successful.

Microsoft is a great example of a company that had to make big changes in development practices to evolve from shipping box products every three to four years, to a cloud company with a new delivery cadence of every day.

There are two Microsoft DevOps offerings in this space – namely Azure DevOps and GitHub. We now have a single engineering team driving the future of both products.

In DevOps, we often discuss the inner and outer loop. The inner loop is the iterative process that a developer performs when they write, build and debug code. The outer loop is the building, deploying, monitoring and then driving the plan for subsequent development.

The outer loop includes Application Health and Performance Monitoring. The Microsoft offering in this space is Azure Monitor (Application Insights provides application monitoring, Log Analytics provides infrastructure monitoring).

Resources:

 

Application Platform Maturity

Application platform maturity is a frequent conversation and covers where an organisation is on their cloud journey, where they want to get to and how fast they want to get there.

The cloud’s economies of scale, flexibility and predictable payment structures are becoming too attractive to ignore. Organisations are moving to the cloud as a cost-effective option to develop, deploy and manage their application portfolio. However, many organisations will ‘lift and shift’ their applications as an approach to migrate to the cloud, gaining only limited advantages.

To get full value, they need to rebuild and rearchitect with cloud native technologies. This does not necessarily need changes to be done as a big bang, but rather a focus on the areas which are identified as the most business critical, those where future investments are likely and those which give the most advantages. It’s a journey that needs appropriate navigation to optimise returns.

We have several partner migration tools that scan an application’s source code and generate a comprehensive report identifying issues and areas that need work in order to migrate to Azure/PaaS.

Resources:

 

Cloud Native

It’s applications that provide business value; managing complex infrastructure just consumes resources, time and money with no return. Development teams just want to focus on shipping applications and not be distracted managing the infrastructure stack. They want agility enabled, so there’s a shorter time to market and they can innovate faster.

The application platform needs to auto-scale based on workload demand – scale up to handle spikes, and scale down afterwards to save costs. Resiliency must be built into the platform. It must offer cloud economics that are efficient and productive, where you only pay for what you use with no wasted resources.

Key Azure application platform technologies are:

  • Platform as a Service
  • Serverless
  • Containers, Container Orchestrators (e.g. Kubernetes)
  • APIs/Microservices
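To make the ‘Serverless’ item above concrete, a minimal HTTP-triggered Azure Function in Python might look like the sketch below (shown in the classic programming model, which also needs an accompanying function.json binding definition):

```python
import azure.functions as func

def main(req: func.HttpRequest) -> func.HttpResponse:
    # The platform handles hosting, scale-out and scale-in; you only ship this code.
    name = req.params.get("name", "world")
    return func.HttpResponse(f"Hello, {name}!", status_code=200)
```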

Resources:

 

Cross Device/Cross Platform

Most users have multiple devices and expect to securely access their applications on any device from any location, and at any time. They may have a PC, a tablet, and a smartphone, and expect the experience to adapt based on the display characteristics of the device and the quality of the network.

Developing for multiple platforms will add to development costs, so choices must be made. Do you develop natively for each platform, or use cross platform development tools (like Xamarin)?

Modern web development tools provide PWAs (progressive web applications) that can act like mobile apps – they can be installed, access device hardware (like the camera), operate offline and store data on the device. This is likely to become popular as it provides maximum reach, with costs contained to the development of a single web version.

Recent advances in device technology means that development may also need to target a new range of devices. Such items include:

  • Smart speakers and interactive voice units
  • Wearables, smart watches and health monitoring devices
  • Mixed reality headsets
  • IoT Devices, sensors and control units
  • Games consoles

Microsoft is no longer a Windows-first company – we provide tools and services for all developers and all platforms, and embrace open source software.

Resources:

 

Open Source/Inner Source

Open source is software made available with source code that anyone can inspect, modify, and enhance. It’s also provided with a license that dictates how the software can be used – for example, it might impose commercial restrictions or mandate that any modifications must also be shared back with the community.

Open source software may be developed in a collaborative public manner, which can bring in diverse perspectives beyond those of a single company.

Inner source is the use of open source software development best practices and the establishment of an open source-like culture within organisations. Facilitating code re-use across teams focuses efforts on solving new problems important to business goals, versus those that have already been solved by others.

It’s important that organisations understand and mitigate against the risks of open source. When an open source library is imported/used, all the dependencies that library uses are also included. There could be many levels of dependencies, resulting in the use of considerable amounts of software from unknown sources.

Software Composition Analysis tools should be employed to analyse the dependency graph and keep an inventory of third-party components being used to build applications. These can then provide ongoing monitoring to:

  • report on known security vulnerabilities and software bugs
  • alert when updated versions are available
  • accurately track the open source licensing conditions to fulfil all legal requirements, helping to avoid unfortunate surprises such as jeopardizing exclusive ownership over proprietary code.

Microsoft is a member of the OpenChain project – and is OpenChain 2.0 compliant. This means Microsoft can trust the open source code that it uses and ensure that all compliance obligations are met.

Resources:

 

Infrastructure as Code

‘Infrastructure as Code’ is the practice of keeping infrastructure topology specified in documents and stored in version control, in a similar fashion to the way developers manage code and deploy solutions. This will involve using DevOps tooling which enables consistency, quality, and accountability.

Using this approach, organisations can quickly create and delete infrastructure on demand. This is useful for dev/test environments where you want to provision an environment to do testing, and once completed destroy it to avoid unnecessary costs.

Azure provides native tooling for infrastructure as code – namely ARM templates and Bash/PowerShell scripts using the Management API. Alternatively, several cloud/technology agnostic tools are available including Terraform, Ansible, Chef, Puppet and others.
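For illustration, a script driving the Management API through the Azure SDK for Python could declare a resource group as below – the subscription ID and names are placeholders, and the same intent could equally be expressed as an ARM/Bicep template or a Terraform configuration:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.resource import ResourceManagementClient

# Placeholders - substitute your own subscription and naming convention.
client = ResourceManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Idempotent: running the script again converges on the same desired state.
client.resource_groups.create_or_update(
    "rg-demo-dev",
    {"location": "uksouth", "tags": {"environment": "dev", "owner": "appdev-team"}},
)
```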

Resources:

 

Integration

Business depends on all kinds of applications, often including external systems owned by partners, suppliers, and vendors. Value is realised when applications integrate seamlessly with each other. There are multiple challenges to consider when it comes to application integration:

  • Applications have multiple interfaces (mobile, web, desktop, or no interface at all) and APIs to connect and integrate to
  • Applications have multiple data sources and even different formats
  • Applications may be a collection of smaller services that run anywhere
  • Applications can be hosted in the cloud, multiple clouds or on-premise datacentres

Organisations can connect applications in the cloud or on-premise through APIs, workflows, messaging, and events using the right integration pattern for the task.

Azure Integration Services provides the components to facilitate common integration patterns and includes the following services:

  • Logic Apps – to schedule, automate, and orchestrate tasks, business processes, and workflows when you need to integrate apps, data, systems, and services across enterprises or organisations
  • API Management – to expose data and services to employees, partners, and customers by applying policies, such as authentication/authorisation, caching and usage limits
  • Event Grid – to manage the routing of events from any source to any destination
  • Service Bus – to provide a highly reliable cloud messaging service between applications and services, even when one or more is offline (see the sketch below).
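A minimal Service Bus producer in Python might look like the following sketch – the connection string and queue name are placeholders:

```python
from azure.servicebus import ServiceBusClient, ServiceBusMessage

# Placeholder connection details - substitute your own namespace and queue.
CONNECTION_STR = "<service-bus-connection-string>"

with ServiceBusClient.from_connection_string(CONNECTION_STR) as client:
    with client.get_queue_sender(queue_name="orders") as sender:
        # The receiving application can be offline; the queue buffers the message.
        sender.send_messages(ServiceBusMessage('{"orderId": 42, "status": "created"}'))
```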

Resources:


Low Code/Citizen Developer

Citizen developers are employees who create new business applications for consumption by others using development and runtime environments sanctioned by corporate IT. They are typically not professional developers, but instead are end users that encounter/understand business problems and use simple low code tools to create solutions.

Microsoft Power Platform provides low code tooling to facilitate the citizen developer and encourages a culture of innovation amongst the workforce which helps release untapped value. The Platform includes:

  • Power Apps
  • Power Automate (Includes robotic process automation (RPA))
  • Power Virtual Agents
  • Power BI

The applications developed can be used to connect to Office 365, Dynamics 365, Azure, and hundreds of other third-party applications to enable end-to-end business solutions.

Optionally, any built Power Platform applications can be incorporated into DevOps tooling to ensure anything deployed that becomes business critical can be managed, supported, and governed.

Resources:

 

Application Security

This is a huge subject with many facets, and application security is a common discussion due to its importance in protecting the business.

The protection of applications and associated data is critical for the success of an organisation. Cyber-crime is a huge industry whose actors will attack organisations of any size; it’s often the case that exploiting just one vulnerability can open the doors to further wide-ranging malicious actions, ultimately resulting in severe damage to the confidentiality, integrity, and availability of data.

Even the best case will cause severe financial impact (corrupt data, compliance violation fines, loss of customer trust) and in the worst case can cause business ruin. In the most catastrophic case, a malicious cyber-attack can cause loss of life.

Information security is the practice of protecting systems/information by mitigating risks. The risk management process identifies risks, the likelihood of it being exercised and the impact that it will cause. It’s then a business decision to decide how to address the risk – such as avoid, mitigate, share, or accept. This will be an iterative process, so that the results of ongoing monitoring are fed back into subsequent cycles of the process.

Network security is an example of controls that may be implemented to mitigate against various known attack vectors. Application security often involves discussion around networking such as firewalls, gateways and load balancers, and ensuring the infrastructure is locked down from certain types of attacks.

Information security addresses many aspects including Application Security – this includes measures taken to improve the security of an application often by finding, fixing, and preventing security vulnerabilities. Microsoft has guidance, tooling and services to help make sure application security and code scanning is automated and baked into DevOps in a pervasive manner.

Application configuration secrets (e.g. database connection strings, API keys) must be locked away from malicious attack or accidental leakage – Azure Key Vault provides hardware security modules that can help ensure such values are protected safely.

Resources:

 

Identity

With an organisation’s trusted people now working from anywhere, on untrusted networks, and with the risk of untrusted people present on their own trusted networks, many security experts will claim that identity has become the most important protection mechanism in information security.

Identity will combine with access controls to impose rules on who can access what and what level of access they have. For example: a user may have access to a data store but be limited to read-only. Access controls can generate audit logs of who did what, for later analysis.

Identity and access solutions require both authentication and authorisation.

Authentication is the process of verifying that a user is who they claim to be. This could include multifactor authentication (MFA) checks, e.g. the user must prove they have some item of knowledge (e.g. a password) and own a token (e.g. a specific phone with an authentication app). A successful authentication will generate a security token containing information about the user – this token is passed in any application messages that require authorisation.

Authorisation is the process of determining whether an authenticated user is granted the rights to perform the action they want to take. OpenID Connect, OAuth 2.0 and SAML are commonly used protocols for authentication and authorisation processing.
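For illustration, a confidential client (for example a daemon calling a downstream API) can obtain a token with the MSAL library for Python roughly as follows – the tenant, client ID, secret and scope values are placeholders:

```python
import msal

# Placeholders - substitute your Azure AD tenant and app registration details.
app = msal.ConfidentialClientApplication(
    client_id="<application-client-id>",
    authority="https://login.microsoftonline.com/<tenant-id>",
    client_credential="<client-secret>",
)

# OAuth 2.0 client credentials flow: the resulting bearer token is then sent
# on API calls that require authorisation.
result = app.acquire_token_for_client(scopes=["https://graph.microsoft.com/.default"])
if "access_token" in result:
    print("token acquired, expires in", result["expires_in"], "seconds")
else:
    print("failed:", result.get("error_description"))
```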

Handling identity may be wider than just known employees – it might also need to support external parties such as suppliers, business partners and customers. For some scenarios, it might not be about people – identities can also be assigned to trusted devices and services.

Azure Active Directory (Azure AD) is a cloud directory that can store users and be used as an authentication endpoint. Azure AD can sync identities with a corporate Windows Active Directory. Azure AD can also federate with other organisations’ Azure AD tenants, which is useful for B2B applications.

Azure AD B2C can federate the authentication process with both Azure AD and social identity providers (e.g. Facebook), and the latter is useful for many consumer applications.

Resources:

 

Intelligent Edge

Hybrid cloud is evolving from being the integration of a datacentre with the public cloud, to becoming units of computing available at even the world’s most remote destinations working in connection with the public cloud.

The intelligent edge is the continually expanding set of connected systems and devices that gather and analyse information close to the physical world where data resides, to deliver real-time insights and immersive experiences that are highly responsive. At the edge, the application is contextually aware and can run in both connected and disconnected states.

Microsoft has several offerings for Edge computing/Internet of Things:

  • Azure Arc enables us to extend the Azure control plane out to edge platforms. For example, Arc can enable remote Kubernetes clusters so that applications run at the edge but still have governance/policy imposed and monitoring from the central cloud.
  • The IoT Edge runtime is a collection of programs that turn a remote device into an IoT Edge device. Collectively, the IoT Edge runtime components enable IoT Edge devices to receive code to run at the Edge and communicate the results.
  • IoT Hub is a managed service, hosted in the cloud, that acts as a central message hub for bi-directional communication between IoT applications and the devices it manages. It enables IoT solutions with reliable and secure communications between millions of IoT devices and a cloud-hosted solution backend. You can connect virtually any device to IoT Hub.
  • Azure Stack Edge devices are managed appliances that bring the compute, storage, and intelligence of Azure to the edge.
  • Azure Sphere is a comprehensive IoT security solution – including certified chips, an OS, and cloud components – that provides highly secured devices and actively enables defence in depth.
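As a small illustration of the device side of IoT Hub, the Azure IoT device SDK for Python can send telemetry roughly as follows – the device connection string is a placeholder:

```python
from azure.iot.device import IoTHubDeviceClient, Message

# Placeholder - a per-device connection string issued by your IoT Hub.
CONNECTION_STR = "<device-connection-string>"

client = IoTHubDeviceClient.create_from_connection_string(CONNECTION_STR)
client.connect()

# Telemetry flows device-to-cloud; commands and configuration can flow back cloud-to-device.
client.send_message(Message('{"temperature": 21.5, "humidity": 60}'))
client.disconnect()
```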

Resources:

 

Data Storage & AI

Applications need to store and consume data – such information must be stored in repositories that are reliable, fast, secure, scalable, and cost effective. There are many options, ranging from cheap blob storage to relational databases to NoSQL/document databases.

Azure includes a variety of databases that are run as a managed service – enabling a focus on application development and not database management.

Data can be used for machine learning and enabling artificial intelligence. There is an expectation today that applications will be infused with artificial intelligence to provide innovation and differentiation. This is a conversation typically handled by my data platform specialist colleagues.

Resources:

 

Accessibility

Recent regulations mean UK public sector organisations have a legal duty to make sure websites and apps meet accessibility requirements. Commercial organisations should also adhere to these requirements because it is the right thing to do.

Microsoft is committed to revolutionising access to technology for people living with disabilities, impacting employment and quality of life for more than a billion people in the world.

I believe many of us in the AppDev community still have lots to learn here, but we need to be on board and encourage/help everyone to build products that enrich the lives of all people and of all abilities.

Resources:

 

Sustainable Software Engineering

At Microsoft, we see sustainability and our response to climate change as one of the greatest challenges of our lifetime. Early last year we made a commitment to be carbon negative by 2030, and by 2050 to have removed from the environment all the carbon the company has ever emitted since it was founded in 1975. Further environmental commitments include reducing our water use intensity (water positive by 2030), reducing our waste (zero waste by 2030), and our support for biodiversity projects and conservation ecosystems.

I believe Application Developers can play their part here. Sustainable Software Engineering is an emerging discipline with principles, philosophies, and competencies to define, develop, and run sustainable software applications. Sustainable applications are normally cheaper to run, more performant, more resilient and more optimised – but that’s just a welcome addition. The key thing is that developing applications in such a manner will have a positive impact on the planet.

Resources:

 


Remote Development

The global pandemic has caused the way we work and live to change, and organisations of all sizes have scrambled to move to remote work. Developers are fortunate in that in most cases, their role can readily adapt with the shift to remote work.

Remote development by low-cost offshore code factories has been happening for several years, and the approach has been proven to be successful.

Microsoft has the tools developers love and the enterprise trust to keep them productive when working remote, enabling developers to:

  • Code from anywhere
  • Collaborate from anywhere
  • Ship from anywhere

However, I recognise that ‘lockdown remote working’ is not normal remote working and sadly many people are struggling with mental health and wellbeing for themselves and their family and friends. The past year has been bad for some and a complete disaster for many – with people’s plans and dreams thwarted. Hopefully with the vaccine roll-out happening we can start to be optimistic that there is light at the end of the tunnel.

#staypositive/keep busy, active and learning/build connections with others/and please just reach out if you want to talk.

Resources:

— — —

Thanks for reading! What have I missed? What is your AppDev top of mind? Let me know in the comments below.

-Mark

 

Mark Harrison is an experienced Microsoft sales specialist with a wide and diverse range of technical skills, expertise and a wealth of customer facing experience. He has accomplished twenty-one years in Microsoft as a solution sales/technical specialist, and prior to that worked for seventeen years for systems integrators in all areas of the software lifecycle. You can find him on LinkedIn and GitHub.

 

Useful Links
