TechNet UK Archives - Microsoft Industry Blogs - United Kingdom

An introduction to cloud analytics
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/07/07/get-started-using-analytics-in-the-cloud/
Fri, 07 Jul 2023

An illustration representing a data warehouse, next to an illustration of Bit the Raccoon.

Microsoft Azure is a platform that can cater to your analytical workloads – picking the right tool for the right job is key. Fortunately, the core offerings break down into three platform as a service (PaaS) offerings for storing and managing your high-scale data workloads – Azure Data Lake, Azure Databricks and HDInsight – plus a well-integrated tool for visualising the results: Microsoft Power BI.

Storing and managing your data

Analytics in the cloud is ultimately about storing your data in the cloud where it can be conveniently processed using powerful services. There are three Azure services for processing your data. One is built by Microsoft and the other two are popular non-Microsoft platforms hosted as first-party services on Azure.

Azure Data Lake Analytics (ADLA) is a massively parallel job service that can ingest file data and process it on demand into more manageable datasets. ADLA uses U-SQL, a query language that mixes C# and SQL. It is deeply integrated with Visual Studio for development and debugging, and with Active Directory, so if you already use Microsoft for your identity management it is a convenient way to extend your prior technology investments.

Azure Data Lake Analytics works hand in hand with another Azure service, Azure Data Lake Storage (ADLS). ADLS Gen2 takes many of the features of the original ADLS and builds them on top of Azure Blob Storage. Because Azure Data Lake Storage supports the open Apache Hadoop Distributed File System (HDFS) standard, it also plays well with any platform that speaks HDFS, such as Databricks or HDInsight.
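
To make this concrete, here is a minimal sketch of writing a file into an ADLS Gen2 filesystem using the Python azure-storage-file-datalake package. The account, filesystem and path names are hypothetical placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.storage.filedatalake import DataLakeServiceClient

# Hypothetical account and filesystem names - substitute your own.
service = DataLakeServiceClient(
    account_url="https://mydatalake.dfs.core.windows.net",
    credential=DefaultAzureCredential(),
)
filesystem = service.get_file_system_client("raw")

# Upload a local file into a hierarchical path in the lake.
file_client = filesystem.get_file_client("sales/2023/07/transactions.csv")
with open("transactions.csv", "rb") as data:
    file_client.upload_data(data, overwrite=True)
```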

Azure Databricks is based on the popular Apache Spark analytics platform and makes it easier to work with and scale data processing and machine learning. The team that developed Databricks is in large part the same team that originally created Spark as a cluster-computing framework at the University of California, Berkeley. In 2017, the Databricks team worked with Microsoft to develop Azure Databricks as a first-party Microsoft service that integrates natively with Active Directory and other Azure tools.
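
As an illustration, a typical Spark job on Azure Databricks might read that same lake data over the HDFS-compatible abfss:// driver and aggregate it. This is a hedged sketch: the paths and column names are assumptions, and `spark` is the session object predefined in a Databricks notebook:

```python
# Runs inside an Azure Databricks notebook, where `spark` is predefined.
df = spark.read.option("header", "true").csv(
    "abfss://raw@mydatalake.dfs.core.windows.net/sales/2023/07/"
)

# Aggregate transaction counts per (hypothetical) region column and
# write the result back to a curated zone of the lake as Parquet.
summary = df.groupBy("region").count()
summary.write.mode("overwrite").parquet(
    "abfss://curated@mydatalake.dfs.core.windows.net/sales-summary/"
)
```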

If you prefer to process and analyse data using open source frameworks, HDInsight is a platform that combines several of them, including Apache Hadoop, Spark, Kafka, Hive, and Storm. This is the most cost-effective option for Azure-based analytics in the cloud. Using open source frameworks also allows you to enjoy community support and community apps while having access to Azure security and service level agreements (SLAs).

Viewing your data

Housing and analysing your data is only part of the story. To visualise your data, Microsoft provides Power BI, a powerful data visualisation tool that integrates with Data Lake Storage, Databricks, and HDInsight.

Power BI lets you produce dashboards and reports with rich visualisations. There are three components to note when using it:

  • Power BI Desktop is a Windows desktop application your data analysts use to build dashboards and reports to share with your wider organisation and business users.
  • Analysts publish their content to the Power BI service, a cloud service where you store the reports you create and share them with other members of your organisation.
  • For people who work away from their main devices, there are also iOS, Android and Windows apps, so you can get to your content from a mobile device wherever and whenever you need it.

Power BI offers a variety of visualisation types out of the box, such as bar charts, pie charts, gauges, KPIs, scatter charts, and maps. Beyond these standard charts, Power BI also enables you to create your own custom visualisations, which you can share with others on a community site, where you can also draw inspiration from other people's charts. In addition, you can create even more impact with Report Themes; as with custom visualisations, you can share your custom designs in the community themes gallery.

Summary

Azure analytics in the cloud provides multiple ways to process and analyse your high-scale data, whether you want to use Microsoft solutions or prefer open-source solutions hosted on Azure. Either way, Azure provides the security, data storage and compute resources to let you work with big data in a manner of your choosing through Data Lake Storage, Databricks, and HDInsight. Once your data is processed and analysed, you can use Microsoft Power BI to visualise and present your results on both desktop and mobile platforms and paint a picture of your cloud data.

Learn more

Microsoft Build announcements and beyond!
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/07/04/microsoft-build-announcements-and-beyond/
Tue, 04 Jul 2023

Hello folks,

We are still seeing a lot of hybrid events, but with more people attending in person it's almost starting to feel like it did before.

Microsoft Build was held last month, and with AI being in the news a lot lately, Microsoft had a lot to share on the topic as well.

Some key Build announcements:

Personally, it's always exciting to watch Mark Russinovich go over Azure innovations. He spoke about the latest developments in Azure architecture, such as hollow-core fibre to optimise WAN connections, and Project Forge, a globally aware resource scheduler for AI workloads.

Kevin Scott, Microsoft's CTO and EVP of AI, went over the era of the AI copilot and how Microsoft's partnership with OpenAI's platform can help develop the next generation of AI apps.

Microsoft Dev Box brings your developers ready-to-go, cloud-hosted workstations with the tools they need. In the age of hybrid work, a physical development PC is no longer required.

We always encourage learning, and after Build there's so much to catch up on. Be sure to check out this Microsoft Learn page to join the Learn Cloud Skills challenge, earn a free certification voucher, and see the other Build Learn Collections.

Lastly, there is also the Book of News to follow all of the updates from Microsoft Build. Even though this was a developer-focused event, you can find plenty of information relevant to IT pros.

Until next time!

A Practical Approach to Monitoring Your Cloud Workloads – Example 1: Networking
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/06/28/a-practical-approach-to-monitoring-your-cloud-workloads-example-1-networking/
Wed, 28 Jun 2023

William Darnell and Tony Barker discuss a real-world example of how you would monitor networking in Azure.

In the first post of this series, we gave you a high-level overview of the six steps which will help you determine what to monitor in your cloud workloads. We also said that, in order to cement your understanding, we would be releasing some specific example scenarios to help bring this to life. Today we are discussing a real-world example of how you would monitor networking in Azure.

Networking example overview

This example demonstrates how to apply the six-step process to a simple hub and spoke network architecture deployment to create a first-pass, end-to-end monitoring solution. If you are unfamiliar with the hub and spoke concept, please visit the Microsoft documentation that describes the Azure Landing Zone.

Remember that we are only aiming to achieve a starting point or baseline. If we continually analyse for every metric, log and alert we will never get anything done! We will also learn over time and include new metrics as we see fit.

Without further ado, let’s dive in.

Step 1: Evaluate Workload

The first step in determining what you need to monitor for your Azure workload is to identify all of the Azure resources included as part of the end-to-end solution. The approach recommended here is:

  • Create a full architecture diagram of the end-to-end solution.
  • Create a list of all the Azure resources included in the solution.

Create an Architecture Diagram

The following image depicts a simple network drawing showing hub and spoke network connectivity back to on-prem via a VPN gateway and an Azure Firewall.


Create an Azure Resource List

From the architecture drawing you can now derive a list of all the Azure resource types involved in this solution as follows:

An example list of Azure resource types.

Step 2: Review Available Metrics, Logs and Services

You may already have a clear list of monitoring requirements, but it is worth cross checking these with what is available ‘out of the box’ from a metrics and logs perspective for each Azure resource involved in the solution. The approach recommended here is:

  • For each Azure service gather the available metrics.
  • For each Azure service identify additional associated monitoring logs and services.

Metrics and logs are different things, and it is important to understand and capture both for all the resources in your deployment. To use our car analogy again, metrics can be thought of as your speedometer: small pieces of telemetry sent in near real time to your dashboard. Logs are more like recorded fault messages: structured records that you read at a later date and analyse using queries.

Gather Available Metrics

You can grab the available metrics for your Azure resources manually from the supported metrics page, or you can use this script to automatically obtain all metrics for the Azure resources you already have deployed. You just point the script at your chosen scope (subscription, resource group, etc.) and let it run. For this example, you will end up with a list of metrics like this:

An example list of metrics.
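
If you prefer an SDK route over the linked script, a sketch along these lines (using the azure-monitor-query package, with a placeholder resource ID) can enumerate the metric definitions for any deployed resource:

```python
from azure.identity import DefaultAzureCredential
from azure.monitor.query import MetricsQueryClient

client = MetricsQueryClient(DefaultAzureCredential())

# Placeholder resource ID - point this at any resource in your scope.
resource_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Network/azureFirewalls/<firewall-name>"
)

# Print each available metric's name and unit, mirroring the list above.
for definition in client.list_metric_definitions(resource_id):
    print(definition.name, definition.unit)
```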

Identify Associated Monitoring Logs and Services

By looking at the Monitoring section for each resource in the Azure Portal, or by reading each resource's documentation, you can identify possible additional sources of monitoring information. Broadly speaking, there are three considerations here for each resource:

  1. Activity Log: This provides insight into subscription-level events. The activity log includes information like when a resource is modified, or when a virtual machine is started. You may find it useful to monitor when a resource is changed in some way. These logs can be routed to a destination like Log Analytics.
  2. Monitor Logs: Different resources capture different logs, and these can be queried in Log Analytics. You can also use Alerts to proactively warn you of situations as they arise.
  3. Diagnostic Settings: Each Azure resource requires its own diagnostic setting, which defines the type of metric and log data to send to the destinations defined in the setting. The available types vary by resource type. Setting this up is an important step because NO resource logs are collected until they are routed to a destination (see the sketch after this list).
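
Because that last point catches so many people out, here is a hedged sketch of routing an NSG's two log categories to a Log Analytics workspace, assuming the azure-mgmt-monitor package and placeholder resource IDs:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient

# Placeholder IDs - substitute your subscription, NSG and workspace.
nsg_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Network/networkSecurityGroups/<nsg-name>"
)
workspace_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.OperationalInsights/workspaces/<workspace-name>"
)

client = MonitorManagementClient(DefaultAzureCredential(), "<sub-id>")

# No NSG resource logs are collected until a diagnostic setting
# routes them to a destination such as a Log Analytics workspace.
client.diagnostic_settings.create_or_update(
    resource_uri=nsg_id,
    name="send-nsg-logs-to-law",
    parameters={
        "workspace_id": workspace_id,
        "logs": [
            {"category": "NetworkSecurityGroupEvent", "enabled": True},
            {"category": "NetworkSecurityGroupRuleCounter", "enabled": True},
        ],
    },
)
```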

For example, in addition to metrics, from the Azure portal we can see the following for the Azure Network Security Group resource:

Different options under the Azure Network Security Group resource.

Looking more closely into the Diagnostic Settings we can see that there are two categories of logs we can use. If we send them to Log Analytics they can be queried.

An example of querying diagnostic settings.
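
Once those log categories are flowing into a workspace, they can be queried with a Kusto (KQL) query. Here is a hedged sketch using the azure-monitor-query package, with a placeholder workspace ID:

```python
from datetime import timedelta

from azure.identity import DefaultAzureCredential
from azure.monitor.query import LogsQueryClient

client = LogsQueryClient(DefaultAzureCredential())

# Count rule-counter hits per NSG rule over the last 24 hours.
query = """
AzureDiagnostics
| where Category == "NetworkSecurityGroupRuleCounter"
| summarize hits = count() by ruleName_s
| order by hits desc
"""

response = client.query_workspace(
    "<workspace-guid>",  # placeholder Log Analytics workspace ID
    query,
    timespan=timedelta(days=1),
)
for table in response.tables:
    for row in table.rows:
        print(row)
```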

Each of the resources will have their own documentation and this is the specific documentation for the NSG.

It will take time, but you need to do this as there may be a log that is vital to you. Looking through each of the resources in our networking example we could derive a starting list like this:

A list of resources from the networking example.

Summarised as follows:

Log Based Monitoring Options:

  • Connection Monitor (Network Watcher)
  • Azure Firewall Logs
  • Virtual Machine Insights
  • Virtual Machine Logs (AMA)
  • NSG Flow Logs
  • Activity Logs
  • Diagnostics Logs
  • Alerts (Azure Monitor)

Step 3: Assemble your requirements

The next very important stage is to assemble some coherent requirements. It is important to understand the 'What', 'Who' and 'How' for each monitoring requirement, so the recommended approach is to carefully write these requirements in the format:

  • As a {named individual/team} I want {a specific measurable outcome} so that {the rationale for this}.

You should also categorise your monitoring requirements. For example, wanting to receive an alert email for a metric threshold breach is not the same as wanting a dashboard showing the variation in that metric over the last 90 days. Therefore, you could classify the former as an ‘ALERT’ category whilst the latter is a ‘PERFORMANCE’ category.

As a starting point, you should consider making a list of these 'User Stories'. A User Story is an end state that describes something from the perspective of the person desiring the functionality. It is widely used in software development as a small unit of work. You can then categorise your stories into different sections, together with success criteria referred to as a 'Definition of Done' (DoD). This approach works very well for monitoring requirements. Here are some suggested category examples, and you may want to add some of your own:

  • ‘Alert’
    • Definition: Notification when monitored thresholds are breached
    • Format: email, text, alarm console bulb, web hook etc.
  • ‘Performance’
    • Definition: Variation of a measured value over time
    • Format: dashboards (graphs, time series), emailed reports etc.
  • ‘Troubleshooting’
    • Definition: Pro-active investigations into specific issues
    • Format: logs

With this approach you can write a monitoring requirement like this example:

Title: VPN Connectivity Alerts

Story: As a 'Cloud Operations Engineer', I want to be able to receive an alert notification by email when connectivity from Azure to on-prem over the VPN connection fails, so that I can immediately investigate and remediate the issue.

DoD:
  • Triggered when packet transfer from an Azure NIC to an on-prem NIC over the VPN link fails to arrive.
  • An alert notification email is received by the 'cloud support engineering' email alias within 15 minutes of the occurrence.

So, for our networking example here, we could assemble some of our requirements like this:

An example table showing a list of requirements.

Step 4: Map your requirements to metrics, logs and services

This is an iterative process of evaluating available metrics and logs for each of your Azure resources and then mapping which of these meet your requirements as defined in Step 3. This may result in you spotting new requirements to add to the list as well as identifying where an ‘Out of the Box’ metric can meet that requirement. So, the approach here is:

  • Iteratively review the Azure metrics against your requirements from the previous step and select which one will satisfy it.
  • Iteratively review the Azure logs against your requirements from the previous step and select which one will satisfy it.

For example, looking at the metrics list, we can see that requirements 4, 5, 8 and 9 can be satisfied with ‘Out of the Box’ available metrics:

An example metrics list.

So, for our networking example, we could map some of our requirements to Azure metrics and logs as below, where the green highlights show where a metric can meet a requirement from the requirements list and the yellow where an alternative log-based solution is required:

An example table showing a list of requirements, with colour-coding.

Step 5: Populate your backlog stories

The next stage is to convert the outputs from the previous stages to generate a list of actual tasks for implementation of the monitoring requirements. The approach here is:

  • Identify the service and tools you will use to implement your requirements.
  • Create a list of tasks for each of your requirements for implementation in your Azure environment/landing zone.

These tasks will need to map to the specifics of your Azure landing zone. For example, if the environment is managed through CI/CD pipelines and uses ARM templates, then the tasks could involve creating ARM templates to implement your monitoring solution, as shown in our example below. Your environment may instead use Terraform or another tool.

For our networking example, the list of Azure services that has been selected to meet the requirements is as follows:

Azure Services:

  • Azure Resource Manager (ARM) Templates
  • Alerts (Azure Monitor)
  • Azure Metrics
  • Azure Network Watcher
  • Azure Connectivity Monitor
  • Azure Dashboards
  • Azure Firewall Diagnostics

This in turn leads to a first pass at populating a backlog of tasks for our network example as follows:

An example showing a backlog of tasks for our network example.

Here you will also notice (shown in bold) that this is where you are selecting the tools that meet your requirements. In this example, Azure tools such as Azure Monitor and Azure Network Watcher have been selected, but this could be anything that fits your preferences or environment constraints.
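
To give a flavour of what one backlog task might turn into, here is a hedged sketch of the VPN connectivity alert from Step 3, expressed through the azure-mgmt-monitor Python SDK rather than an ARM template. The metric choice (TunnelAverageBandwidth), thresholds and action group are illustrative assumptions to tune for your environment:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.monitor import MonitorManagementClient
from azure.mgmt.monitor.models import (
    MetricAlertAction,
    MetricAlertResource,
    MetricAlertSingleResourceMultipleMetricCriteria,
    MetricCriteria,
)

client = MonitorManagementClient(DefaultAzureCredential(), "<sub-id>")

vpn_gateway_id = (
    "/subscriptions/<sub-id>/resourceGroups/<rg>"
    "/providers/Microsoft.Network/virtualNetworkGateways/<gateway-name>"
)

# Illustrative criterion: fire when average tunnel bandwidth drops to
# zero over the evaluation window, suggesting the VPN link is down.
criteria = MetricAlertSingleResourceMultipleMetricCriteria(
    all_of=[
        MetricCriteria(
            name="tunnel-bandwidth-zero",
            metric_name="TunnelAverageBandwidth",
            time_aggregation="Average",
            operator="LessThanOrEqual",
            threshold=0,
        )
    ]
)

client.metric_alerts.create_or_update(
    resource_group_name="<rg>",
    rule_name="vpn-connectivity-alert",
    parameters=MetricAlertResource(
        location="global",
        description="Alert when VPN tunnel bandwidth drops to zero.",
        severity=1,
        enabled=True,
        scopes=[vpn_gateway_id],
        evaluation_frequency="PT5M",
        window_size="PT15M",
        criteria=criteria,
        actions=[MetricAlertAction(action_group_id="<action-group-id>")],
    ),
)
```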

Step 6: Data retention considerations

The final step is to understand how long you need to keep your logs and metrics. Once you've configured logging and metrics across your resources, the information needs to be sent to a destination. At last you'll have the visibility you need, but it comes at a financial cost. You will therefore need to look at both your Functional Requirements and Non-Functional Requirements to assess the correct retention and archive period.

As an example, let's say your functional requirements state that you need a minimum of 90 days of data to satisfy some performance requirements, and your non-functional requirements state you need 7 years for archiving. Here, as so often, the trade-off is requirements vs cost: we can move data to a lower-cost archive storage model once the 90 days have expired in order to keep costs down.

First let’s consider the metrics. As detailed here, platform and custom metrics are stored for 93 days but you can route them to a destination such as Azure storage where you can keep them indefinitely, to a third-party solution via Event Hubs or to a Log Analytics workspace where different retention periods apply.

And, as with metrics, it goes without saying that Azure Monitor can help with logging data, and we can adjust the settings on our Log Analytics workspace to accommodate our needs. The first thing we need to understand is that there are two different periods: a Retention period and an Archiving period. All of this is detailed here, but in essence during the interactive retention period, data is available for monitoring, troubleshooting, and analytics. When you no longer use the logs, but still need to keep the data for compliance or occasional investigation, archive the logs to save costs. Archived data stays in the same table, alongside the data that’s available for interactive queries. By default, all tables in your workspace inherit the workspace’s interactive retention setting and have no archive policy. You can modify the retention and archive policies of individual tables, except for workspaces in the legacy Free Trial pricing tier.

If all the data ingested into the Log Analytics workspace must be available for analysis and troubleshooting for 90 days, the default workspace retention policy can be changed to 90 days. That solves the functional requirement mandate. For the non-functional requirement, we would need to set an archive policy per table, and we can use 2556 days (7 years) as the setting. These settings would satisfy our example requirements here.
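
As a hedged sketch of applying those two settings programmatically, a PATCH against the workspace Tables API might look like the following; the subscription, resource group and workspace names are placeholders:

```python
import requests
from azure.identity import DefaultAzureCredential

# Acquire an ARM token and patch one table's retention settings.
token = DefaultAzureCredential().get_token(
    "https://management.azure.com/.default"
).token

table_url = (
    "https://management.azure.com/subscriptions/<sub-id>"
    "/resourceGroups/<rg>/providers/Microsoft.OperationalInsights"
    "/workspaces/<workspace-name>/tables/AzureDiagnostics"
    "?api-version=2022-10-01"
)

body = {
    "properties": {
        "retentionInDays": 90,         # interactive retention (functional)
        "totalRetentionInDays": 2556,  # 7 years total; the rest is archived
    }
}

response = requests.patch(
    table_url, json=body, headers={"Authorization": f"Bearer {token}"}
)
response.raise_for_status()
```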

As another option, you can export your logs from the Log Analytics workspace to another destination. This is detailed here. What this means is that you can choose not to archive the data in Log Analytics but instead archive it somewhere else which may be a lower cost option for you whilst conforming to your requirements.

Summary

This concludes the first of our examples of how you can define and configure a monitoring strategy. In this post we used a real-world scenario based around networking and showed practical examples of each step.

As a reminder, don't spend a large amount of time trying to capture every single eventuality in your user stories. Monitoring is a large and evolving topic, so it's realistic to expect that things will change over time. With that in mind, aim for a Minimum Viable Product and build from there. That way you will start to get value from your monitoring strategy far more quickly.

New Azure features and functionality for May 2023
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/06/27/new-azure-features-and-functionality-for-may-2023/
Tue, 27 Jun 2023

It's been a busy month as far as new Azure features and functionality go! Just in case you've missed something, we've collected all of the announcements from May 2023 in one place. Be sure to give this list a read-through, as there's a wealth of exciting new functions and updates to check out!

General Availability

Updates

Misc

Useful Links

Getting started with Azure Machine Learning
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/06/20/getting-started-with-azure-machine-learning/
Tue, 20 Jun 2023

Azure Machine Learning provides an environment to create and manage the end-to-end life cycle of Machine Learning models. Azure Machine Learning’s compatibility with open-source frameworks and platforms like PyTorch and TensorFlow makes it an effective all-in-one platform for integrating and handling data and models. Azure Machine Learning is designed for all skill levels, with advanced MLOps features and simple no-code model creation and deployment.

 

Getting started with Azure Machine Learning

Azure Machine Learning (Azure ML) is a cloud-based environment where you can build and manage machine learning models. It's designed to govern the entire ML life cycle, so you can train and deploy models without focusing on setup. The platform is suitable for any kind of machine learning, from classical models to deep learning, and from supervised to unsupervised learning.

Azure ML is structured to help teams of data scientists and ML engineers make the most of their existing data processing and model development skills. Whether you prefer Python or R – or have previous experience with other open-source platforms such as PyTorch and TensorFlow – Azure ML is flexible enough to support these platforms and accelerate your work.

With built-in services like Azure ML studio that provide a user-friendly interface, and Automated Machine Learning capabilities that assist you in model selection and training, Azure ML has tools and features to suit every level of experience.
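
As a taste of what submitting work to the platform looks like, here is a minimal hedged sketch using the azure-ai-ml (v2) Python SDK. The workspace coordinates, compute cluster name and curated environment are placeholder assumptions:

```python
from azure.ai.ml import MLClient, command
from azure.identity import DefaultAzureCredential

# Placeholder workspace coordinates - substitute your own.
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<sub-id>",
    resource_group_name="<rg>",
    workspace_name="<workspace-name>",
)

# Submit a training script in ./src as a command job on a compute cluster.
job = command(
    code="./src",
    command="python train.py",
    environment="AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",
    compute="cpu-cluster",
    display_name="getting-started-train",
)
returned_job = ml_client.jobs.create_or_update(job)
print(returned_job.studio_url)  # link to watch the run in the studio UI
```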

Kickstart your Azure Machine Learning journey

Whether you’re a developer or simply someone who wants to get a feel for what Azure Machine Learning is all about, there are plenty of learning resources out there.

You can also find further learning resources on the Azure ML learning resources page.

 

Try out Azure Machine Learning

One of the best ways to get to grips with new tools and software is simply to give them a go. There's no better way to do this than getting stuck into Azure Machine Learning itself.

Trying out Azure Machine Learning is free, so give it a whirl today.

Learn more

Becoming hybrid by design
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/06/15/becoming-hybrid-by-design/
Thu, 15 Jun 2023

Hello Folks,

Is cloud the end state? Do I need to migrate all of my workloads? We have been having this conversation for years now, and the answer is a resounding no.

Migrate what makes sense, keep on-premises what makes sense, and figure out how to get the benefits of the two working together. Jason Zander, Executive Vice-President of Microsoft Azure, said it best at Microsoft Ignite in 2019:

“Hybrid, we believe, is a permanent state. It’s not just a transition.”

This is the truth we strive for every day here at Microsoft. To that end we are releasing more and more capabilities for you to leverage, such as Azure Arc-enabled Windows servers, which unlock services like Azure Policy, Azure Sentinel, Azure Automation and more. These all make managing your hybrid environment easier and more streamlined.

We’ve published a series that covers hybrid management – check out the posts below if you’d like to learn more.

  1. Best practices for onboarding Microsoft Azure Arc enabled servers
  2. Standardize DevOps practices across hybrid and multicloud environments
  3. Centrally design, deploy, and operate Kubernetes apps and clusters anywhere using Azure Arc
  4. Azure Arc – enabled data services with Azure Stack HCI and Azure Kubernetes Service (AKS)
  5. Azure Arc – enabled data services in disconnected mode
  6. Choose the right data solution for your hybrid environment

And remember, manage your environment in a way that makes sense for you. Just ensure that you manage ALL your environments using the tools that will bring you value.

Cheers!

Learn more

Using Azure Pipelines to increase creativity and reduce costs
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/06/13/using-azure-pipelines-to-increase-creativity-and-reduce-costs/
Tue, 13 Jun 2023

Building great software has always been a difficult task, requiring knowledge of coding, standards, and algorithms. Today, there are many additional worries a developer might have. Code almost never stands on its own; it's built on other packages. Developers work in teams, often in different locations. The code needs to run on a variety of platforms, and there's usually a long list of dependencies on helper tools, SDKs, and other artefacts.

This results in a very difficult road to deployment. It’s not uncommon for developers to spend more time on these issues than on writing the code. In addition, the repetitive nature of this work often leads to errors: it’s easy to make a small mistake that has a huge impact down the line. For instance, not keeping the underlying framework up to date might cause the software to fail on a system that runs a higher version of that framework.

Most developers are familiar with this effect, which is often referred to as the “works on my machine” state of software. Although that is meant as a joke, it does have some truth in it. Since developers build, test, and deploy on their own local development machine, chances are that even though it works on their computer, it will work differently or even not at all on the final machine it’s meant to run on.

But there’s more. Organisations need to adapt to changing environments if they want to stay ahead of the competition. This usually means that software needs to change quickly too—providing more and better features and fewer bugs. It also has to run on a lot of different platforms, and it needs to be built by teams all over the globe.

All of this adds to the workload of the developer in a very unproductive way. Developers should be building new features, solving bugs, and generally be busy adding value. Having to deal with the extra workload leads to distractions and potential problems.

Getting Started with Azure Pipelines

Azure Pipelines is a great way to mitigate those issues. With Azure Pipelines, we can move away from all those manual steps and have them done automatically whenever we need them.

An Azure DevOps service, Azure Pipelines is a system where you define the steps needed to build, test, and deploy your software once and then have the system take care of whatever else is necessary. It’s extremely simple to get started, but don’t let that fool you—it can be very powerful!

When you go to the Azure DevOps environment, you’ll find the option Pipelines, which is your starting point. There is a handy wizard to help you. For instance, you need to specify where your source code is located. There are several options available, including GitHub.

Once you've done that, you get a set of templates you can choose from, depending on what kind of software you are building. There are templates for ASP.NET, ASP.NET Core, .NET desktop, and Xamarin (both Android and iOS), among others. But these are just starting points. You can extend this pipeline as much as you want.

You can configure it to take all the dependencies from a NuGet repository, be it your own or a shared one. You can specify not only which tests to run and what to do if those tests fail, but also deployment slots for web apps, Azure Functions, SQL Databases, and so on. If you want to deploy to a staging environment and only swap staging with production when certain preconditions are met, you can specify that too.

You can adapt the pipeline as much as you want, and of course, these pipelines can be put under source control and distributed to others in your organisation.
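
As a flavour of what such a pipeline definition looks like, here is a minimal azure-pipelines.yml sketch for a .NET project; the trigger branch, pool image and SDK version are assumptions to adapt:

```yaml
# Minimal sketch of an azure-pipelines.yml - adapt to your project layout.
trigger:
  - main                       # run on every push to main (continuous integration)

pool:
  vmImage: 'ubuntu-latest'

steps:
  - task: UseDotNet@2          # pin the .NET SDK the build needs
    inputs:
      packageType: 'sdk'
      version: '6.x'

  - script: dotnet restore     # pull NuGet dependencies
    displayName: 'Restore dependencies'

  - script: dotnet build --configuration Release --no-restore
    displayName: 'Build'

  - script: dotnet test --configuration Release --no-build
    displayName: 'Run tests'
```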

Save time for more creative work

By using Azure Pipelines, you can automate many of the steps you used to have to do manually. Deployment is now a matter of starting the pipeline or, if you have continuous integration enabled, just checking in your changed code files. The pipeline engine will do the rest.

This results in better, more predictable builds and will leave you as a developer with more time to do what you do best: build awesome code.

You can get started by visiting the Azure Pipelines website.

Getting started with Azure Quantum
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/06/06/getting-started-with-azure-quantum/
Tue, 06 Jun 2023

Thinking about getting started with Azure Quantum? Curious about the Q# language? Here is a selection of resources that will start you on your Quantum journey.

Quantum computing presents unprecedented possibilities to solve society’s most complex challenges. Microsoft is committed to responsibly turning these possibilities into reality – for the betterment of humanity and the planet.

Over decades of research and development, Microsoft has achieved advancements across every layer of the quantum stack – including software, applications, devices, and controls – and is delivering true impact today through quantum-inspired classical computing.

Getting started with Q#

Bring quantum apps to life with the Quantum Development Kit for the Q# quantum programming language and Azure Quantum. In this open-source kit, you'll find tools to formulate and run optimisation problems on large-scale or hardware-accelerated Azure compute resources, and to develop durable quantum applications for quantum hardware.
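
As a first taste of the language, here is a minimal Q# sketch that allocates a qubit, puts it into superposition with a Hadamard gate and measures it, yielding a genuinely random bit:

```qsharp
namespace GettingStarted {
    open Microsoft.Quantum.Intrinsic;
    open Microsoft.Quantum.Measurement;

    @EntryPoint()
    operation SampleRandomBit() : Result {
        use q = Qubit();    // allocate a qubit in the |0> state
        H(q);               // equal superposition of |0> and |1>
        return MResetZ(q);  // measure in the Z basis and reset
    }
}
```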

Try out the Azure Quantum Preview

When developing on Azure Quantum, you accelerate your development life cycle by building your quantum solution once and running it on multiple systems with little to no change. Azure Quantum is your best path to leveraging the latest optimisation technologies from Microsoft and our partners as you seek long-term, cost-saving solutions.

With Azure Quantum and its Quantum Development Kit, what could be a heterogeneous set of hardware and software solutions is unified. Your development investments are protected in a rapidly evolving technological landscape, and these hardware and software innovations are brought to you with minimal to no change to your code base.

Trying out the Azure Quantum preview is free, so give it a whirl today.

Kickstart your Azure Quantum learning

Whether you’re a developer or simply someone who wants to get a feel for what quantum computing is all about, there are plenty of learning resources out there.

You can also find further learning resources on the Quantum learning resources page.

Learn more

Getting started with Azure AI
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/05/30/getting-started-with-azure-ai/
Tue, 30 May 2023

With AI, we can build solutions that seemed like science fiction a short time ago; enabling incredible advances in health care, financial management, environmental protection, and other areas to make a better world for everyone.

Discover Azure AI – a portfolio of AI services designed for developers and data scientists. Take advantage of the decades of breakthrough research, responsible AI practices and flexibility that Azure AI offers to build and deploy your own AI solutions. Access high-quality vision, speech, language and decision-making AI models through simple API calls, and create your own machine learning models with tools such as Jupyter Notebooks, Visual Studio Code and open-source frameworks such as TensorFlow and PyTorch.
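
As one example of those simple API calls, here is a hedged sketch of sentiment analysis with the azure-ai-textanalytics Python package; the endpoint and key are placeholders for your own Language resource:

```python
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

# Placeholder endpoint and key for a Cognitive Services Language resource.
client = TextAnalyticsClient(
    endpoint="https://<resource-name>.cognitiveservices.azure.com/",
    credential=AzureKeyCredential("<key>"),
)

documents = [
    "The new dashboard is brilliant and easy to use.",
    "Setup took far too long and the docs were confusing.",
]

# One API call returns a sentiment label plus confidence scores per document.
for doc, result in zip(documents, client.analyze_sentiment(documents)):
    print(f"{result.sentiment:>8}: {doc}")
```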

 

Getting started with Azure AI

AI is a broad classification of computing that allows a software system to perceive its environment and take action that maximises its chance of successfully achieving its goals. A goal of AI is to create a software system that's able to adapt, or learn something on its own, without being explicitly programmed to do it.

There are two basic approaches to AI. The first is to employ a deep learning system modelled on the neural networks of the human brain, enabling it to discover, learn, and grow through experience.

The second approach is machine learning, a data science technique that uses existing data to train a model, test it, and then apply the model to new data to forecast future behaviours, outcomes, and trends.

Kickstart your Azure AI learning journey

Whether you’re a developer or simply someone who wants to get a feel for what Azure AI is all about, there are plenty of learning resources out there.

You can also find further learning resources on the Azure AI learning resources page.

 

Try out Azure AI

One of the best ways to get to grips with new tools and software is simply to give them a go. There's no better way to do this than getting stuck into Azure AI itself.

Trying out Azure AI is free, so give it a whirl today.

Learn more

A look at the announcements from Microsoft Build 2023
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2023/05/24/a-look-at-the-announcements-from-microsoft-build-2023/
Wed, 24 May 2023

This year’s edition of Microsoft Build has now wrapped up, but don’t worry if you missed it! The high-quality sessions and keynotes from across the two days are available to watch on-demand via the Microsoft Build Session Catalogue.

The event brought us many surprises, so just in case you couldn’t tune in live, let’s walk through some of the key announcements.

Azure AI

With the continual advancements being made in AI, solutions are rapidly changing to meet the needs of users. Microsoft Azure AI services have several new capabilities to help customers increase productivity, efficiency and content safety.

Updates to Azure OpenAI Service, now in preview, will include enhancements like Azure AI Studio, which will better enable organisations to combine Azure OpenAI Service with their data; a Provisioned Throughput Model, which will offer dedicated/reserved capacity; and plugins that will simplify integrating external data sources and streamline the process of building and consuming APIs.

Azure AI Content Safety, a new Azure AI service, will empower businesses to create safer online environments and communities. Its models are designed to detect hateful, violent, sexual and self-harm content across languages in both images and text, and they assign a severity score to flagged content, indicating to human moderators what requires urgent attention.

Vector search for Azure Cognitive Search, the retrieval system for new large language models (LLM) apps, is coming soon in preview. Vector search allows developers to easily store, index and search by concept in addition to keywords, using organisational data including text, images, audio, video and graphs.

New capabilities for Azure Cognitive Service for Language, now in preview, will include the ability for developers to customise summarisation, in addition to the previously announced entity recognition, text classification and conversational language understanding (CLU) features, all powered by Azure OpenAI Service.

Azure Machine Learning drastically improves machine learning professionals’ ability to operationalise responsible generative AI solutions by enabling evaluation at all phases of the model lifecycle. Updates to Azure Machine Learning include:

  • Prompt flow, in preview soon, will provide a streamlined experience for prompting, evaluating and tuning large language models. Users can quickly create prompt workflows that connect to various language models and data sources and assess the quality of their workflows with measurements, such as “groundedness,” to choose the best prompt for their use case.
  • Support for foundation models, in preview, will provide native capabilities to fine-tune and deploy foundation models from multiple open-source repositories using Azure Machine Learning components and pipelines.
  • Responsible AI dashboard support for text and image data, now in preview, will enable users to evaluate large models built with unstructured data during the model building, training and/or evaluation stage. This helps users identify model errors, fairness issues and model explanations before models are deployed, for more performant and fair computer vision and natural language processing (NLP) models.
  • Model monitoring, in preview, will enable users to track model performance in production, receive timely alerts and analyse issues for continuous learning and model improvement.

Azure Data

Microsoft Fabric, now in preview, delivers an integrated and simplified experience for all analytics workloads and users on an enterprise-grade data foundation. It brings together Power BI, Data Factory and the next generation of Synapse in a unified software as a service (SaaS) offering to give customers a price-effective and easy-to-manage modern analytics solution for the era of AI.

Power BI has several new updates that will empower organisations to turn data into insights immediately with an industry-leading BI platform and include:

  • Copilot in Power BI, in preview, will infuse the power of large language models with an organisation’s data to help uncover and share insights faster.
  • Power BI Direct Lake, in preview, is a new storage mode within Power BI datasets that will allow organisations to unlock massive data without having to replicate it, by seeing straight through to the data in the lake.
  • Power BI Desktop Developer Mode, in preview, will enable developer-centric workflows through Git integration for Power BI datasets and reports.

Azure Cosmos DB is introducing a range of new enhancements to optimise costs, performance and developer productivity. These enhancements demonstrate Microsoft’s commitment to improving the user experience for app developers.

Azure SQL Database Hyperscale elastic pools is introducing a shared resource model for Hyperscale databases, now in preview. This update will help developers build and manage new apps in the cloud and scale multiple databases that have varying and unpredictable usage demands.

Developer Community

Microsoft is launching a variety of training and documentation on Microsoft Learn to help people leverage the power of AI.

The newly released content helps technology professionals build expertise and gain new skills in the latest AI innovations, including how to:

  • Use Azure OpenAI Service to summarise text, get code suggestions and generate images for a website.
  • Add intelligence to apps – and find insights – by creating tailored AI models within Power Apps.
  • Use Power Virtual Agents to build adaptable chatbots that use AI.
  • Code suggestions with GitHub Copilot to take projects to the next level.

Power Platform

Next-generation AI in Power Pages is revolutionising how customers build and launch data-centric websites for their businesses. With Copilot in Power Pages, now in preview, users can increase productivity and speed up the website-building process: generating text; creating complex forms, contextual chatbots and web page layouts; and creating and editing images and site design themes for rapid visual setup and customisation. All of this is possible in minutes using natural language input and intelligent suggestions.

Power Virtual Agents continues to help developers create more intelligent chatbots using the latest AI capabilities. New features include the ability for Power Virtual Agents to generate dialogue and complete actions, Azure conversational language understanding (CLU) integration, and the expansion of previously announced features, including conversation boosters in Power Virtual Agents.

Catalog in Power Apps, a new feature within Power Platform now in preview, will give developers and makers a place to publish and share the building blocks that underlie their apps. With every new app that developers create, their organisation will enjoy the benefits of a robust catalog that reduces the time and cost of each new app.

With Power Virtual Agents (PVA), users can use natural language to easily author an intelligent Microsoft Teams bot that points to any website available within the tenant for Teams users. This update, now in preview, will democratise building company-wide help desk bots, such as human resources bots, and department- or team-wide bots, such as onboarding bots.

And more!

This is just a small selection of announcements from Build 2023! Be sure to check out the Book of News to see everything, which includes announcements on AI, Security, Windows and more.

Missed the show? Check out the sessions you might have missed in the Session Catalogue, and follow the conversation on the UK Twitter channel, @MSDevUK, as well as on the #MSBuild hashtag!
