Open Source Archives - Microsoft Industry Blogs - United Kingdom
http://approjects.co.za/?big=en-gb/industry/blog/tag/open-source/ Tue, 25 Jul 2023 16:43:45 +0000

How Hello Lamp Post use Azure to help cities better understand their citizens
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2022/11/13/how-hello-lamp-post-use-azure-to-help-cities-better-understand-their-citizens/ Sun, 13 Nov 2022 17:55:28 +0000

We caught up with Tiernan Mines, who spoke to us about the Hello Lamp Post platform, how they’re using Azure to power their natural language processing, and what the future has in store for smart cities.

The post How Hello Lamp Post use Azure to help cities better understand their citizens appeared first on Microsoft Industry Blogs - United Kingdom.

Hello Lamp Post is a UK and US-based company working at the exciting crossover of the smart city space and customer experience. From urban planning to healthcare, they’re doing important work to reshape the ways in which cities can communicate with and obtain valuable insights from their citizens.

We caught up with Tiernan Mines, who spoke to us about the Hello Lamp Post platform, how they’re using Azure to power their natural language processing, and how they’re creating the future of how people will experience place.

Chris: Who are you, who are Hello Lamp Post and what do you do?

A portrait photograph of Tiernan Mines, CEO of Hello Lamp Post

Tiernan: I’m Tiernan, I’m the CEO and one of the co-founders of Hello Lamp Post. As CEO, you can imagine I wear a few different hats and that as we evolve and grow as a company, my role morphs and changes.

I like to think of my role as being a servant to our vision and my team – the team do all of the clever work. I’m there to support and make sure everyone’s moving in the same direction together, achieving our goals and doing so in a fulfilling, enjoyable environment.

In terms of my background and how I came about co-founding Hello Lamp Post, it was mostly a venture and commercial-based background. In the early days I had roles in sales, and since then I’ve founded a couple of smaller ventures. I’ve joined other start-ups to grow particular revenue streams or new products, and eventually Hello Lamp Post was born when working with my two co-founders, Ben and Sam.

As for what we do, we’re a software company that makes anything interactive – outside of the home, in the public realm, anywhere in the world. We do that by using people’s devices combined with QR codes, which allows us to have two-way chat interactions between people and a specific hyper-local place, space or object. This could be anything from a park bench to a bus stop or a building. We also bring this online, allowing website and marketing channels to become interactive.

For companies and organisations, Hello Lamp Post can be described as a customer experience tool for places. It gives people easier access to information, be that live bus times at a bus stop, what the council has planned for your neighbourhood, or the history of a statue. This is really important because this two-way chat provides the organisations and companies that are looking after those places or areas a better view of people’s needs and wants at any given time.

C: Where did the idea behind Hello Lamp Post come from?

T: Hello Lamp Post was born out of a couple of frustrations and observations. Perhaps it was triggered by the whole Smart Cities movement a few years ago, but we saw that the digital and physical worlds were coming together, in the sense that you could go anywhere and expect that there’s going to be the Internet, be it Wi-Fi or mobile data. That convergence has been happening for years, but we noticed that when we move through or use physical environments, it’s all very analogue.

Whether you’re at a bus stop during your commute, working in an office, or visiting a park or landmark, everything’s still very analogue. It was the observation that you might pass through an area and want information about that place, you have feedback you want to pass on, or you have ideas about those areas – for most people these thoughts are simply forgotten.

On top of this, whenever you’re using or experiencing a place, there’s always a high barrier to entry for information. Obviously we have access to search engines, but the barriers are still high, right? If you want to find out what the council’s doing in your local area, you have to trawl through a number of websites. If you’re looking at a statue, information is limited to a tiny bronze plaque. Or if you have a tonne of questions while finding your way around a hospital, you’re limited to navigating a complex website, trying to understand complex signage or find an available staff member.

On the flip side, the other frustration is that this is a two-way investment. If I want to communicate, I need to know what company or organisation is looking after that particular place or object. Again, it’s not an impossible task to figure out, but the barriers to entry are really high, and it cuts off a lot of interaction and excludes a large portion of society.

So why didn’t a digital platform exist that made it easier to find information, in-location? Also, for the companies and organisations looking after those places, why wasn’t there a platform that made it easier for them to understand their audiences? That’s where the idea came from.

A photo of a man using his phone to scan a Hello Lamp Post sign, which says it can give him the story of the scenic lookout he is at.

C: How did you build the first iteration of Hello Lamp Post, and how has it changed since then?

T: My co-founders and I were talking about these frustrations and observations, and the direction in which physical worlds will change considerably. We knew they were going to become more interactive in one way or another, whether it’s for gaming, as a utility, whatever it might be, and we saw an opportunity to be a part of that.

Then there’s serendipity, and part of this was an award that we ended up winning. Originally, Hello Lamp Post didn’t exist beyond an idea on a bit of paper between us, but we entered into the Playable City Award which is run by Watershed in Bristol. Whoever wins gets to build out the first iteration of their idea.

We started from a user perspective, imagining a digital journal across a city where objects and places become interactive – how people could use those places to leave their memories and ideas, and how these would be shared with other people interacting with the same objects and places. We entered Hello Lamp Post as an original concept and we ended up winning the inaugural award, which allowed us to build our first iteration.

We eventually deployed in Bristol for eight weeks and it was a considerable success. We proved that people will have conversational chats with inanimate objects, and that became our first jumping-off point.

At the time we saw huge potential, but we never expected it to become a tool, a platform, a solution or service. But off the back of that, a number of different cities and organisations around the world got in touch to ask if this was a platform that could be used to engage and communicate in a more automated way.

What started as a one-off concept ended up morphing into a useful and powerful tool, both for us as people and also for decision makers in the public and private sector.

C: In what ways are people currently using Hello Lamp Post?

T: The beauty of what we’ve built is that the interactive layer can be anywhere. There are really no limits on where Hello Lamp Post can be deployed to have two-way exchanges. So we don’t need to be stuck in the age of reading bronze plaques, scrolling through websites, hosting a town-hall meeting or sending someone out with a clipboard to run surveys.

To go deeper on a couple of examples, fire and rescue services are using our platform to allow people to do an automated at-home safety check from their own device, which triages them on whether they’re at risk of fire at the home or not. It allows the fire service more visibility on fire risks, and in the long term it reduces the number of avoidable fires in the home.

We’re also working with hospitals, automating their process of gathering feedback. With the NHS and other healthcare services being extremely strained, things like engaging with patients and getting their feedback fall by the wayside, so we’re helping to make sure that this doesn’t happen.

At the same time, some councils are using Hello Lamp Post for public wellbeing and suicide awareness/prevention – making key locations interactive to give people essential help and information at times of critical need. Again, this is all done through smart two-way interaction.

Two people using their phones stand next to a metal pole with a Hello Lamp Post sign on it, which states that it's an air quality sensor and they can chat with it.

C: You use Azure for natural language processing – can you tell us more about how you use it?

T: For us, we break natural language processing (NLP) down into three parts: the customer-facing side, the user-facing side, and the internal side that we use. We tend to use a lot of natural language processing for the customer-facing side, with regards to presenting insights back to them.

The public and private sectors like to gather a lot of data, but we’re moving into an age of insights rather than just data. We wanted a way to distil all of the free-text responses and data coming through the platform and present it back as useful insights at the click of a button. To that end, we use a lot of analytics around NLP and other areas that then feed into dashboards that our customers can look into.

From a user perspective, it’s really about using NLP live to assess what a user is saying, and that can be anything including sentiment, query recognition and classification of theme. Our algorithms have to understand what the user is saying and then decide what to do next.
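That live assess-then-route loop – sentiment, query recognition, theme classification, then a decision on what to do next – can be sketched in a few lines. This is a toy illustration only: keyword rules stand in for Hello Lamp Post's actual NLP models, and the theme names are made up.

```python
# Toy sketch of the live "assess what the user said, then decide what to
# do next" loop. Keyword lists and themes are illustrative placeholders.

NEGATIVE = {"broken", "dirty", "unsafe", "late"}
POSITIVE = {"great", "love", "clean", "helpful"}
THEMES = {
    "transport": {"bus", "train", "stop", "timetable"},
    "public_realm": {"bench", "park", "statue", "lamp"},
}

def assess(message: str) -> dict:
    """Classify sentiment and theme, then choose the next action."""
    words = set(message.lower().split())
    sentiment = ("negative" if words & NEGATIVE
                 else "positive" if words & POSITIVE
                 else "neutral")
    theme = next((t for t, kw in THEMES.items() if words & kw), "other")
    is_query = message.strip().endswith("?") or {"when", "what"} & words
    # The routing decision: answer a question, or log feedback for dashboards.
    action = "answer_query" if is_query else "record_feedback"
    return {"sentiment": sentiment, "theme": theme, "action": action}

print(assess("When is the next bus from this stop?"))
print(assess("This bench is broken"))
```

In production this classification would be done by trained language models (for example via Azure's language services) rather than keyword matching, but the shape of the decision is the same.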

Internally, we use NLP a lot with our analytics. We frequently analyse our databases to spot trends and how we can better utilise the data that we have access to. In the near future, it means better experiences for our users and customers.

Automation is a huge part of why we use NLP. In the early days of Hello Lamp Post we had to do a lot of manual analytics with spreadsheets, pivot tables and the like, so the dawn of natural language processing has really helped to automate a lot of that.

C: Why did you decide to use Azure over other cloud providers?

T: The kick-off point for us using Azure was when we got accepted into the Microsoft AI for Good program. We were part of the 2020 cohort, and it became a real eye-opener to the Azure platform for us. We became more familiar with the different toolsets and infrastructure offerings, and it gave us a good opportunity to sandbox a lot of what Azure has to offer. That’s when we realised how seamless it is.

Scalability was also very appealing as our ambitions were always to scale and grow, and we knew that the Azure platform could underpin all of that. It was a combination of being able to seamlessly use various tools that can talk to each other, but also knowing that if we grew rapidly then things wouldn’t fall over.

C: Do you use any open source software? If so, which software?

T: Yes, we do use open source software. We’re currently using Postgres, and some of our front end, externally-facing interfaces use Vue3 and React. On top of this, we also use several component libraries.

A Hello Lamp Post sign on a lamp post in Porthmadog, Wales. The sign is located on the platform of a train station.

C: Back in 2020, during the height of the Covid-19 pandemic, you launched an online version of Hello Lamp Post, Hello Council. Did you foresee yourselves running something like that eventually? Did what you learned affect the direction of the business in any way?

T: I think we always envisaged moving in that direction, so the short answer is yes. We always wanted to bring the offering online, and obviously things like engagement, communication and customer experience don’t only exist in the physical world, so really the pandemic just accelerated that move for us.

Interestingly, there was an increased demand for what we do in the physical world at the start of the pandemic, because companies weren’t able to have people out and about and now needed a contactless experience. Others wanted a more automated way to bring our platform online – think adding QR codes to letters that councils were sending out, for example, to make those letters interactive. Councils were one of the types of organisations that pushed us online sooner.

In terms of lasting effects, I wouldn’t say it changed our direction, but we definitely became more informed. One of the key things we learned is that it’s clear that customer experience isn’t one-size-fits-all, it’s multi-faceted and about making it more accessible by lowering the barriers to entry. I think that was the biggest learning for us.

C: What’s next for Hello Lamp Post?

T: Our focus now is really about accelerating growth to help more customers and communities around the world. We’re focused specifically on growing our customer base in the UK and North America, and we’ve now got a team based in the US, which is really exciting, growing that market across the public and private sectors.

Without saying too much we’ve got a really exciting product roadmap coming up as well, so we’re really accelerating towards that, but our main focus is just wanting to work with and help more customers and communities around the world.

C: What is Hello Lamp Post’s vision?

T: Ultimately, we want to make everywhere interactive. That might sound lofty, but what’s driving us is making it easier to connect people, information and place. There’s a sweet spot that our platform sits in that no one else does, which is partly why we’re shooting so high.

It does have a purpose, of course – the more people are connected, the better they’re informed about and involved in the places where they live, work and play. Our goal is to deploy in as many places as possible around the world and create the future of how we experience ‘place’. That ultimately feeds into our mission – make places better for people. 

Learn more

How Zeti is using ZERO and Azure to reduce emissions across fleets of vehicles
http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2022/05/13/how-zeti-is-using-zero-and-azure-to-reduce-emissions-across-fleets-of-vehicles/ Fri, 13 May 2022 12:38:47 +0000

We caught up with Daniel Bass, who told us about Zeti's ZERO system, how they're using Azure to reduce the carbon footprint for their clients and themselves, and what the future has in store for the transport industry.

The post How Zeti is using ZERO and Azure to reduce emissions across fleets of vehicles appeared first on Microsoft Industry Blogs - United Kingdom.

An illustration of leaves representing sustainability, next to an illustration of Bit the Raccoon.

Zeti is a UK-based start-up doing exciting work for sustainability in the transport sector by providing fleets of zero or ultra-low emission vehicles. They have the enviable mission of accelerating the shift to clean transport and doing their part in helping tackle the climate crisis.

We caught up with Daniel Bass, who told us about Zeti’s ZERO system, how they’re using Azure to reduce the carbon footprint for their clients and themselves, and what the future has in store for the transport industry.

Chris: Who are you, who are Zeti and what do you do?

A portrait photo of Dan Bass, CTO at Zeti

Dan: I’m Dan Bass, the co-founder and CTO of Zeti. My background is in software engineering and investment management, but I’ve also written a few apps and two books on serverless architectures with Microsoft Azure. I also have a small blog that talks about serverless as well.

Zeti helps fleet operators convert to electric vehicles by making it as simple, easy and transparent as paying for any other utility. We’re really doing this to help solve the climate crisis and, in London in particular, the air pollution crisis. We offer a patent-pending form of finance called Pay-Per-Mile: instead of charging interest and regular monthly payments like a normal car lease, we give customers a cost-per-mile. Then, for every mile that their fleet drives, we charge them for those miles every month.

We’re currently focused on business consumers rather than individuals, and it’s proven really popular with our business customers. For one thing, we pool their utilisation. In normal leasing, each vehicle has a fixed limit on the number of miles it can run. In our model, if you have a really heavily-used vehicle, it just contributes to your bill through cost-per-mile. There are no individual limits, which makes it a really attractive model. It also allows them to scale down their costs during quiet periods: for example, during COVID our fleet customers were able to park up their vehicles and reduce their operating costs significantly.
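The pooled pay-per-mile model is simple to express: only the fleet's total mileage matters, with no per-vehicle cap. A sketch with a hypothetical rate – Zeti's actual pricing isn't public:

```python
# Illustrative sketch of pooled pay-per-mile billing. The rate and mileage
# figures are invented; Zeti's real pricing is not public.

PENCE_PER_MILE = 30  # hypothetical cost-per-mile

def monthly_bill(miles_per_vehicle: list[int]) -> int:
    """Bill in pence. Usage is pooled: a heavily used vehicle simply adds
    to the total, and there is no individual per-vehicle mileage limit."""
    return sum(miles_per_vehicle) * PENCE_PER_MILE

busy_month = monthly_bill([2_000, 150, 50])  # one heavily used vehicle
quiet_month = monthly_bill([100, 80, 20])    # fleet parked up, costs scale down
print(busy_month, quiet_month)  # 66000 6000
```

The quiet-month case shows the COVID scenario mentioned above: when vehicles are parked up, the bill falls with the mileage rather than staying fixed as it would under a conventional lease.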

Chris: What is ZERO and what does it bring to your customers?

Dan: ZERO is basically the core of Zeti – it powers everything that we do. It does all sorts of things, with one example being the ability to track vehicles in real time using the telematics installed on the vehicles. I mentioned earlier that we do Pay-Per-Mile financing, but before the advent of telematics and 4G/5G, we would have had to check every vehicle that we finance each month and look at its dashboard. That’s obviously impractical – fleet operators are very unhappy about not being able to use their vehicles, not to mention how difficult it’d be logistically.

Instead, we use telematics to automatically and transparently pull that data in real time, which is handled by ZERO. It does this for billing purposes, so it will generate bills monthly, send them to customers and collect payments via direct debit, which is an innovation in itself. Again, businesses don’t tend to pay by direct debit, and investment companies certainly aren’t used to being able to receive money via direct debit.

It also pulls quite sophisticated electric vehicle data, such as when vehicles are charging and battery health information, and this in turn can help us diagnose issues with vehicles. For fleet operators, we can proactively tell them that the battery in one of their vehicles is looking a bit weak. Maybe they’re charging it in a way that’s damaging – we can advise against it to give them a better range on the vehicle for the next two years.

ZERO also does a lot of reporting work, providing real time automated reporting to all of our investors. Zeti’s model is that we don’t lend our own money, we instead act as a partner to help fleet operators access investors who want to put money into, what we call in the investment management industry, ESG – environmental, social and governance. In this case we’re really focused on the environmental side of this with the climate crisis, and there’s a lot of investment money available to help tackle the climate crisis. As a side note, this is because people are demanding that their pensions and investments help tackle these issues.

ZERO gives a live view of how much carbon dioxide and nitrous oxide is being saved by these vehicles in real time at the tailpipe. Because of this, we can see these vehicles driving through central London without emitting nitrous oxides – whereas if we hadn’t financed an electric vehicle, there would instead be a diesel vehicle, which certainly would be emitting them.
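The tailpipe-savings view implied here amounts to miles driven multiplied by a diesel baseline, since an EV emits nothing at the tailpipe. A sketch with rough illustrative emission factors – not ZERO's real figures:

```python
# Sketch of a tailpipe-savings calculation: every electric mile avoids the
# emissions a comparable diesel would have produced. The factors below are
# rough illustrative values, not the ones ZERO actually uses.

DIESEL_CO2_G_PER_MILE = 275.0  # hypothetical tailpipe CO2 for a diesel
DIESEL_NOX_G_PER_MILE = 0.7    # hypothetical tailpipe NOx for a diesel

def tailpipe_savings(ev_miles: float) -> dict:
    """Savings at the tailpipe = diesel baseline x electric miles driven."""
    return {
        "co2_kg": ev_miles * DIESEL_CO2_G_PER_MILE / 1000,
        "nox_g": ev_miles * DIESEL_NOX_G_PER_MILE,
    }

print(tailpipe_savings(10_000))  # {'co2_kg': 2750.0, 'nox_g': 7000.0}
```

Because the mileage comes from live telematics, this figure can be updated continuously rather than estimated once a quarter.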

In the private investment world, which I’ve worked in for a long time, the best you might expect would be a monthly spreadsheet. I’ve seen examples where someone would literally get an Excel spreadsheet every month that was reaching its cell limit, and that was their data load. That was on the better end of things, as you’d actually get lots of data rather than a PDF that just says everything’s fine. So having this level of real time transparency is really good for ZERO.

For fleet operators, we’re creating a utility-like experience, and the good thing about these kinds of smart metering services is that you can see exactly how much you’re using, what happens when you change things, the impact of changes immediately, and predictions into the future of the fleet. That’s what the ZERO fleet operator portal does, on top of showing them how much carbon dioxide they’re saving, which is something they can communicate back to their customers. So really, ZERO is the core of our operation that automates the entirety of our offering.

Chris: A lot of the ZERO platform is automated – how have you built this/what are the benefits?

Dan: The ZERO platform is entirely built on Microsoft Azure using serverless tech. The core billing and payments flow is built using durable functions on Linux function apps, all written in C#. The various portals and dashboards – anything we need to show to a user – these were built with Azure Static Web Apps, which are fantastic. We’ve also got conventional function apps for things like the telematics adapter that collects all the data from vehicles and then normalises it into a nice structure.

For us at Zeti, the benefits of all this automation are twofold. Firstly, without it, we couldn’t make as great a product. In the earlier example where we’d have to go and visit vehicles to write down their mileages, we simply wouldn’t be able to deliver any kind of meaningful analysis. For investors we even provide a real time internal rate of return, so they can see exactly what percentage they’re making on their money. Real time stats and analysis would become, at best, once a month – probably once a quarter – if it wasn’t automated. It’s all this core automation that really powers being able to deliver value; otherwise you’d be bogged down in doing the basics.

Secondly, it allows us to show that, fundamentally, finance can be affordable. Our head of operations, Chelsea Dowling, has a saying which we’ve actually stuck on a wall here: “Make it tech’s problem”. It was originally tongue-in-cheek for whenever something unexpected came up, but it’s actually something that we really internalise at Zeti. Our objective as a company is not to just grow headcount for the sake of growing headcount, because that incurs costs and that cost eventually has to come from somewhere, right? The more efficient we can make the company through automation, the less we have to charge and the quicker we can finance more electric vehicles. We want to put our time and resources into tackling the climate crisis and the air pollution crisis, rather than simply building a big team.

Chris: Are you using open source software?

Dan: C# itself is open source – we write the majority of our backends in C#. We also use TypeScript, which is another Microsoft open source language. I use Visual Studio Code for all of my development, which I find really effective.

There’s all sorts of open source components, too, such as Netlify CMS. We actually made a contribution to Netlify CMS to show people how they can use it on Azure, so it’s now in their documentation. We use Material UI, which is a framework for React. We also use a bit of Python and PySpark on the back-end for our large data analytics.

Chris: You use a lot of Azure Static Web Apps with ZERO – why is this/why not alternatives like PaaS?

Dan: There’s a few reasons for it. Firstly, for me, it’s a lack of maintenance. As we’re only a small team at Zeti and I’m one of only three people on the tech side, we don’t have a lot of time to spare. We want to minimise the amount of time that we spend on operations, so that we can maximise the time spent delivering new features and delivering a fantastic product that makes our customers happy.

I’ve operated PaaS systems before and they’re fantastic. They reduce maintenance significantly, but they don’t reduce it quite as low as Static Web Apps do. The reason is that you still need to choose things like how many cores you want, how much memory you need and what the IOPS should be. Having to worry about that is operations in itself, because you have to set up dashboards and pay attention to them. It creates quite a lot of work and takes time away from delivering features, which is what you actually want to do. Static Web Apps really embody the serverless approach. It’s not truly no-ops – that’s kind of an overused phrase, I think – but it’s less-ops, so we can focus on building a great experience.

The second one is on the scalability side of things. Generally whenever I build something, I build it as a client-side framework of some kind. Generally React with a serverless API is my favourite at the moment, which is helpful because it’s inherently very scalable as you do the separation right at the start. This is unlike some other programming paradigms, where you end up being fixed to a server full model which is then very difficult to scale.

With Static Web Apps for example, they delivered a feature recently which lets you deploy your front end globally. With no effort at all, I could click a toggle and all of a sudden my React app is being served by a global CDN, and my users in America suddenly got decreased latency, which we could measure. That kind of pre-built scalability is really helpful.

Finally, there’s the cost. As a startup, particularly in the early days, you need to keep a really tight handle on your costs, particularly if you want to do things properly and have multiple environments that are exact copies of production. That can be very difficult for a startup – if production costs me £300 a month then I can’t replicate it exactly for dev and test, as that would triple my bill! That’s usually how you have your first outage, when dev and test don’t behave the same as production.

Static Web Apps are great because they’re so cheap. They’re free for a very high level of usage, and I think it’s a minimal cost per month after that. The monthly costs can add up, so you aren’t going to be spinning them up endlessly, but realistically it’s very reasonable for the level of service you get. For a startup, that’s invaluable.

Chris: As a start-up it’s important to watch costs – how do you manage this, and why serverless?

Dan: I actually did a conversion of an app at one point from PaaS – from a traditional app service and SQL to a storage account and table storage – which saved us 98% on the cost of it. I was astonished by this – I knew there were savings to be had by doing this, but I didn’t realise it’d be so much.

One of the things I realised whilst analysing how that happened was that it’s all down to utilisation. If your first outage is generally due to your dev and test environments not matching up, then your second will be due to under-allocation. An article about you hits the press and you don’t know it’s coming – everyone visits your website and they’re seeing 500 errors because of the tiny amount of CPU and cores that you’ve put on it to save money. It just gets overwhelmed, and then you have a crisis because now people might start to think your product doesn’t work.

From then on people tend to massively over-allocate, so the utilisation rate of any given server full system is going to be maybe 5%, because you want to allocate for that peak and you never want everything to go down. You also want to have enough time during that peak to issue an alert and get everyone on board to double or triple the scale. But in the middle of the night, who’s on your app? Very few people. As such, utilisation rates in the single digits are not unreasonable.

With the serverless system, the utilisation rate is as close to 100% as possible at all times, because it flexes with how much people are actually using it. That’s where you get these big cost swings and that’s how we do cost management. Any time we’re incurring cost, it’s because we’ve got customers that we’re delivering value to. You obviously have to look at absolute cost, it’s very important, but there’s no ‘fat’ in there, so to speak.
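The utilisation argument above is easy to put numbers on. All prices and traffic figures here are invented for illustration – the point is only that a provisioned server is paid for around the clock while serverless cost tracks actual use:

```python
# Back-of-envelope sketch of the utilisation argument: a provisioned server
# sized for peak traffic may sit ~5% utilised, while serverless cost scales
# with requests. Every number below is hypothetical.

provisioned_monthly = 300.0    # fixed server cost per month, pounds
utilisation = 0.05             # fraction of provisioned capacity in use
serverless_unit_cost = 0.0001  # pounds per request, hypothetical
requests = 150_000             # actual monthly traffic

paid_for_but_idle = provisioned_monthly * (1 - utilisation)
serverless_monthly = requests * serverless_unit_cost

print(paid_for_but_idle)   # 285.0 pounds a month spent on idle capacity
print(serverless_monthly)  # 15.0

savings = 1 - serverless_monthly / provisioned_monthly
print(f"{savings:.0%}")    # 95%
```

Under these made-up figures the saving lands in the same ballpark as the 98% reduction Dan describes, which is the shape of the effect: the gap is the idle capacity you stop paying for.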

One of the other things that plays into this is our environmental mission. We obviously can’t go around telling people to use zero emission vehicles because it’s better for the planet while burning 95% of all electricity that goes into our servers on nothing. So instead, if you’re serverless, your utilisation rate is very high, so you end up very economical with the electricity that you’re using. It’s a lot more efficient on that side as well.

Chris: How does Azure help with Zeti’s mission in reducing CO2 emissions?

Dan: The fundamental part is by providing a great platform to build on, one that gets out of our way and lets us deliver. Our business is not computing – we are a technology business, but what matters is the deliverables that result from it. So it lets us focus on Zeti’s mission, as we don’t have to spend valuable time configuring settings. We can go and focus on getting another thousand zero emission vehicles on the road instead of diesels, or deliver another automated payment solution. All things which will help us encourage more investors to put money in, because they’ll trust the business more.

Microsoft themselves have a great environmental mission, such as the push to go carbon negative. Good environmental credentials are really important to us because when an investor comes to us looking to put money into vehicles, they also look at us to see that we’re credible on an environmental side as well. We can tell them that we’re using Microsoft as our cloud provider, and they can look through everything Microsoft is doing. They’re doing brilliant things in that regard, so it’s a partner that we can be proud of.

There’s also the credibility side – we’ve got to convince investors that have hundreds of millions of pounds to put that money through Zeti and into clean vehicles. We’re a startup, so it’s a conversation that has challenges. Using Microsoft services helps with this credibility, especially when they come to evaluate things like security. There’s a perception that startups don’t take security seriously. We do, obviously, but what we can also do is point them towards Microsoft Azure and its massive list of certifications.

Obviously they have to evaluate tech like ZERO on top of this, but what we don’t have to do is have someone visiting a warehouse somewhere in the South of England to evaluate whether we’ve got the right shift schedules, because that’s all handled by Azure. That helps our environmental mission by getting us over that conversation and closer to putting millions of pounds into clean vehicles, which is really important.

Chris: For developers who are new to Azure Static Web Apps, do you have any advice or tips on how to get started and why they should give them a go?

Dan: To get started, Microsoft Docs is an excellent resource. They’ve got some really good starters for any front end framework that you might be used to, and you’ll find it particularly comfortable if you’re a front-end developer. The fact that you can go in and have your GitHub repo deployed to the cloud or merged in just a few clicks is great.

On the serverless API side, there’s a lot of great starters out there already. To be honest, I looked through a lot of documentation like that and it’s all I used – I don’t have any kind of inside track or anything like that! If there’s a particular person to call out it’d be Anthony Chu, a product manager at Microsoft that works on Azure Static Web Apps. He writes fantastic articles on how to do really funky stuff with them, for when you want to push them that bit further.

If you’re coming from something like ASP.NET, there’s a little bit of a learning curve, but if you’re used to Blazor Server, you can deploy Blazor WebAssembly to Azure Static Web Apps and keep yourself in the C# world – you don’t have to venture into React and TypeScript like me!

One of the things that I often see is people thinking that Azure Static Web Apps are just for public websites. We use them for all of our secure portals, and in fact, the security is probably significantly better than a server-based solution because there isn’t really a server for anyone to hack into. When you’re serving static files, they can’t really be messed with in the same way that they could be on WordPress, for example.
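Securing a route on a Static Web App is declarative: access rules live in a staticwebapp.config.json file at the root of the app. A minimal sketch, with illustrative route and role names (not Zeti’s actual configuration):

```json
{
  "routes": [
    {
      "route": "/portal/*",
      "allowedRoles": ["authenticated"]
    },
    {
      "route": "/admin/*",
      "allowedRoles": ["administrator"]
    }
  ],
  "responseOverrides": {
    "401": {
      "redirect": "/.auth/login/aad",
      "statusCode": 302
    }
  }
}
```

Unauthenticated requests to those routes are rejected or redirected by the platform before they ever reach your static files, which is what makes the “no server to hack into” argument work for private portals too.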

Chris: What’s next for Zeti and ZERO?

Dan: Zeti has entered the US market, where we’re working on a number of pilot projects. Anyone who wants to convert their fleet to electric in the US, get in touch. We’re also looking at continental Europe, so again, if anyone wants to get in touch about that, I’d encourage them to do so. We’re also very open to partnerships for things to bundle with our financing, for example electric charging cards, and generally anything that will help us accelerate this green transition – please do get in touch.

For ZERO itself, we’re working on a range of things on our data side, such as a new fleet operator dashboard experience and live maps of all our vehicles. As we’re entering the US, ZERO has to support US taxes and claiming payments, as well as following US regulations.

It’s quite an exciting time really, with our international expansion and our vehicle expansion as well. We’ve recently taken on Teslas for the first time which is very exciting, and we’re looking at a series of other brands to integrate into ZERO too.

Learn more

The post How Zeti is using ZERO and Azure to reduce emissions across fleets of vehicles appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
Happy 20th Birthday, .NET! http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2022/02/14/happy-20th-birthday-net/ Mon, 14 Feb 2022 18:38:31 +0000 Today marks 20 years since Visual Studio .NET launched and the first version of the .NET platform was released to the world.

The post Happy 20th Birthday, .NET! appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
An image depicting a human figure in front of a calendar, next to a picture of Bit the Raccoon.

When .NET arrived, I was working on Classic ASP and PHP websites and Visual Basic desktop applications. C# was a brand new language, but I was surrounded by developers with great experience who were all clear that they were moving to C# and .NET.

I have early memories of getting to grips with data design surfaces, grid views, and my first proper attempts at object-oriented programming in Visual Studio. In 2006, there was a big leap forward with Visual Studio 2005 and .NET 2.0. The topic of conversation for several months was generics, though there were some other great features like partial classes and iterators that also arrived at the same time.

Each version of .NET and Visual Studio became an event in the calendar, unlike the experience of previous languages and tools. It was immediately obvious how we could put each new feature to use in the real world in exciting and innovative ways.

Then something amazing happened. ASP.NET MVC arrived, and Microsoft made the code available on Codeplex with a public license. This meant we could look through the code to get ideas, or to better understand how MVC worked under the hood. For those of us who had worked with raw HTML and CSS, MVC was like a homecoming event. We had full control over all the HTML generated and a great pattern for organising the code.

Within five years, the MVC web stack was fully open source, with an Apache 2.0 license; a pattern that soon became normal with Microsoft now making thousands of repositories open sourced on GitHub. For a few people, this means they can directly contribute to these projects, but for many others it provides an amazing resource for finding out how certain problems have been solved. It is not surprising that people refer to this open source pivot as being a new era for Microsoft developers.

Visual Studio, .NET, and C# have remained at the top of my favourites list for 20 years because they keep getting practical and useful innovations. The release notes are filled with features that are immediately useful, just like that .NET 2 release all those years ago. The .NET 6 release was just as exciting as the .NET 2 release and we still have excited conversations in the development community like we had about generics.

When I have an idea for a project, .NET is still my first choice. It is a start-fast and stay-fast ecosystem where getting an idea down quickly doesn’t crush productivity down the line. In 2022 I can get started with fewer lines of code and with amazing code completion that has a deep understanding of the patterns being written. I can easily run my code anywhere. With .NET I don’t have to trade off longevity and innovation; I can have both.

It is amazing that .NET has thrived for 20 years. It has been the most amazing journey for me, but I know there is more to come!

-=-

Steve Fenton is a Microsoft MVP for Developer Technologies. He has been sharing his passion for TypeScript since October 2012, presenting at developer meet-ups, running training sessions, and answering questions on Stack Overflow. He has worked on large-scale JavaScript applications for over 14 years, and with TypeScript for over five years. You can read more of his work on his personal blog, or chat with him over on Twitter.

Learn more

The post Happy 20th Birthday, .NET! appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
6 reasons you should attend Azure Open Source Day  http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2022/02/09/6-reasons-you-should-attend-azure-open-source-day/ Wed, 09 Feb 2022 15:11:33 +0000 If you’re a professional developer, you might be interested in coming along to Azure Open Source Day on 15th February – and it’s free!.

The post 6 reasons you should attend Azure Open Source Day  appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
An image depicting a human figure in front of a calendar, next to a picture of Bit the Raccoon.

If you’re a professional developer, you’ll no doubt be using a number of different open-source tools, platforms and tech. If so, you might be interested in coming along to Azure Open Source Day on 15th February – and it’s free! Need to be persuaded? Here are six reasons why it might help you be successful…

  1. Be among the first to hear Microsoft CEO Satya Nadella share a special announcement on the 30th anniversary of Linux! 
  2. Discover tools for every developer, including Visual Studio Code, GitHub Codespaces, and the new Azure Cosmos DB API for MongoDB. Find out how you can work on your projects anytime, on any device, from browser-based Visual Studio Code to cloud-powered environments in GitHub Codespaces. Hear from SitePro, an energy and municipality industry software as a service (SaaS) provider, that built a cloud-based solution using services including Azure Cache for Redis, Azure Cosmos DB, Azure Cognitive Services, and Docker to capture real-time data from multiple data sources and internet of things (IoT) inputs. 
  3. Learn about the latest innovations in containers and serverless computing, including Azure Kubernetes Service (AKS), Azure Red Hat OpenShift, and Azure Container Apps, and get an overview of the full spectrum of Microsoft cloud-native applications and open source on Azure. 
  4. Dig into CBL-Mariner, the Linux distribution built by Microsoft to host Azure services. A rapidly increasing number of Azure services are built using open source, leading Microsoft to create CBL-Mariner, Azure’s own internal Linux distribution. Learn why Azure decided to build Mariner, how it’s being developed in the open, and the benefits to using open-source services on Azure. 
  5. Explore practical ways to optimize your Linux investments and innovate faster on Azure. 
  6. Get answers to your questions. Attend the event between 9:00 – 10:30 AM Pacific Time (UTC-8) to watch demos, get tips and best practices from Linux industry leaders like Red Hat and SUSE, and get insights from the product experts and engineers building these solutions. You’ll also have the opportunity to ask Azure and open-source industry experts your questions during the live chat Q&A.   

Join us to hear more about these benefits, explore solutions using Linux and Azure together, or expand your developer toolkit. We hope to see you there! 

Azure Open Source Day
Tuesday, February 15, 2022
9:00 AM–10:30 AM Pacific Time (UTC-8)

Register today! 

The post 6 reasons you should attend Azure Open Source Day  appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
Powering 3 million requests an hour with open source software http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2021/07/27/powering-3-million-requests-an-hour-with-open-source-software/ Tue, 27 Jul 2021 14:00:58 +0000 We spoke to Tony Gorman from ASOS to learn how open source software is being used to power services that handle upwards of 3 million customer requests an hour.

The post Powering 3 million requests an hour with open source software appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
logo, company name

Open source technology continues to drive innovation, but it’s not always obvious when it’s used behind the scenes. We spoke to Tony Gorman from ASOS to learn how open source software is being used to power services that handle upwards of 3 million customer requests an hour.

 

Can you tell us a little about yourself?

My name is Tony Gorman and I’m one of the Engineering leads at ASOS. I work with the Principal Software Engineer, Principal Test Q/A and Principal Platform Engineer groups, who are all working across ASOS in a bunch of different platforms. One of the things I do is work with a small core team who create, maintain, manage and curate pipelines for deploying AKS, with all the bells, whistles and restrictions that ASOS need in order to run a secure platform. I spend a lot of time with those folks working on these pipelines and working with AKS, and a large part of the rest of my time is spent with teams that are working on implementing AKS.

 

What were your objectives in moving from classic cloud services to .NET core, containerising on Linux and moving more into open source?

The long term goals were, first of all, to get off of cloud services because of the various restrictions we had with them. We also wanted to take the opportunity to improve our cost base, so we were looking at density, the DevOps experience and the security that would wrap around all of that. We looked at AKS amongst a few other options, and we decided that it was a good angle to approach these goals from.

 

Which Azure technologies are you currently using?

We use a lot of different technologies, as you would expect from an online e-commerce operation: databases, IaaS, PaaS, and, as with AKS, some things that fit in between. We also use various serverless and event-driven offerings provided by Azure.

 

How are ASOS currently using Kubernetes?

We currently use Kubernetes mostly to run microservices. We use the micro-x architecture extensively across our estate, written in a variety of languages and running on different application tiers. We have in-house Data Science and Integration teams that also use Kubernetes to run workloads.

 

I also saw that you’re using Redis as well. How are you using that?

We use Redis in a few places in ASOS, usually as a side cache for a number of applications across our estate that are handling large volumes of data and critical workloads. There are a lot of people within ASOS who are quite committed to the open source movement, and we prefer and encourage people to code in the open. We are more active on the open source front than we used to be, and we hope to continue that trend.

 

Why were you considering AKS to begin with, and what’s important about AKS specifically for how ASOS uses it?

One of the things we were trying to do was “shift left”, particularly in the DevOps space, so we wanted to streamline and enhance our CI/CD process. That led us down the containerisation route, and once we decided that containers were the way forward, it was then a case of making a choice on orchestration. We are heavily invested in Microsoft and we arrived at Kubernetes via containers and then looked at how best to run them. We considered Azure Container Service early on, and then AKS became a product. We fundamentally favour a more PaaS-like experience over an IaaS experience; they seem to interlock and dovetail nicely for us, so that’s basically how we said, OK, let’s start the journey with AKS and see if it progresses to a point where it meets our needs.

 

What has been the return on investment in making this move from a business perspective? 

It’s saved us quite a lot of money on compute and it’s saved us a lot of time in terms of CI/CD, so we were able to get some of our deployments to happen much faster because of how we’ve engineered stuff to work on it. We’re getting faster releases and we have fewer incidents because we feel that the compute platform is a lot more tractable.

There are also some indirect benefits as well – AKS itself is an interesting solution for our engineers and prospective new engineers to work on. We spend a lot of time training people on AKS as well, so getting the opportunity to be trained on a new and interesting technology has a return on investment without just being about the compute.

 

How did you get people on board with the move to AKS?

At the start we had already recognised that cloud services had a shelf life, so we had some impetus behind making our compute choice anyway. I think it was relatively easy because we did a lot of proofs of concept very early on while working with Microsoft. The first platform we actually used was a Data Science platform, and we got really good value out of that straight away. So most people looked at that, saw that it seemed to be working, and gave us the green light to carry on. But we planned our way into it pretty well and didn’t have many hiccups along the way, so it worked out well for us.

 

Has moving to AKS influenced how your cloud strategy looks? If it has, how would you say it’s influenced it?

I think it has reminded everyone that the cloud has come of age. It’s a mature, secure, stable, reliable place to do your business. Any qualms we had in the past have long been vanquished. I think AKS has also shone a light on how many cloud services we have, how many challenges there can be and how much effort is required in order to maintain and release them.

There’s also been a shift in the marketplace as well, in terms of skillsets. We, like any company, have to keep recruiting, and it’s actually much easier to recruit people now who either have experience with or want to get involved in something like Kubernetes, which plays in our favour.

We are quietly and incrementally moving off of cloud services. We have a compute strategy that gives you simple choices between AKS, App Services and something like serverless with Functions, based on the workload. But a lot of our workloads just fit naturally into the AKS slot. Because we’ve done a lot of automation behind the scenes and the Microsoft Product Group have done a lot of stuff to make life easier, it’s definitely focused us on containers as the way forward.

 

Was there anything that tripped you up during the move to AKS? Is there anything you’d recommend to people thinking about their own moves to AKS?

Number one is to make sure you understand the security boundary and that you have a clear understanding of what your security needs are. We had a lot of challenges at the start, and in some respects we actually over-engineered for our particular situation. Some of that was standard Kubernetes upstream, some of that was due to the way in which AKS started off, and some of it was down to how we structure our network topology at ASOS. So make sure you understand your security perimeter properly and that you understand how you can apply that within AKS.

The second thing I would say is to make sure you understand the learning curve, and make sure you have some form of training program in place. It doesn’t have to be radical, but we recognised that we needed one very early on, so we worked with Microsoft on a bunch of training courses that we run internally. These are very popular, and attending one is a prerequisite before a team starts using AKS.

The third thing is that it’s easy to get up and running with AKS, but you need to be clear about all the extra stuff that you need. You need VNets and NSGs. You need key vaults and a CI/CD strategy worked out. Understand that there’s way more to running an AKS cluster than running an “az aks” command – there’s a lot of plumbing that needs to go into place. We took a decision very early on to wrap that all up in a reusable pipeline, and I think it’s paid dividends. We run a lot of clusters and there are no clusters that don’t use our internal pipeline, because it saves people a lot of time, effort and money.
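To make that plumbing concrete, here is an illustrative Azure CLI sequence with made-up resource names, assuming an existing resource group; a real pipeline like the one described above would also wire in NSGs, role assignments, monitoring and the rest:

```shell
# Illustrative names only; a production setup wraps all of this in a reusable pipeline.
RG=my-rg
LOCATION=uksouth

# Networking first: the cluster needs a VNet and subnet to land in.
az network vnet create --resource-group $RG --name aks-vnet \
  --address-prefixes 10.0.0.0/16 \
  --subnet-name aks-subnet --subnet-prefixes 10.0.1.0/24

# A Key Vault for the secrets the workloads will need.
az keyvault create --resource-group $RG --name my-aks-kv --location $LOCATION

# Only now the cluster itself, wired into that subnet.
SUBNET_ID=$(az network vnet subnet show --resource-group $RG \
  --vnet-name aks-vnet --name aks-subnet --query id -o tsv)
az aks create --resource-group $RG --name my-aks \
  --node-count 3 --network-plugin azure \
  --vnet-subnet-id "$SUBNET_ID" --generate-ssh-keys
```

Even this sketch shows why a single “az aks” command isn’t the whole story: the network and secrets infrastructure exists before the cluster does, which is exactly what a reusable pipeline captures.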

One of the things that I’ve valued on this journey is actually having really good contacts within Microsoft to work with. Even in the earlier days when we spoke to the product group more directly, it was super valuable and helpful for us to be able to have that level of contact and get our point of view across. We’ve had lots of calls with lots of people on various subjects, so I think we would have had a much harder time if it weren’t for that. We meet regularly with CSAs from Microsoft and they have really helped with improving how we run AKS.

 

Will there be more coming to AKS from ASOS in the future?

We’re moving more workloads over to AKS, but there isn’t a one-size-fits-all answer for choosing how to run an application – which means we’re always looking at options that offer a sensible default for the type of workload under consideration. In tandem, we’re continually improving how we use AKS, with a particular focus on ease of use. This is the general direction of travel for Microsoft as well so it aligns nicely. We ultimately want AKS to be invisible for our engineers and trusted by our Security and Platform community.

 

Learn more

The post Powering 3 million requests an hour with open source software appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
Using open source software to connect charities with people in need of social housing http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2021/03/17/using-open-source-software-to-connect-charities-with-people-in-need-of-social-housing/ Wed, 17 Mar 2021 16:25:36 +0000 We speak to Chris Sainty, who proved anything is possible in OS by using it to develop the Blazor app that connects charities to people in need of social housing.

The post Using open source software to connect charities with people in need of social housing appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
An ASCII image of a block of flats, next to an image of Bit the Raccoon.

Leveraging the power of the open source community lets developers create cost-efficient apps that have real-life solutions for people in need. We speak to Chris Sainty, who proved anything is possible in OS by using it to develop a Blazor app that connects charities to people in need of social housing. 

 

Tell us a bit about yourself. 

I’ve been a software developer for a little over 15 years now. I started out using VB.NET early in my career and then transitioned over to C# a few years later. I’ve also learnt JavaScript over the years, and then TypeScript more recently. 

I’ve worked in lots of different industries over my web development career. I’ve also always been drawn more to frontend/UI programming. I’ve worked with everything from WebForms, to MVC, to JavaScript frameworks like Angular. 

In terms of tooling, Visual Studio is my go-to IDE for almost everything. When I’m not using full VS, I’ll be using VS Code. It’s an awesome text editor and is my preferred environment when working on JavaScript applications or general scripting. 

 

How long have you been active in the open source community? 

I’ve been a user of open source for many years. But it’s only in the last couple of years that I’ve started giving back and actively contributing to the community. I think this was largely due to a fear of people seeing my code and thinking it was terrible! But once I got over that, the experience has been really rewarding. 

 

What first attracted you to using open source? 

I’ll be honest, it was probably the cost, or lack thereof. Now, having used open source projects for so long, I would say it’s the fact the code is open – everyone can contribute to improve it for everyone else. If something is missing you can raise an issue, discuss it, raise a pull request, and boom! You now have that missing feature you wanted. But it’s also there for everyone else as well, and that small donation of your time is going to benefit many people. 

 

What are your favourite ever open source projects? What makes them special? 

There are so many great OS projects I’ve used over the years. xUnit, Serilog and MediatR are in pretty much all my projects nowadays. But my favourite open source project has got to be ASP.NET Core. It’s an amazing framework to work with. 

 

What was the inspiration behind your project? 

I used to work for a housing association where we had some conversations with a charity in our area that connected people who need help together with volunteers. They did this using a paper system which was hindering their ability to expand. It also made the process complicated and difficult to manage. 

We offered to write them some bespoke software which would move them from a paper based system to a fully electronic system. It would also provide automation where possible to streamline their process and allow them to focus on the people, not the paperwork. 

 

How did the project start and shape over time? 

We started by defining the requirements with the charity and scoping out a minimum set of features for the MVP release. Once this was done, the team got to work breaking the features down into small chunks in order to begin working on them. At this point we also decided on the technology we would use for the project. 

We started building out the initial infrastructure of the project and then moved straight into building features. But we have used several open source projects to help deliver the solution. We’ve used MediatR, Serilog, Blazored Toast, Blazored Typeahead and Blazored FluentValidation. 

 

How successful has it been so far? 

We were able to deliver the MVP in just over a month.  

The charity absolutely love it, so far. When we showed them the product in a review they wanted to take it there and then. Seeing what we’ve built so far has also made them think of lots of other ideas for features so the roadmap is filling up fast! 

 

What’s next for the project?  

We are currently working on some feedback items before starting work on the next milestone. At the moment, we want to get them up and running with a live system. Once they have this, they can begin using it and reaping the benefits over their current paper-based system. 

We will then start focusing on features for the second milestone, which could include rolling the solution out to other branches the charity has. 

 

Learn more

The post Using open source software to connect charities with people in need of social housing appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
Starting my first Open Source project http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2020/10/27/starting-my-first-open-source-project/ Tue, 27 Oct 2020 15:00:16 +0000 Getting started with open source can be daunting. Well, I don’t count myself as a veteran open source expert but I want to share with you some success I've had starting my first open source project.

The post Starting my first Open Source project appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
An illustration depicting a modern workplace, next to an illustration of Bit the Raccoon.

Getting started with open source can be daunting. When people think about open source projects, they often think of huge projects like TensorFlow or smaller libraries such as the awesome Alexa.NET SDK, and that they are run and contributed to by veteran open source experts.

Well, I don’t count myself as a veteran open source expert but I want to share with you some success I’ve had starting my first open source project.

Inspired by the awesome WebDevchecklist.com, I had the idea of creating a website with a checklist to help with code reviews called CodeReviewChecklist.com. It isn’t necessarily meant to be a definitive checklist, more a prompt/reminder of things to look for. The list of checks is one of the things needing some additional work!

This time, however, unlike my previous websites/side projects, I decided to try and get help from the development community and allow them to make changes themselves by making it open source. After all, this is a tool meant for use by the development community, so it made sense to open it up to contributions from them.

 

Initial Contributors

Following a short lull after creating the project, I started to get approached by people wishing to help out. The first issues included things like creating a toggleable dark theme and ensuring the site is responsive, which were immediate improvements.

Interestingly, the initial people that reached out to me asking whether they could help with CodeReviewChecklist.com heard about the project whilst I was giving a talk on a completely different subject (Augmented Reality on iPhone for .NET developers using Xamarin, C# and .NET), which just goes to prove that the more you put yourself out there and give, the more people will take notice and wish to help.

Some of the initial contributors admitted themselves that they hadn’t had much OSS experience, which suited me down to the ground as I hadn’t had much experience running an OSS project! We would be learning this together.

 

Live Streaming on Twitch

As an aside, around about the same time I started development of the site, I also started to live stream some of my coding sessions on Twitch – something I found to be incredibly fun and wish to get back to when I have more free time. If you haven’t already thought about checking out Twitch for either watching or broadcasting coding sessions, I recommend you check it out.

 

Showing GitHub Issues on the Site

So by this time, I had created a few issues in GitHub. However, what I really wanted to do was to pull in issues from GitHub and show them on a page on the site. Eventually I managed to tweak some code I found to do this.

The result is a list of the outstanding GitHub issues on the /OpenSource page on the site that people can see, without even having to go to the repository on GitHub.
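The code I tweaked isn’t reproduced here, but the general shape of pulling open issues from the GitHub REST API into a page looks something like this (the repository name and element id are placeholders, not the site’s actual code):

```javascript
// Sketch: fetch open issues from the GitHub REST API and render them as a
// simple HTML list. Repository name and element id are placeholders.
const REPO = "owner/repo"; // substitute the real repository here

// Turn the array of issue objects returned by the API into list-item markup.
function renderIssues(issues) {
  return issues
    .map(i => `<li><a href="${i.html_url}">#${i.number}: ${i.title}</a></li>`)
    .join("\n");
}

// Browser usage: fetch open issues and inject them into the /OpenSource page.
async function showIssues() {
  const res = await fetch(`https://api.github.com/repos/${REPO}/issues?state=open`);
  const issues = await res.json();
  document.getElementById("issue-list").innerHTML = renderIssues(issues);
}
```

The issues endpoint is public for public repositories, so no authentication is needed for a read-only list like this, though unauthenticated requests are rate limited.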

 

Hacktoberfest

Hacktoberfest presented another opportunity to enable and encourage the community to help improve the site, so following the very simple project maintainer steps, I added ‘hacktoberfest’ as a topic in my repository and this attracted additional contributors to tackle outstanding issues. What I would love is for contributors to continue to help improve the site well after Hacktoberfest has ended!

 

Things to do

Having never run an open source project before, I have learnt a few things that I need to do to improve the project on GitHub:

  • Provide contribution guidance
  • Provide a code of conduct
  • Implement an automated CI build & release pipeline
  • Improve the content on the actual site!

And I will get to these, I promise!

 

Summary

Starting to run or starting to contribute to an open source project isn’t as daunting as you may think. All of the people I have come across in the OSS community and on GitHub are happy to help if they find your project of interest or if you run into difficulties.

Giving something back can be very rewarding, and in our line of work OSS contributions are a great way to give back, have fun and learn at the same time.

 

Useful Links

The post Starting my first Open Source project appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
Creating an API to tap into Twitter with Open Source Software http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2020/09/08/creating-an-api-to-tap-into-twitter-with-open-source-software/ Tue, 08 Sep 2020 14:00:27 +0000 Jamie Maguire shows how and why he created his Social Opinion API, as well as how you can best take advantage of it.

The post Creating an API to tap into Twitter with Open Source Software appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
An illustration representing a data warehouse, next to an illustration of Bit the Raccoon.

When I was a kid, there was never enough money to buy full price games. I’d get a magazine that had demos of games, and after I’d played those to death, I’d turn to the back of the magazine where there were code listings. I’d type them out and hack away at the creations. Later on I’d order public domain games from magazines that came with source code, altering the code to learn how they worked and whether I could modify them.

In 2001 I dropped out of the final year of my CS degree (I had my Diploma in S/W Engineering) to take an IT Apprenticeship that lasted 24 months. I spent the first 12 months in various departments, with the remaining 12 months in a software development team for a consultancy, building a master patient index system.

At the end of the apprenticeship I was offered a job, staying there for 5 years before leaving to join a start-up, followed by a consultancy where I moved up the ladder to become a senior consultant and technical lead. I broke out on my own and became an independent contractor for a while, then returned to employment.

In between that, I completed an MSc in Computer Science with a focus on Bayesian Theorem and Text Analytics, and how this can be used to surface insights in social media data. I started to document these activities on my blog, which gained readership and resulted in my code experiments being picked up by Twitter themselves. These would later be used to help build Pluralsight Courses and community work, which resulted in an MVP Award in AI.

 

Building the Social Opinion API

I’ve been working with the Twitter API for many years and was working on a research project through an MSc I was doing outside of my day job. The project involved Twitter analytics, and I built an API using machine learning-based approaches to help me classify Twitter data using C#.

Shortly after this, Twitter ran a three-year developer initiative called #Promote. I joined a few dots and submitted minimal viable products to #Promote each year, each building on its predecessor and all built using .NET. These products could help surface signals such as keywords, commercial intent and more.

One iteration was centred around audience segmentation and creating audiences for marketing purposes. That is, finding the right person at the right time, with the right message. Through my #Promote submissions, Twitter’s DevRel and Ad Partnership Teams contacted me to show demos of the software. I’ve since built good relationships with the DevRel and Product Teams as a result of these activities.

 

Why did I build the API?

Through various conversations I’d had at the time, I knew a major Twitter API release was being planned, and I remembered the pain I experienced as a developer when trying to consume the Twitter API back in the early years!

No one else was building a .NET SDK targeting v2 of the Twitter API, so I wanted to be first on the market for .NET developers. I wanted to make it easy for developers to consume these new APIs and to build a community around the SDK. Achieving this also makes it much easier for me to build and add new features to a social media analytics platform I work on in my spare time. It's written in .NET Core, so it's cross-platform, and an abundance of free tooling and services such as VS Community, GitHub and Azure credits makes it easier than ever to build and ship when you have limited resources.

The biggest challenge for me was finding the time to work on projects while keeping up with a regular job and family life. I got up two hours earlier each day and spent time at the weekends on open source activities, as well as engaging directly with the Twitter DevRel team on multiple occasions. It took four months from the start to the first iteration.

 

How developers can use the API

Whether you’re looking to grab social media analytics, perform topic analysis of Twitter data or build integrations with the Twitter API, there are many possible ways to use the Social Opinion API. You could even identify the most discussed products, services, locations or businesses, or use it for ad-tech and marketing. There are also many reasons why you should consider giving it a go:

  • Accelerates development with the Twitter API
  • Shields developers from having to write low-level HTTP and security code (OAuth, etc.)
  • Gives developers a rich set of easy-to-use objects (e.g. Tweet.Text) in their code, making the development experience simpler
  • Surfaces Twitter data in existing applications with just a few lines of code
  • Installs in a few clicks via a NuGet package, or from the code on GitHub
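In practice, consuming the SDK can be as simple as a few lines of C#. The snippet below is purely illustrative: the class and method names here are assumptions made for the sake of the sketch, not the SDK's actual surface, so check the project page for the real API.

```csharp
using System;

public class Example
{
    public static void Main()
    {
        // Hypothetical client type and methods - names are assumptions.
        // Credentials would come from the Twitter developer portal.
        var client = new SocialOpinionClient(apiKey: "...", apiSecret: "...");

        // Work with rich objects (e.g. tweet.Text) rather than raw JSON
        // and hand-rolled OAuth-signed HTTP requests.
        foreach (var tweet in client.SearchRecentTweets("#dotnet"))
        {
            Console.WriteLine(tweet.Text);
        }
    }
}
```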

User feedback has also helped shape the project itself – the data surfaced by the SDK/API was used to build an early version of a SaaS tool called Social Opinion. I've also had direct feedback saying the data in the dashboards is a great way to see what's forming the public conversation in one place.

 

Using open source software

Open source can help developers learn about new practices or approaches to development. I think we live in a bit of an API economy; more than ever it’s becoming easier to pull code examples or APIs from places like GitHub. You can use open source to quickly build prototypes and plug gaps in your existing software or understanding.

My advice to those on their open source journey is simple: don't be intimidated about shipping your work. Sharing your work online can be nerve-wracking – but learn to dance with that! If you have an idea, just code it or write it and get it out there. It could end up being something that helps you in your daily job, like an internal tool. Your work might not be for everyone, but that's okay!

 

More from the author

Jamie Maguire is a Software Architect with nearly 20 years’ experience architecting and building solutions using the .NET stack. For more of Jamie’s articles, be sure to check out his website.

Want to learn more about this project? Be sure to view the project page, as well as Jamie’s overview of the API/SDK on his website. Also check out:

 

More from the OSS series

The post Creating an API to tap into Twitter with Open Source Software appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
Using open source software to put the NHS in the pockets of over 40 million people http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2020/07/22/using-open-source-software-to-put-the-nhs-in-the-pockets-of-over-40-million-people/ Wed, 22 Jul 2020 14:00:12 +0000 When open source is paired with passionate people, we get innovative, creative solutions to help others. Meet Peter Farrell, Technical Architect at Kainos Software, who is using open source technology to bring the NHS to the pockets of millions of people.

The post Using open source software to put the NHS in the pockets of over 40 million people appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
An image of a mobile phone with a stethoscope on screen made with ASCII art, next to an illustration of Bit the Raccoon.

Kainos provides digital technology services for organisations across the globe. For over 30 years it has delivered award-winning digital transformation, data, AI, cloud and design solutions, and today employs over 1,600 people across Europe and North America.

 

Working with the NHS

In September 2017, the UK Secretary of State for Health made several public commitments focused on addressing the needs of the 40+ million patients and 7,500+ GP practices in England by providing digital access to core NHS services. The ambition was to create a universal digital front door for the NHS through which patient services could be delivered, and the NHS App was created to help deliver this commitment.

We worked with NHS Digital and fellow Microsoft partners, including BJSS, to create a fully integrated, user-centric service delivered through native smartphone apps, in just 15 months. It enables the 40+ million patients in England to check symptoms, book appointments, order repeat prescriptions, view their medical record and register as an organ donor, all from their smartphone, truly transforming how people in England access healthcare.

We continue to work on the programme, enabling the next wave of services to be provided through the app. We have also helped, and will continue to help, the NHS in the UK with its response to COVID-19.

 

Using Kubernetes and Azure

This project required a foundational platform that was scalable and secure. After significant evaluation, we chose to adopt the Azure Kubernetes Service (AKS). Adopting a managed Kubernetes offering like AKS provided the scalability and adaptability needed to allow the app to effectively service millions of people. It also provided the speed needed to allow delivery to progress at pace, ensuring we met deadlines throughout the project.
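To give a sense of how little ceremony a managed offering involves, an AKS cluster with autoscaling can be provisioned with a handful of Azure CLI commands. This is a generic sketch with placeholder names and sizes, not the project's actual configuration.

```shell
# Create a resource group, then a managed AKS cluster with the
# cluster autoscaler enabled so node capacity tracks demand.
az group create --name my-rg --location uksouth

az aks create \
  --resource-group my-rg \
  --name my-cluster \
  --node-count 3 \
  --enable-cluster-autoscaler --min-count 3 --max-count 10 \
  --generate-ssh-keys

# Fetch credentials so kubectl can talk to the new cluster.
az aks get-credentials --resource-group my-rg --name my-cluster
```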

It is very important to us that software like Kubernetes is open source. We align closely to the UK Government’s Design Principles, one of which is: “Make things open: it makes things better”. We share this philosophy and try to use and create open-source software as much as possible.

This was also one of the first times NHS Digital had hosted a user-facing transactional service on the public cloud. We chose the Azure platform for this project for its flexibility, extensive service catalogue and highly integrated, multilayered approach to securing workloads. Additionally, our strong partnership with Microsoft is key to our ongoing strategy: Microsoft's constant investment and innovation in its technology means we know we are delivering world-class technology for our customers.

 

Pushing forward with open source

Open-source software (OSS) is critically important to developers. It eliminates “reinventing the wheel” – spending time and money designing and implementing a solution to a problem that someone else in the larger community has already solved. More often than not, the community has already identified and rectified issues that you haven’t even thought of when considering the problem. OSS allows us to build and improve upon the experience of others in the field.

Kainos believe in “Open by Default”. The source code for many of our public sector projects has been published to GitHub, empowering the community at large to contribute to the work. Of course, the source product is ultimately owned by the customer, and open-sourcing any product is a customer decision which we are happy to give guidance on.

If you are thinking about releasing your work as open source and it uses third-party components, my advice would be to thoroughly check any licensing restrictions that apply. For a short summary of the different licensing models, check here.

 

More from the author

I’m Peter Farrell, a Technical Architect at Kainos Software. You can find me on LinkedIn or Twitter and I also write some blog posts on Medium. Make sure you follow Kainos across LinkedIn, Twitter and Facebook as we post updates about our work across these channels frequently.

Want to learn more about this project? Be sure to view the full case study on the Kainos website.

 

More from the OSS series

The post Using open source software to put the NHS in the pockets of over 40 million people appeared first on Microsoft Industry Blogs - United Kingdom.

]]>
Bringing open source sentiment analysis assistance to neurodivergent people http://approjects.co.za/?big=en-gb/industry/blog/technetuk/2020/07/15/bringing-open-source-sentiment-analysis-assistance-to-neurodiverse-people/ Wed, 15 Jul 2020 14:00:42 +0000 When open source is paired with passionate people, we get innovative, creative solutions to help others. Meet Luce Carter, who is using open source technology to bring sentiment analysis assistance to neurodivergent people.

The post Bringing open source sentiment analysis assistance to neurodivergent people appeared first on Microsoft Industry Blogs - United Kingdom.

]]>

How I got into software development is actually quite an interesting and personal story. My first ever blog post tells it in more detail for anyone interested, but essentially, I was in my late teens with no friends, bullied, struggling with my mental health and a bit lost. I had always liked computers and wanted to return to education, so I took an IT course at a local college. Programming was one of the modules I took when I started, and I just fell in love with it.

A classmate was already an experienced programmer and offered to help me learn C#. He would teach me outside of college, so as well as learning to code, I made a friend, joined his social circle and never looked back. It honestly saved my life!

That led me to take Software Engineering at university, and I have been lucky enough to work in the industry ever since graduating in 2014. It's also one of the reasons I blog and speak publicly: I am eager to share knowledge in the hope it helps someone the way technology and code have helped me.

 

EmotiPal (formerly Body Language Assistant)

EmotiPal is a mobile app built with Xamarin and Azure Cognitive Services that helps me detect sentiment in text and emotion in faces in photographs.

In April 2018, I had the pleasure of meeting Jim Bennett, Senior Cloud Advocate at Microsoft. We had been in contact a lot while I reviewed his book Xamarin in Action on my blog, but this was the first time we were both speaking at the same event. His talk on Cognitive Services in Azure had me hooked: it was so easy to get started with the power of AI and the cloud, and the documentation was fantastic!

Seeing how their vision SDK could be used to detect emotion in faces got me buzzing with ideas. As someone who is neurodivergent and struggles to read people, I thought of an app that I could use to help me understand sentiment better.

Then in May 2018, I had the pleasure of attending Build in Seattle, where Brandon Minnick, a colleague of Jim's, gave a talk and demo on the sentiment analysis side of Cognitive Services. It made me realise I am sometimes poor with written text too!

A few months later, I decided to join the world of public speaking and submitted to speak at my first conference. I wanted to do an intro to Xamarin, but also a longer version showing how easily you can combine Xamarin and Cognitive Services to do something beyond just "Hello, World!". This is what spurred me on to finally write my app, which I named EmotiPal. It has two uses: it's a great app for helping me (and surprising myself), and a fantastic demo for talks. Winner!

 

How does it work?

EmotiPal is a Xamarin.Forms app at heart, with shared UI and code between Android and iOS. When you first open the app, you are met with a menu page with two buttons so you can pick which function you wish to use.

In the sentiment analysis page, there is a text box and a button. You enter the text, click the button and the app sends that text to Cognitive Services using an SDK. Cognitive Services then sends back a response including the most likely sentiment, which is processed for display on the page.
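The sentiment round trip looks roughly like this in C#. This is a minimal sketch using the current Azure.AI.TextAnalytics SDK (EmotiPal itself may use an earlier Cognitive Services package); the endpoint and key are placeholders.

```csharp
using System;
using Azure;
using Azure.AI.TextAnalytics;

class SentimentDemo
{
    static void Main()
    {
        // Endpoint and key come from your Cognitive Services resource.
        var client = new TextAnalyticsClient(
            new Uri("https://<your-resource>.cognitiveservices.azure.com/"),
            new AzureKeyCredential("<your-key>"));

        // Send the text and read back the overall sentiment.
        DocumentSentiment result = client.AnalyzeSentiment("I had a lovely day!");

        // e.g. Positive, Negative, Neutral or Mixed, with confidence scores.
        Console.WriteLine(result.Sentiment);
    }
}
```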

The photo analysis page is slightly more complex. It uses Microsoft's Xamarin.Essentials NuGet package (included out of the box with all new Xamarin.Forms projects), which provides platform-level functionality such as taking a photo or selecting one from the device. The app then sends that photo to Cognitive Services using the Vision SDK, which identifies faces in the photo and returns a list of attributes about each face it sees, including the emotion.
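The photo flow can be sketched in a few lines too. This assumes the Microsoft.Azure.CognitiveServices.Vision.Face package that was current at the time (emotion attributes have since been retired from the Face API); the endpoint, key and file name are placeholders.

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.Azure.CognitiveServices.Vision.Face;
using Microsoft.Azure.CognitiveServices.Vision.Face.Models;

class PhotoAnalysisDemo
{
    static async Task Main()
    {
        var client = new FaceClient(new ApiKeyServiceClientCredentials("<your-key>"))
        {
            Endpoint = "https://<your-resource>.cognitiveservices.azure.com"
        };

        using var photo = File.OpenRead("photo.jpg");

        // Ask the service to return emotion attributes for each detected face.
        IList<DetectedFace> faces = await client.Face.DetectWithStreamAsync(
            photo,
            returnFaceAttributes: new List<FaceAttributeType> { FaceAttributeType.Emotion });

        foreach (var face in faces)
        {
            var emotion = face.FaceAttributes.Emotion;
            Console.WriteLine($"Happiness: {emotion.Happiness}, Sadness: {emotion.Sadness}");
        }
    }
}
```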

As I mentioned earlier, I had seen Jim Bennett's talk and was really excited by how easy it was to get started and by all the possibilities of what you could do with it. As a lover of Xamarin, I already had a soft spot for Azure and Microsoft, so the approachable documentation, free cost of entry and Jim's talk convinced me that Azure would be perfect and most likely the easiest to integrate with Xamarin.

No matter how bad my code might be (that one is no doubt subjective), I always set my repos as public on GitHub in case they can help anyone. I never set out to make EmotiPal open source; it was just a natural consequence of that. However, I also made sure it was public on GitHub because I mention it in my talks, and I want attendees to have the chance to dissect it and understand how it works at their own pace.

 

Challenges

For me, the hardest part was probably learning how to use multiple SDKs: HttpClient (yup, in all my years as a dev I have written so little network code that I didn't know how to do it; we all have gaps in our knowledge!), the media plugin for taking photos, and the Azure Cognitive Services SDK. Once I understood those, I had to work out how to piece them all together and how to lay out the app in a way that made sense from a design perspective.

Overall, it took me a few weeks, but that was with only a few hours put in every few days. The app itself is actually pretty simple, so someone with more experience may well have done it a lot quicker.

I have big dreams for the app longer term. For starters, I want to make the app look much nicer. It has very basic styling at the moment, and it is quite possible that a desktop-style menu page when you open the app is terrible design for a mobile app. I also want to find a way to add features such as speech-to-text analysis, analysing photos already on the device, and other things that might help those who are neurodivergent.

This leads on to the ultimate goal, which is to release it on the app stores so it can help others besides just me. Although AI has limitations, such as difficulty understanding sarcasm and a limited range of detectable emotions, it is improving all the time, and I still think the app could help many people.

 

Embracing open source

For me, open source software is very important. It gives us all access to powerful projects and libraries at no cost, and provides code samples we can learn from as we try to achieve something in our own code. It lets a product improve and expand faster through community involvement (for example, Xamarin.Forms itself is fully open source). Perhaps most importantly, it often gives newer developers a welcoming place to get involved in a bigger project, with many projects using labels to identify work that might suit a newer developer.

If you’re thinking about releasing your work open source, try and make it as welcoming as possible to everyone to get involved, no matter their experience. Create good documentation on how to get started, the architecture and any known issues or caveats.

Also try to take time beforehand to make sure the code is readable and maintainable so it's not daunting to a potential maintainer. Oh, and most of all? Don't leak your API keys! 😉

 

More from the author

My name is Luce Carter. I am a newly appointed Developer Advocate, currently working at MongoDB. I am @LuceCarter1 on Twitter, LuceCarter on GitHub, LuceCarter on Twitch, and I write on my blog, https://lucecarter.co.uk. I haven't been as active lately with blogging, but there are still some interesting posts there for anyone interested in my story.

More from the OSS series

The post Bringing open source sentiment analysis assistance to neurodivergent people appeared first on Microsoft Industry Blogs - United Kingdom.

]]>