Thorsten Hans: VPN Austria – Unlock ORF and ServusTV

Here is how you can unlock the popular free TV channels ORF and ServusTV with a VPN for Austria. With a VPN like the one from NordVPN* you can unlock sports events like Formula 1, matches of the UEFA Champions League and the UEFA Europa League, the World Cup and European Championship, the DFB-Pokal, as well as a lot of winter sports.

How can you enjoy the free content of ORF and ServusTV without having to be in Austria? I’ll show you here.
With a reliable VPN, such as our test winner NordVPN* or the second-placed CyberGhost*, you can virtually relocate to Austria. Here is the explanation in detail:

How to unlock ORF and ServusTV Austria with VPN

Time needed: 15 minutes.

Unblock ORF and ServusTV in four simple steps with a VPN for Austria.

  1. Choose a VPN provider

    Play it safe if you care about your data and choose a reputable provider like NordVPN* or CyberGhost*. They have proven themselves to me.

  2. Install and launch the VPN software

    NordVPN and CyberGhost offer clients for Android, Windows, iOS, macOS, Amazon FireTV Stick, Linux and even Raspberry Pi. Install the right software for your system.

  3. Choose a server

    Choose a server in Austria and connect. With our favorites, you have several servers at your disposal, which means you always have fallback options. This is important, because streaming providers sometimes detect and block servers. In that case, only switching to another server will help.

  4. Start streaming

    Now you can watch ORF or ServusTV Austria completely protected from anywhere in the world. Once you are connected to the server, you can go to the ORF or ServusTV website and start streaming.

NordVPN is our top choice – unblock ServusTV and ORF Austria

To access ORF and ServusTV from anywhere in the world, you have to bypass the providers’ geoblocking. And the best way to do that is with a VPN service that is reliable and reputable. In my opinion, NordVPN is the best VPN provider and the best choice for streaming ORF and ServusTV Austria worldwide. Of course, you can also choose another provider. For NordVPN, I gladly pay about 3 Euros per month for all its advantages. The strict no-logs policy alone is worth it to me. This means that my data is never collected, stored or shared. So my ISP or the government can never access my personal data.

The best 3 VPNs for Austria under the magnifying glass

Since choosing the right VPN service can turn into a doctoral thesis given the unmanageable oversupply of providers, we have listed the most important criteria. After all, not every VPN is suitable for the Austrian programs of ORF and ServusTV. The provider must be able to reliably and seamlessly bypass geo-blocking without being detected and deliver the required data speed so that even sports events can be streamed smoothly.

In my opinion, these 3 are the best VPN services for Austria:

  • NordVPN* – the best all-rounder
  • CyberGhost* – the runner-up, great for streaming
  • PIA* – the inexpensive classic

The all-rounder NordVPN – in detail

NordVPN* has the best price-performance ratio. It offers clients for Android, Windows, iOS, macOS, Amazon FireTV Stick, Linux and even for Raspberry Pi. Connections on 6 devices at the same time are possible. Moreover, it has a kill switch. You can choose from a whole 61 servers in Austria. Best of all, you benefit from a money-back guarantee – during this period, you can ask NordVPN support for a refund if you don’t like the service.

The top streaming VPN CyberGhost – in detail

You can use CyberGhost* on up to 7 devices simultaneously. It supports Android, Windows, iOS, macOS, Amazon FireTV Stick, and Linux. It offers browser add-ons for Firefox and Chrome, has an adblocker, and includes malware, tracking, and phishing protection. CyberGhost also has a kill switch and offers a trial period with a money-back policy.

PIA the VPN classic for Austria – in detail

With the VPN from PIA*, 10 parallel connections are possible. It offers a client for Windows, iOS, macOS, and Android. It has browser extensions with which you can unlock Austrian programs as well. PIA also offers a money-back guarantee.

FAQ – Frequently asked questions and answers about VPN usage for Austria

What to do if the streaming via VPN Austria does not work properly or not at all?

Check your VPN connection. If you are sure that your VPN is switched on, delete the cache and cookies of the browser. Change the server manually and not via the automatic quick selection. If that doesn’t help, use the VPN of the popular providers via a browser extension on Chrome, Firefox and Safari. If it still doesn’t work, contact customer service, which is available 24/7 with reputable providers.

Why shouldn’t I stream ServusTV and ORF with a free VPN service?

The free VPN services are mostly throttled versions of the renowned providers. The inadequate server selection in Austria can prevent you from being able to unblock ORF and ServusTV. Lack of data volume and data speed can be more than a nuisance when it comes to the streaming experience, especially for sports events.

How can I watch ORF TVthek from abroad?

With a VPN, you can also bypass the geoblocking of the ORF TVthek.

Is VPN usage legal?

Using a VPN is completely legal in Germany and in many other countries such as Switzerland and Austria. Using a VPN for illegal actions, such as downloading copyrighted media, is of course still illegal.

Which sports events can I stream for free through VPN Austria?

Formula 1, UEFA Champions League matches, UEFA Europa League matches, World Cup and European Championship football matches, Ice Hockey World Championship matches and many more winter sports.


Holger Schwichtenberg: New in .NET 7 [3]: UTF-8 string literals in C# 11

The new version of Microsoft’s programming language can create byte sequences in UTF-8 encoding from string literals.
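
A quick illustration (my own example, not from the linked article): in C# 11 the new u8 suffix turns a string literal into UTF-8 bytes, typed as ReadOnlySpan<byte>.

// C# 11: the u8 suffix compiles the literal directly to UTF-8 bytes.
ReadOnlySpan<byte> utf8 = "Hello, .NET 7!"u8;
Console.WriteLine(utf8.Length); // number of UTF-8-encoded bytes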

Code-Inside Blog: Use ASP.NET Core & React together

The ASP.NET Core React template


Visual Studio (at least VS 2019 and the newer 2022) ships with an ASP.NET Core React template, which is “ok-ish” but has some really bad problems:

The React part of this template is scaffolded via “CRA” (which seems to be problematic as well, but is not the point of this post) and uses JavaScript instead of TypeScript. Another huge pain point (from my perspective) is that the template uses some special configurations to just host the react part for users - if you want to mix in some “MVC”/”Razor” stuff, you need to change some of this “magic”.

The good parts:

Both worlds can live together: During development time the ASP.NET Core stuff is hosted via Kestrel and the React part is hosted under the WebPack Development server. The lovely hot reload is working as expected and is really powerful. If you are doing a release build, the project will take care of the npm-magic.

But because the “bad problems” outweigh the benefits, we try to integrate a typical React app into a “normal” ASP.NET Core app.

Step by Step

Step 1: Create a “normal” ASP.NET Core project

(I like the ASP.NET Core MVC template, but feel free to use something else)


Step 2: Create a react app inside the ASP.NET Core project

(For this blogpost I use the “Create React App”-approach, but you can use whatever you like)

Execute this in your ASP.NET Core project (Node.js & npm must be installed!):

npx create-react-app clientapp --template typescript

Step 3: Copy some stuff from the React template

The react template ships with some scripts and settings that we want to preserve:

[Screenshot]

The aspnetcore-https.js and aspnetcore-react.js files are needed to set up the ASP.NET Core SSL dev certificate for the WebPack Dev Server. You should also copy the .env & .env.development files into the root of your clientapp folder!

The .env file only has this setting:

BROWSER=none

A more important setting is in the .env.development file (change the port to something different!):

PORT=3333
HTTPS=true

The port number 3333 and the HTTPS=true setting will be important later, otherwise our setup will not work.

Also, add this line to the .env-file (in theory you can use any name - for this sample we keep it spaApp):

PUBLIC_URL=/spaApp

Step 4: Add the prestart to the package.json

In your project open the package.json and add the prestart-line like this:

  "scripts": {
    "prestart": "node aspnetcore-https && node aspnetcore-react",
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test",
    "eject": "react-scripts eject"
  },

Step 5: Add the Microsoft.AspNetCore.SpaServices.Extensions NuGet package


We need the Microsoft.AspNetCore.SpaServices.Extensions NuGet package. If you use .NET 7, use version 7.x.x; if you use .NET 6, use version 6.x.x – etc.

Step 6: Enhance your Program.cs

Add the SpaStaticFiles to the services collection like this in your Program.cs:

var builder = WebApplication.CreateBuilder(args);

// Add services to the container.
builder.Services.AddControllersWithViews();

// ↓ Add the following lines: ↓
builder.Services.AddSpaStaticFiles(configuration => {
    configuration.RootPath = "clientapp/build";
});
// ↑ these lines ↑

var app = builder.Build();

Now we need to use the SpaServices like this:

app.MapControllerRoute(
    name: "default",
    pattern: "{controller=Home}/{action=Index}/{id?}");

// ↓ Add the following lines: ↓
var spaPath = "/spaApp";
if (app.Environment.IsDevelopment())
{
    app.MapWhen(y => y.Request.Path.StartsWithSegments(spaPath), client =>
    {
        client.UseSpa(spa =>
        {
            spa.UseProxyToSpaDevelopmentServer("https://localhost:3333");
        });
    });
}
else
{
    app.Map(new PathString(spaPath), client =>
    {
        client.UseSpaStaticFiles();
        client.UseSpa(spa => {
            spa.Options.SourcePath = "clientapp";

            // adds no-store header to index page to prevent deployment issues (prevent linking to old .js files)
            // .js and other static resources are still cached by the browser
            spa.Options.DefaultPageStaticFileOptions = new StaticFileOptions
            {
                OnPrepareResponse = ctx =>
                {
                    ResponseHeaders headers = ctx.Context.Response.GetTypedHeaders();
                    headers.CacheControl = new CacheControlHeaderValue
                    {
                        NoCache = true,
                        NoStore = true,
                        MustRevalidate = true
                    };
                }
            };
        });
    });
}
// ↑ these lines ↑

app.Run();

As you can see, we run in two different modes. In our development world we just use the UseProxyToSpaDevelopmentServer method to proxy all requests that point to spaApp to the React WebPack DevServer (or something else). The huge benefit is that you can use the React ecosystem with all its tools. Normally we use Visual Studio Code to run our React frontend and use the ASP.NET Core app as the “backend for frontend”. In production we use the build artifacts of the React build and make sure that the index page is not cached. To make the deployment easier, we need to invoke npm run build when we publish this ASP.NET Core app.
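
As a tiny illustration of the “backend for frontend” idea (the controller name and route below are made up for this sketch and are not part of the template), a plain API controller in the hosting app could look like this, and the React app served under /spaApp could call it via fetch("/api/weather"):

using Microsoft.AspNetCore.Mvc;

[ApiController]
[Route("api/[controller]")]
public class WeatherController : ControllerBase
{
    // Returns a few hard-coded values – just enough to verify that the
    // React frontend can reach the ASP.NET Core backend.
    [HttpGet]
    public IEnumerable<string> Get()
        => new[] { "sunny", "cloudy", "rainy" };
}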

Step 7: Invoke npm run build during publish

Add this to your .csproj-file and it should work:

	<PropertyGroup>
		<SpaRoot>clientapp\</SpaRoot>
	</PropertyGroup>

	<Target Name="PublishRunWebpack" AfterTargets="ComputeFilesToPublish">
		<!-- As part of publishing, ensure the JS resources are freshly built in production mode -->
		<Exec WorkingDirectory="$(SpaRoot)" Command="npm install" />
		<Exec WorkingDirectory="$(SpaRoot)" Command="npm run build" />

		<!-- Include the newly-built files in the publish output -->
		<ItemGroup>
			<DistFiles Include="$(SpaRoot)build\**" />
			<ResolvedFileToPublish Include="@(DistFiles->'%(FullPath)')" Exclude="@(ResolvedFileToPublish)">
				<RelativePath>%(DistFiles.Identity)</RelativePath> <!-- Changed! -->
				<CopyToPublishDirectory>PreserveNewest</CopyToPublishDirectory>
				<ExcludeFromSingleFile>true</ExcludeFromSingleFile>
			</ResolvedFileToPublish>
		</ItemGroup>
	</Target>

Be aware that these instructions are copied from the original ASP.NET Core React template and slightly modified, otherwise the paths wouldn’t match.

Result

With this setup you can add any SPA you like to your “normal” ASP.NET Core project.

If everything works as expected you should be able to start the React app in Visual Studio Code like this:

[Screenshot]

Be aware of the https://localhost:3333/spaApp. The port and the name are important for our sample!

Start your hosting ASP.NET Core app in Visual Studio (or in any IDE that you like) and all requests that point to spaApp will use the WebPack DevServer in the background:

[Screenshot]

With this setup you can mix client- and server-side approaches as you like – mission accomplished – and you can use any client setup (CRA or anything else) you want.

Hope this helps!

Holger Schwichtenberg: New in .NET 7 [2]: Line breaks in interpolation expressions in C# 11.0

The string interpolation expressions enclosed in curly braces may now contain comments and line breaks.
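
A short illustration of my own (not from the linked article): in C# 11, the expression inside the braces of an interpolated string may span multiple lines and contain comments, which is handy for switch expressions.

// C# 11: line breaks and comments are allowed inside interpolation braces.
int points = 2;
var text = $"The team gets {points switch
{
    0 => "no points",
    1 => "one point",
    // everything above one point is treated the same here
    _ => "several points"
}}.";
Console.WriteLine(text);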

Thorsten Hans: Stream Lucifer Netflix Series with a VPN

Lucifer is a fictional Netflix series based on the DC Comics character of the same name. The show follows Lucifer Morningstar, the Devil, as he abandons his throne in Hell for Los Angeles, where he starts his own nightclub called Lux. While in LA, Lucifer becomes involved with the LAPD and helps solve crimes.

The show is available to watch on Netflix in most countries. However, due to licensing restrictions, it is not available in some countries. If you are unable to watch Lucifer on Netflix in your country, you can use a VPN to bypass these restrictions.

A VPN is a service that allows you to connect to the internet via a server run by a third party. This server acts as a middleman between you and the websites you visit. By connecting to a VPN server in a different country, you can make it appear as if you are located in that country. This allows you to access websites that are blocked in your country.

There are many VPNs available, and each has its own benefits and drawbacks. Here are some of the most popular VPNs:

– NordVPN: NordVPN is one of the most popular VPNs available. It offers high speeds and strong security features, making it a good choice for streaming Lucifer.

– ExpressVPN: ExpressVPN is another popular VPN with high speeds and strong security features. It is also easy to use, making it a good choice for beginners.

– IPVanish: IPVanish is a good option for those who want a VPN with a large selection of servers. It also offers strong security features.

– PureVPN: PureVPN is a good choice for those who want a low-cost VPN with good security features.

How to Watch Using a VPN?

To watch Lucifer using a VPN, you will need to connect to a VPN server that is located in the United States. This will allow you to bypass geographic restrictions and watch the series.

What Are the Benefits of Using a VPN?

The benefits of using a VPN include:

– increased security and privacy

– bypass geographic restrictions

– unblock websites and content

– anonymous browsing.

Using a VPN is a great way to watch content if it is blocked in your country. It allows you to bypass restrictions and access content from anywhere in the world. Additionally, VPNs provide a level of security and privacy that is not available when using public Wi-Fi networks. So, if you are looking for a way to stream your favourite shows, a VPN is the best option available.

The first two seasons of Lucifer are available on Netflix US, but due to licensing agreements, the third season is not available until May 8th. However, there is a way to watch Lucifer Season 3 on Netflix US using a VPN.

To watch Lucifer Season 3 on Netflix US using ExpressVPN, follow these steps:

1. Sign up for an ExpressVPN account here.

2. Download the ExpressVPN app for your device.

3. Connect to a US server.

4. Open Netflix and start watching Lucifer!


Holger Schwichtenberg: New in .NET 7 [1]: Raw string literals in C# 11.0

C# 11, released with the latest .NET release, offers a new, simple way to create strings with line breaks and indentation.
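
A brief illustration of my own (not from the linked article): a raw string literal is delimited by at least three double quotes and keeps line breaks and indentation without any escaping.

// C# 11: raw string literal – quotes and backslashes need no escaping.
var json = """
    {
      "name": "dotnet",
      "version": 7
    }
    """;
Console.WriteLine(json);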

Sebastian Seidel: 7 steps to migrate from Xamarin Forms to .NET MAUI

With the end of support for Xamarin approaching, developers are busy migrating existing Xamarin Forms projects to .NET MAUI as its successor. So are we, of course. In this article, I'll show 7 steps we've always had to take during the transition to make your transition easier.

Thorsten Hans: UEFA Champions League Free TV abroad

You want to watch the UEFA Champions League on free TV abroad? Then I’ll show you how it works with a good VPN like the one from NordVPN. Below are the foreign channels and streams that still broadcast the UEFA Champions League free and legally on free TV:

Watching Champions League free TV abroad is possible, but only with a good VPN

Every year the best soccer teams in Europe compete against each other to be crowned as Champions League winners. Of course, you’ll see the best clubs in the competition competing against each other. For example, Bayern Munich, Juventus Turin, Manchester United, Paris St. Germain, Manchester City, Real Madrid, Borussia Dortmund and FC Barcelona. Teams from England, Spain and Germany in particular regularly get very far in the tournament. It is always the most exciting soccer tournament of the year, where no soccer fan wants to miss the games of their stars.

Soccer has become a big business. If you want to watch top soccer as a soccer fan, you have to pay a lot of money for pay TV and stadium visits. That’s why I did a little research on where you can still watch UEFA Champions League abroad on free TV. With the free TV channels listed above, you can watch it legally and for free with a good VPN.

Guide: How to watch Champions League abroad for free

Time needed: 15 minutes.

Here’s a quick guide on how to watch Champions League matches on free TV abroad.

  1. Get a reliable VPN

    First of all, you need to know which streaming provider you want to use. Choose from the list of free TV channels above. Before that, you should choose a reliable VPN with which you want to watch the Champions League. I advise you to use NordVPN or CyberGhost. I use both myself all the time and it works perfectly.

  2. Install the VPN on your devices and connect

    Decide on which devices you want to watch Champions League on free TV abroad. Install the VPN on all your devices – whether Android, Windows, macOS, iOS or Linux.

  3. Connect to the VPN server

    Once the VPN software is installed on your device, you can connect to your destination country. In the list, the VPN server countries are in brackets after the free TV channels.

  4. Visit the streaming provider

    Once you have dialed in to the right country, simply visit the streaming provider’s website.

Champions League abroad for free on free TV

In some countries, Champions League matches continue to be shown free of charge. ServusTV Austria (servustv.com), for example, shows certain matches on Wednesdays. Especially in the preliminary round, the channel focuses on teams from its own country, but also on those with Austrian players or coaches. However, the broadcasts are subject to geoblocking and you need an Austrian IP address. With a good VPN like NordVPN or CyberGhost, this is no problem. You can use it to unblock the channel.

However, Champions League free TV abroad is also possible via other broadcasters. RTL Luxembourg or Belgium also stream various games for free. However, you need an IP address in Luxembourg or Belgium.

Why is it so complicated?

OK, it’s actually not as complicated as it sounds. The software of the best VPN providers is very user-friendly and even technically less experienced people can cope with it.

The problem at this point is called geoblocking. The broadcasters and streaming providers in the individual countries only have the license to broadcast the respective Champions League game in a certain region.

However, based on your IP address, the streaming providers know in which country you are located. If you come from England, for example, and want to stream ServusTV Austria, the service will block you, stating that the desired broadcast cannot be transmitted for legal reasons.

If you are on vacation in Austria, Luxembourg, Belgium or another country with free Champions League broadcasts on free TV, the situation is of course different. If you use the Wi-Fi in your Airbnb or hotel, or a SIM card from the corresponding country, you will get a correct IP address and the broadcasters will not block you anyway.

However, if you are not in the corresponding country and connect to a server in Austria, for example, and then visit the website of the Austrian streaming provider, the provider thinks you are physically there. Now the regional block is lifted and you can stream Champions League on free TV.

What can I do if it does not work right away?

There are a few reasons why streaming does not work despite VPN and also a few solutions.

If you are using Android or iOS, the mobile apps of the respective streaming providers often work better than browsers.

With various streaming providers, some browsers have problems and in this case just try another internet browser. It can also help to delete cookies and cache or to use the incognito mode. In Firefox this is called private window.

It also happens that your server is unmasked. Disconnect and connect to another server. This has also helped me.

The best VPN services also provide browser extensions that act as a proxy. This often works better. Proxies aren’t as secure, but they are faster and when streaming you want speed.

The VPN protocol can also play a role. Change it if it doesn’t work at all. You can change it with the best services directly in the app with a few clicks.

The fact is, if you’re smart and look around a bit, you can watch a lot of interesting Champions League games for free – it’s very easy to save money here!

FAQ – Questions and answers about streaming

You still have some questions about the Champions League on free TV? Maybe you’ll find the right answers in this section.

Can I watch Champions League for free?

Yes, this is possible, but not everywhere anymore. There are several foreign broadcasters that stream UEFA Champions League matches on free TV. At the beginning of the article I have put a list of the channels that I have found.

Which channels show CL for free?

ZDF (Champions League final only) (VPN Germany)
ServusTV Austria (VPN Austria)
RTL Luxemburg (VPN Luxembourg)
RTL Zwee (VPN Luxembourg)
Canale 5 (VPN Italy)
Club RTL (VPN Belgium)

Is it legal to watch Champions League with a VPN for free?

Many believe that it is not illegal to bypass geoblocking and watch Champions League on free TV via foreign countries. You are almost certainly violating the terms of use of the streaming providers. It’s best to find out for yourself how this is regulated in your location.


Thorsten Hans: Watch Formula 1 for free – abroad via Free TV

In Belgium, Luxembourg, Austria and Switzerland, Formula 1 on free TV is still possible. With a reliable VPN, you can access these channels from anywhere to stream Formula 1 abroad on free TV for free. NordVPN* is my favorite here, because with it the free F1 streaming always works for me.

These channels show Formula 1 on free TV:

You can watch Formula 1 for free – for example via ORF / ServusTV or SRF

Perhaps a few more important notes. ServusTV and ORF take turns showing the races. SRF, on the other hand, shows all races, but you have to look up which channel they are on. This can be SRF 2 or SRF Info. However, all of these websites offer a TV guide, so you can quickly find out. RTL Luxembourg also broadcasts the races and even records them. You can stream the last race for a week as a repeat. RTL Play in Belgium shows the F1 races with French commentary.

If you are traveling and on vacation in Switzerland, Luxembourg, Belgium or Austria, you can simply stream the races. The above-mentioned broadcasters have the license to officially broadcast the races in their country on free TV. If you are not on location, you will get a message that the respective content may not be broadcast for legal reasons – this is the so-called geoblocking, which you can bypass; I use NordVPN* or CyberGhost* for this.

How to stream Formula 1 abroad for free on free TV?

Time needed: 15 minutes.

Follow my simple instructions and you will be able to watch the races you want online for free abroad without any problems.

  1. Get a VPN

    Subscribe to a VPN with servers in Austria, Luxembourg, Belgium or Switzerland. It depends on which broadcaster you prefer to watch. The best VPN services offer servers in these countries anyway. The providers I recommend, NordVPN* and CyberGhost*, provide apps for Android, Windows, macOS, iOS, and Linux.

  2. Install the VPN on your devices

    Once you have decided, download the appropriate VPN apps and install them on your device. Open the app and log in with your user data.

  3. Connect to one of the servers

    At this point, it depends on whether you want to stream RTL / RTL Play / ORF / ServusTV or SRF. Connect to the appropriate server in the country where you want to stream Formula 1 for free on free TV.

  4. Open the streaming provider

    Now open one of the possible F1 streams. On SRF, the races are usually broadcast on SRF 2, possibly also on SRF Info. Sometimes you have to change the channel during a Grand Prix, but the commentators announce this in the live stream.

Pro tip: You can also watch the races on free TV via RTL Luxembourg (rtl.lu). There you can watch the repeat of the last Formula 1 race for a whole week. So if you missed a race or it was broadcast at an inconvenient time for you, you can watch the Formula 1 replay there for free. However, you need an IP address in Luxembourg and you have to connect to a corresponding server.

Is watching Formula 1 abroad for free legal?

Various specialist lawyers are convinced that you are not doing anything illegal when circumventing geoblocking. You are most likely violating the terms of use of the respective streaming provider. However, you don’t have to register with any of these broadcasters, and you probably won’t be prosecuted anyway.

Instead, the streaming providers rely on so-called geographical blockades and try to detect and block your VPN. That’s why you need a reliable VPN provider with many good servers.

However, there are countries where VPNs are prohibited. What I would like to say at this point: Find out for yourself what is allowed in your location and what is not. The fact is that you can stream Formula 1 abroad via free TV, as long as you are in one of these countries.

The F1 stream with VPN does not work – solutions

Here are some tips. If the Formula 1 stream on free TV abroad does not work, this can have several causes.

Possibly, the streaming provider has a problem. This rarely happens, but it is a possibility. Maybe there is a technical problem that you are powerless against. In this case, you can only wait or switch to another channel.

Sometimes certain streams do not work with various browsers. Try another browser and maybe that will solve your problem.

Providers are always eager to detect VPNs. It happens that a server is unmasked and then the broadcast is blocked. If this is the case, simply change the server. Disconnecting and reconnecting often solves the problem.

The best VPNs offer multiple protocols. It may be that some VPN protocols don’t work well, so try changing the protocol in the app’s settings. You can also install the browser extension of the service. These are mostly proxies and they are great for streaming.

FAQ – Frequently asked questions and answers about streaming F1 for free

The bottom line is that it’s pretty easy to watch Formula 1 on free TV from anywhere if you know the right trick with VPN.

Where can I stream Formula 1 for free?

You can watch Formula 1 abroad for free on all the channels mentioned above. All races are geoblocked and therefore you need a VPN with servers in the corresponding country.

Where can I watch Formula 1 replays for free?

RTL Luxembourg shows the replay of the last race, but only for one week – at least for now. This is quite useful when races are broadcast at night or very early in the morning. You don’t have to stay awake or get up early, just watch the Grand Prix when it suits you.

Can I watch F1 with a free VPN?

I highly doubt it. All free VPNs have limitations. Some offer only a few megabytes of data volume per month and with that you can’t stream a complete race. Others don’t have the servers you need. Better to get an inexpensive premium service and save yourself the annoyance.


Thorsten Hans: Stream Six Nations Rugby for free abroad

As a rugby fan, you can’t miss the Six Nations, the most important rugby tournament of the year. I’ll show you how you can stream the Six Nations for free abroad. For this you need a good VPN like the one from NordVPN. Licensing rights, pay-TV and geoblocking are the reasons why you can’t easily stream the Six Nations at home for free. More about this later.

These foreign streams broadcast the Six Nations on free TV:

Country (VPN server) – Channel – Language

  • England – BBC iPlayer + ITV – English
  • Italy – DMAX – Italian
  • France – France 2 – French
  • Ireland – RTE or Virgin – English

How to stream Six Nations Rugby for free abroad

You want to stream the rugby tournament of the year for free? I’ll explain it to you with the solution via BBC and ITV in England:

Time needed: 15 minutes.

Watch all Six Nations rugby matches – it’s free:

  1. You need a VPN that can bypass geoblocking

    Get a reliable VPN like NordVPN or CyberGhost – these two services are known to work around geoblocking very well. ITV and BBC broadcast the games only in England and that’s why you need a VPN with local servers. The same applies analogously to Italy, France and Ireland.

  2. Connect to a server

    The next step is to connect to one of these servers in England. This will give you a local English IP address and it will look like you are on site.

  3. Find out which channel shows which game

    BBC (https://www.bbc.com/) and ITV (https://www.itv.com/) usually show the matches in rotation. Find out in time, who broadcasts which match of the Six Nations. With both broadcasters you have to create an account and register for free. Registration requires you to enter a zip code in England… pick one!

  4. Finished!

    Now you can start the free stream of the Six Nations.

Which countries participate in the Six Nations?

The Six Nations brings together the best teams from Europe. England, Wales, Ireland, Scotland, France and Italy fight each year for the coveted rugby crown. Italy is the biggest underdog in this tournament, but they are increasingly causing big surprises. Italy is improving year by year and I am curious to see who they will upset this year.

The rules of the Six Nations – Rugby rules explained in brief

You are interested in the tournament, but you have some gaps in the rules? Here, briefly, is the scoring system of the rugby tournament, as it can cause some confusion:

  • Four points are awarded for a win.
  • In case of a draw, each team gets two points.
  • A team gets a try bonus point if it scores 4 or more tries in a game.
  • If a team loses by 7 or fewer points, the team receives a losing bonus point.
  • If a team manages a Grand Slam, i.e. wins all 5 games, then it gets 3 extra points.

The bonus points system is designed to make teams play as offensively as possible and not just give up when victory seems hopeless.

The best VPNs to watch Six Nations for free

In principle, any VPN that can successfully bypass geoblocking of one of the channels listed above will work. However, the service should deliver high speeds, otherwise Six Nations streaming is no fun. Below are two VPN providers that I have had excellent experiences with when streaming Six Nations for free.

NordVPN

The provider is perfect for streaming the Six Nations for free via the channels listed above abroad. The VPN unlocks all of the above channels and geoblocking is no longer a problem. I tested it myself with all variants and it works flawlessly.

NordVPN allows the connection of 6 devices at the same time. But it also allows you to use it on a router, which allows you to connect devices like smart TVs and game consoles to the VPN.

The service supports all popular operating systems: Windows, Android, iOS, macOS and even Linux – including Raspberry Pi.

NordVPN has an adblocker that also protects against malware, phishing and trackers.

Another special feature of NordVPN is its obfuscated servers. These allow the service to work even in countries with VPN blocks, such as China, Turkey, Egypt, and Russia.

The Kill Switch protects your devices in case the connection to the VPN fails accidentally. The app will immediately disconnect your Internet connection until a connection with a VPN server is restored.

You can even try NordVPN for free and risk-free because it comes with a 30-day money-back guarantee.

CyberGhost

CyberGhost is cheaper than NordVPN, but can bypass geoblocking just as well. You can also use it to unblock the above mentioned channels to watch Six Nations for free abroad without any problems.

CyberGhost allows 7 simultaneous device connections. Of course, you may also use this provider on a router to connect your Playstation, Xbox or Smart TV to it.

CyberGhost offers one of the best and most user-friendly Android apps I know of. Besides WireGuard, CyberGhost also offers OpenVPN and if you use the latter VPN protocol as TCP, stealth mode is automatically enabled.

Otherwise, there are apps for all popular operating systems: Android, iOS, Windows, macOS and Linux. There is even a GUI for the latter, and that is rather rare.

CyberGhost also has an adblocker that protects against other cyber threats – phishing, trackers and malware.

CyberGhost also offers a money-back guarantee. This is valid for 45 days.

FAQ – frequently asked questions about the Six Nations

Can I stream Six Nations Rugby for free?

Yes, you can. This works with the TV channels listed above abroad.

Can I watch Six Nations Rugby with a free VPN?

Probably not, and if so, then only in extremely poor quality. We run into several problems here. Most free VPNs limit the data volume and it is not enough to watch a complete game. Other free VPNs throttle the bandwidth, so streaming is not possible without annoying buffering interruptions. Another hurdle is that free VPNs only offer servers in a few countries.

Is it legal to stream the Six Nations with a VPN?

You are most likely violating the terms of use of some streaming providers. In the worst case, this will lead to an exclusion from the broadcaster. However, with a VPN you are anonymous on the Internet.
VPNs are completely legal in Germany, Austria, Switzerland and Luxembourg. However, they are not a license to break the law. Even if you use a VPN, you must comply with the respective legislation.
However, there are countries where VPNs are illegal or restricted. These include China, Egypt, Turkey, Russia, Iran and so on. If you’re traveling abroad, check the laws before you go – they’re known to change.

Where will the Six Nations be broadcast for free?

I only found free legal streams in England, France, Ireland and Italy. I have listed the respective channels of the free TV countries in the list at the beginning of the article.

Can I stream Six Nations on my phone or tablet?

Of course it works. Either you open the websites of the services, which are of course optimized for mobile devices, or you get the corresponding apps.
With a VPN, you can watch Six Nations for free on the go and never miss a game.


Thorsten Hans: Modern Family – How to Watch Abroad?

Modern Family is a modern-day family sitcom that originally aired on ABC in 2009. The show follows the lives of three generations of a fictional family, the Dunphys.

However, Modern Family is not available on Netflix in all countries. If you’re located in a country where Modern Family is not available on Netflix, you can use a VPN to watch it.

VPNs are online services that allow you to conceal your real IP address and encrypt your traffic. This means that you can bypass Netflix’s geographic restrictions and watch Modern Family no matter where you are located.

Modern Family has become very popular on Netflix. However, some people are not able to watch it because their location does not allow them to access the content. A VPN can be used to change a person’s IP address so they can watch Modern Family from anywhere. There are many different types of VPNs, and each one has its own benefits.

A VPN is a great way to keep your information safe when you are online. It can also help you get around content restrictions. Some of the benefits of using a VPN include:

– Increased privacy and security – When you use a VPN, your traffic is encrypted, which means that it is much harder for someone to track your online activities.

– Access to blocked content – If you are trying to watch Modern Family from a location that doesn’t allow it, a VPN can help you get around those restrictions.

– Reduced risk of being hacked – A VPN can help protect your devices from being compromised by hackers.

– Improved connection speeds – Some VPNs can improve your connection speeds, which can be helpful if you are trying to stream content or play games online.

When choosing a VPN, it is important to consider the different options available to you. There are many different providers, and each one offers a unique set of features. Some of the things you should consider include:

– Price – VPNs can be expensive, so it is important to find one that fits your budget.

– Number of devices supported – Not all VPNs support the same number of devices, so you will want to make sure the one you choose can be used on all of your devices.

– Location – Some VPNs are only available in certain locations, so you will want to make sure the one you choose covers the area you need it to.

– Bandwidth – The amount of bandwidth that a VPN provides can vary, so you will want to make sure you have enough bandwidth to meet your needs.

If you are looking for a VPN to watch Modern Family, there are a few things to keep in mind. First, decide what features are important to you and then find a provider that offers those features. Second, make sure the provider is trustworthy and has a good reputation. Finally, read the terms of service carefully to make sure you understand what you are getting into. By following these tips, you can find the perfect VPN for your needs.

What VPNs Are There?

There are many different VPN providers out there, each with its own unique features and benefits. Some of the most popular VPN providers include ExpressVPN, NordVPN, and CyberGhost.

How to watch Modern Family using a VPN?

Once you have signed up for a VPN service, you will need to download and install the VPN software. Then, open the VPN software and connect to a server in the US. Once you are connected, you can open Netflix and watch Modern Family. Note that some VPN providers may slow down your internet connection, so you may want to test out a few different servers before settling on one.

So if you’re looking for a way to watch Modern Family outside of the United States, using a VPN is the best option. VPNs are easy to use and provide a lot of benefits, such as privacy and security. So don’t miss out on Modern Family – sign up for a VPN today!


Code-Inside Blog: Your URL is flagged as malware/phishing, now what?

Problem

On my last day of work in 2022 – Friday, December 23 – I received a support ticket from a customer saying that our software seemed to be offline and that our servers were not responding. I checked our monitoring and the customer’s server side and everything was fine. My first thought: maybe a misconfiguration on the customer side, but after a remote support session with the customer, I saw that it “should work”, but something in the customer’s network was blocking the requests to our services. Next thought: firewall or proxy stuff. Always nasty, but we are just using port 443, so nothing too special.

After a while I received a phone call from the customer’s firewall team and they discovered the problem: they are using a firewall solution from “Check Point” and our domain was flagged as “phishing”/“malware”. What the… They even created an exception so that Check Point doesn’t block our requests, but then the next problem occurred: the customer’s “Windows Defender for Office 365” had the same “flag” for our domain, so they reverted everything, because they didn’t want to change their settings too much.


Be aware that from our end everything was working “fine”: I could access the customer’s services and our Windows Defender didn’t have any problems with this domain.

Solution

Somehow our domain was flagged as malware/phishing and we needed to change this false positive listing. I guess there are tons of services that track “bad” websites and maybe they are all connected somehow. From this incident I can only suggest:

If you have trouble with Check Point:

Go to “URLCAT”, register an account and try to change the category of your domain. After you submit the “change request” you will get an email like this:

Thank you for submitting your category change request.
We will process your request and notify you by email (to: xxx.xxx@xxx.com ).
You can follow the status of your request on this page.
Your request details
Reference ID: [GUID]
URL: https://[domain].com
Suggested Categories: Computers / Internet,Business / Economy
Comment: [Given comment]

After ~1-2 days the change was done. Not sure if this is automated or not, but it was during Christmas.

If you have trouble with Windows Defender:

Go to “Report submission” in your Microsoft 365 Defender setting (you will need an account with special permissions, e.g. global admin) and add the URL as “Not junk”.


I’m not really sure if this helped or not, because we didn’t have any issues with the domain itself and I’m not sure if those “false positive” tickets bubble up into a “global defender catalog” or if this only affects our own tenant.

Result

Anyway – after those tickets were “resolved” by Check Point / Microsoft, the problem on the customer side disappeared and everyone was happy. This was my first experience with such a “false positive malware report”. I’m not sure how we ended up on such a list and why only one customer was affected.

Hope this helps!

Christina Hirth : (Data) Ownership, Boundaries, Contexts – what do these words mean?

In the last months, we started to use these terms more and more at my company without discussing the concepts behind them. One day I was asked, “What do you mean by data ownership?” 🤔 The question made me realise that I don’t know how well these concepts are understood.

These terms refer to sociotechnical concepts (some originating from Domain-driven design). They refer to one possible answer to the question: how can a product be improved and maintained in the long term? How can we avoid hunting for weeks for bugs, understanding what the code does, finding out what it should do, and hoping that fixing one issue does not lead to a new problem? How can we continue having fun instead of becoming more and more frustrated?

Real digital products address needs which were fulfilled earlier manually. Companies which survived the first years of testing the product are often innovators in their market. They have chances to stay ahead of the others, but they have the burden of solving all questions themselves. I don’t mean the technical questions; nowadays, we have a considerable toolbox we can use. But all the competitors have that toolbox too. The questions to answer are how to organise in teams and how to organise the software to reach a steady pace without creating an over-complicated, over-engineered or over-simplified solution.

How to get a grip on the increasing complexity built up in those years when the only KPI that mattered was TTM (Time-to-Market)?

Years ago, the companies creating software to help automate work answered this question with silos around the architecture: frontend, backend, processing, etc. In the meantime, it became clear that this was not good enough.

Engineers are not hired to type code but to advise and help to solve problems. 

This means they should not belong to an engineering department anymore but be part of teams around different topics to handle: marketing, search, checkout, you name it. These are sub-domains or bounded contexts (depending on the importance of the subject, more than one bounded context can build the solution for the same sub-domain). These contexts and their boundaries are not fixed forever because the context changes, the market around the company changes, and the needs change. The people involved change and, finally, the effort needed changes. The best way also to define them is to take a look at how the business is organised (sales, marketing, finance, platform, developer experience, etc.) and how the companies using the product are organised (client setup, client onboarding, employee onboarding, payroll period, connected services, etc.). By aligning the software and – to get the most significant benefit – the teams to these sub-domains, you can ensure that the cognitive load for each team is smaller than the sum of all.

What are the benefits?

  • The domain experts and the engineers speak the same language, the ubiquitous language of their bounded context, to use the DDD terms.
  • The teams can become experts in their sub-domain to make innovation and progress easier as the problems are uncovered one after another. They can and will become responsible and accountable for their domain because they are the only ones enabled to do so.
  • Each team knows who to contact and with whom to collaborate because the ownership and the boundaries are clear. (No long-running meetings and RFCs anymore by hoping to have reached everyone involved).

What does data ownership mean in this case? Data ownership is not only about which team is the only one controlling how data is created and changed, but also about which data is shared and which remains an implementation detail. This way, they stay empowered and autonomous: they can decide about their experiments, reworks, and changes inside their boundaries.

Data ownership also means process ownership. 

It means that the team which owns the data around “expenses”, for example, also owns the features around this topic – what is implemented and when – so that they are involved in every improvement or change regarding expenses from the beginning. This is the only way to respect the boundaries, take responsibility, and be accountable for all decisions around the software the engineers create.

Applying these concepts can’t be done overnight, mainly because it is not only about finding the (currently) good boundaries but also shifting the mindset from “let me build something” towards “I feel responsible for this part of my product”. It needs knowledge about the product and a lot of coaching and care. But finding the boundaries to start with should be doable in case of a product already established on the market and with a clear strategy. The alternatives are silos, continuously increasing cognitive load or the loss of an overview and local optimisations.

Martin Richter: PTZControl now supports up to 3 cameras

It amazes and pleases me that my little helper tool is being put to good use and that there are actually still feature requests.

I released a new version today.

  • Support for up to a maximum of 3 cameras.
  • The layout adapts to the number of cameras so that as little screen space as possible is used.
  • Supported cameras are now: Logitech PTZ 2 Pro, PTZ Pro, Logitech Rally and the ConferenceCam CC3000e cameras.

Here is the link to my repository with the latest version:
https://github.com/xMRi/PTZControl



Christina Hirth : KanDDDinsky 2022 Watch-List

This is the list of the sessions I watched, some with additional insights, others as a resource. All of them are recommended if the topic is interesting to you.

All sessions recorded during the conference can be viewed on the KanDDDinsky YouTube channel.

Keynote By Mathias Verraes about Design & Reality

Thought-provoking, like all talks I saw from Mathias.

Connascence: beyond Coupling and Cohesion (Marco Consolaro)

An interesting old concept regarding cohesion and good developer practices. Fun fact: I had never heard of Connascence before, but then heard about it twice at this conference 😀.

Learn more about this from Jim Weirich’s “Grand Unified Theory of Software Design” (YouTube). It is a clear recommendation for programmers wanting to learn how to reduce coupling.

Architect(ure) as Enabler of Organization’s Flow of Change (Eduardo da Silva)

The evolution of the rate of change in time

“The level and speed of innovation has exploded, but we still have old mental models when it comes to organisations” – Taylorism says hello 🙁

Evolution pattern depends on architectural and team maturity.

“There is no absolute wrong or right in the organisational model of the architecture owners; it is contextual and depends on the maturity.”

This talk is highly recommended if you work in or with big organisations.

Systems Thinking by combining Team Topologies with Context Maps (Michael Plöd)

A lot of overlapping between Team Topologies and DDD

💯 recommended! (The slides are on speakerdeck.)

Road-movie architectures – simplifying our IT landscapes (Uwe Friedrichsen)

There will always be multiple architectures.

“The architecture is designed for 80-20% of the teams, and it is ignored by 80-20% of them.”

The complexity trap

Uwe describes his concept-in-evolution of a desirable solution that could help avoid the different traps. They should be

  • collaborative and inclusive,
  • allowing to travel light with the architecture,
  • topical and flexible

The concept is fascinating, with a lot of good heuristics. A clear recommendation 👍

How to relate your OKRs to your technical real-estate (Marijn Huizenveld)

Common causes of failure with OKRs
Combine OKRs with Wardley Maps

The slides are on speakerdeck. Marijn is a great speaker; the talk is recommended if you work with OKRs.

Improving Your Model by Understanding the Persona Behind the User (Zsofia Herendi)

Salesforce study: 76% of customers expect companies to understand their needs and expectations.

😱 what about the remaining 24%?! Do they not even expect to get what they need?

Zsofia gives a lot of good tips about visualising and understanding the personas.

Balancing Coupling in Software Design (Vladik Khononov)

Maths meet physics meet software development – yet again, a talk from Vladik, which must be seen more than once.

The function for calculating the pain due to coupling.

By reducing one of these elements (strength, volatility, distance) to 0, the maintenance pain due to coupling can be reduced to (almost) 0. Now we know what we have to do 😁.

Culture – The Ultimate Context (Avraham Poupko)

Why doesn’t the DDD community have any actual conflicts? Because our underlying concept is to collaborate – to discuss, challenge, decide, agree, commit (even if we disagree) and act.

 

This talk is so “beautiful” (I know, it is a curious thing to say), so overwhelming (because of this extraordinary speaker 💚), it would be a failure even to try to describe it! It is available, go and watch it if you want to understand the DDD community.


This list is just a list. It won’t give you any hints about the hallway conversations which happen everywhere, about the feeling of “coming home to meet friends!” which I got each year, and I won’t even try 🙂

Code-Inside Blog: SQLLocalDb update

Short Intro

SqlLocalDb is a “developer” SQL server, without the “full” SQL Server (Express) installation. If you just develop on your machine and don’t want to run a “full blown” SQL Server, this is the tooling that you might need.

From the Microsoft Docs:

Microsoft SQL Server Express LocalDB is a feature of SQL Server Express targeted to developers. It is available on SQL Server Express with Advanced Services.

LocalDB installation copies a minimal set of files necessary to start the SQL Server Database Engine. Once LocalDB is installed, you can initiate a connection using a special connection string. When connecting, the necessary SQL Server infrastructure is automatically created and started, enabling the application to use the database without complex configuration tasks. Developer Tools can provide developers with a SQL Server Database Engine that lets them write and test Transact-SQL code without having to manage a full server instance of SQL Server.

Problem

(I’m not really sure how I ended up on this problem, but after I solved it I put it on my “To Blog” bucket list)

From time to time there is a new SQLLocalDb version, but upgrading an existing installation is a bit “weird”.

Solution

If you have installed an older SQLLocalDb version, you can manage it via the sqllocaldb command line tool. If you want to update, you must first delete the “current” MSSQLLocalDB instance.

To do this, use:

sqllocaldb stop MSSQLLocalDB
sqllocaldb delete MSSQLLocalDB

Then download the newest version from Microsoft. If you choose “Download Media” you should see something like this:

[Screenshot]

Download it, run it and restart your PC, after that you should be able to connect to the SQLLocalDb.
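
If you want a quick smoke test from code (just a sketch; it assumes the Microsoft.Data.SqlClient NuGet package is installed), you can connect to the default instance like this:

using Microsoft.Data.SqlClient;

// Connect to the default LocalDB instance and print the engine version.
using var connection = new SqlConnection(@"Server=(localdb)\MSSQLLocalDB;Integrated Security=true;");
connection.Open();
Console.WriteLine(connection.ServerVersion);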

We solved this issue with the help of this blog post.

Hope this helps! (and I can remove it now from my bucket list \o/ )

Christina Hirth : Paper on Event-Driven Architecture

Programming Without a Call Stack – Event-driven Architectures
by Gregor Hohpe (2006), shared by Indu Alagarsamy at KanDDDinsky 2019.

Holger Schwichtenberg: On a personal note: books on C# 11.0, Blazor 7.0 and Entity Framework Core 7.0

The Dotnet-Doktor has already brought his book series up to date with the final version of .NET 7.0.

Jürgen Gutsch: Windows Terminal, PowerShell, oh-my-posh, and Winget

I'm thinking about changing the console setup I use for some development tasks on Windows. The readers of this blog already know that I'm a console guy. I'm using git and docker in the console only. I'm navigating my folders using the console. I even used the console to install, update or uninstall tools using Chocolatey (https://chocolatey.org/).

This post is not a tutorial on how to install and use the tools I'm going to mention here. It is just a small portrait of what I'm going to use. Follow the links to learn more about the tools.

PowerShell and oh-my-posh

Actually, working in the console doesn't work for me with the regular cmd.exe and I completely understand why developers on Windows still prefer using windows based tools for git and docker, and so on. Because of that, I was using cmder (https://cmder.app/), a great terminal with useful Linux commands and great support for git. The git support not only integrates the git CLI, but it also shows the current branch in the prompt:

Cmder in action

The latter is a great help when working with git; I missed that in the other terminals. Cmder also supports adding different shells like git bash, WSL, or PowerShell, but I used the cmd shell, which has been enriched with a lot more useful commands. This worked great for me.

For a couple of weeks, I have been playing around with the Windows Terminal a little more. The reason why I looked into the Windows Terminal is that I like its more lightweight settings.

The Windows Terminal (download it from the Windows Store) and oh-my-posh (https://ohmyposh.dev/) have been out for a while, and I followed Scott Hanselman's blog posts about them for a long time but wasn't able to get the setup running on my machine. Two weeks ago I got some help from Jan De Dobbeleer to get it running. It turned out that I had too many posh versions installed on my machine and the path environment variable was messed up. After cleaning my system and reinstalling oh-my-posh by following the installation guide, it is working quite well:

Terminal and posh in action

I still need to configure the prompt a little bit to match my needs 100%, but the current theme is great for now and does more than cmder did. I'd like to display the latest tag of the current git repository and the currently used dotnet SDK version, but that will be another story.

Windows Terminal

In the Windows Terminal, I configured oh-my-posh for both Windows PowerShell 5 and the new PowerShell 7, and set PowerShell 7 as my default shell. I also added configurations to use PowerShell 5, WSL (both Ubuntu 18 and Ubuntu 20), git bash, and the Azure Cloud Shell. I did almost the same with cmder, but I like the way it gets configured in Windows Terminal.

Winget

Winget is basically an apt-get for Windows, and I like it.

As mentioned, Chocolatey is the tool I used to install the tools I need, like git, cmder, etc. After winget was mentioned on Twitter (unfortunately I forgot the link), I tried it for a while. Actually, it is much better than Chocolatey because it uses the application registry of Windows, which means it can update and uninstall programs that have been installed without using winget.

Winget is the console way of installing and managing installed programs on Windows, and it is natively installed on Windows 10 and Windows 11.

Conclusion

So I'm going to change my setup from this ...

  • cmder
    • cmd
    • chocolatey

... to that ...

  • Windows Terminal
    • PowerShell 7
    • oh-my-posh
    • Winget

... and it seems to work great for me.

Any other tools that I should have a look at? Just drop me a comment :-)

Holger Schwichtenberg: .NET 7 will be released on November 8 as part of the .NET Conf 2022

During the developer conference .NET Conf 2022 next week, Microsoft will release the production-ready versions of .NET 7.0 and C# 11.0.

Jürgen Gutsch: ASP.NET Core Globalization and a custom RequestCultureProvider [Updated]

In this post, I'm going to write about how to enable and use Globalization in ASP.NET Core. Since you can't change the culture depending on route values by default, I'll show you how to create and register a custom RequestCultureProvider that does this job.

UPDATE:

Hisham Bin Ateya pointed me to the [fact via Twitter](TWITTER STATUS) that there already is a RequestCultureProvider that can change the culture depending on route values in ASP.NET Core. Because of that, please see the last section of this blog post just as an example of how to create a custom RequestCultureProvider.

I also restructured the post a little bit to separate the general information about Globalization from the RequestCultureProvider part. If you are familiar with Globalization, just skip the first sections and jump to the second-to-last section.

About Globalization

Resources Files

Like in the old days of the .NET Framework, the resources (strings, images, icons, etc.) for different languages are stored in so-called resource files that end with .resx and are stored in a folder called Resources by default.

Unlike in the good old days of the .NET Framework, the right resource files are fetched automatically by the implementation of the specific localizer, as long as you follow some naming conventions.

  • If you inject the Localizer into a controller, the resource file should be named like Controllers.ControllerClassName.[Culture].resx or put into a subfolder called Controllers and named ControllerClassName.[Culture].resx.
  • If you inject the Localizer into a view, it is almost the same as for the controllers. The only difference is that the resource path contains the view name instead of a controller name: Views.ControllerName.ViewName.[culture].resx or Views/ControllerName/ViewName.[culture].resx.

It is up to you to decide how you want to structure your resource files. Personally, I prefer the folder option. Also, an autogenerated code file as you might know from the past is no longer needed, since you use a localizer to access the resources.

Unfortunately, there is no way yet to add a resource file via the .NET CLI. Maybe there will be a template in the future. I created the resource file with Visual Studio 2022 and copied it to create the other files needed.

Localizers

You no longer need to use the resource manager to read the actual localized strings from the resource files. You can now use an IStringLocalizer or an IHtmlLocalizer. The latter doesn't HTML-encode the strings that are stored in the resource files and can be used to localize strings that contain HTML code, if needed:

using Microsoft.AspNetCore.Mvc;
using Microsoft.Extensions.Localization;

namespace Internationalization.Controllers;

public class HomeController : Controller
{
    private readonly IStringLocalizer<HomeController> _localizer;

    public HomeController(IStringLocalizer<HomeController> localizer)
    {
        _localizer = localizer;
    }

    public IActionResult Index()
    {
        return View(new { Title = _localizer["About Title"] });
    }
}

Neither the resource key named "About Title" nor even the resource file needs to exist. If the localizer doesn't find the key, the key itself gets returned as a string. You can use any kind of string as a key. This can help you to develop the application without having the resource files in place.
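The localizer indexer also accepts format arguments for resource values that contain placeholders. A minimal sketch inside the controller above (the resource key "Greeting" is just an assumption for illustration):

public IActionResult Welcome()
{
    // "Greeting" is assumed to be a resource key whose value contains a placeholder,
    // e.g. "Hello {0}!". The arguments are formatted into the localized string.
    var message = _localizer["Greeting", User.Identity?.Name ?? "guest"];
    return View(new { Title = message });
}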

You can even inject a localizer in the Razor View like this:

@using Microsoft.AspNetCore.Mvc.Localization
@inject IViewLocalizer Localizer

@model HomeIndexViewModel
@{
    ViewData["Title"] = Localizer["Title"];
}

<h1>@ViewData["Title"]</h1>

In this case, it is an HTML localizer, so the resource value can also contain HTML that doesn't get encoded when it is written out to the view. Even though it's not recommended to store HTML in resource files, it might be needed in some cases. In general, you shouldn't do it, because HTML should be part of the frontend templates like Razor, Blazor, etc.

Instead of using the ViewLocalizer in the Razor templates, you can also localize the entire view. To do so, you need to suffix the view name with the needed culture or put the view in a subfolder named after the culture. How localized views are handled needs to be configured when enabling Localization and Globalization.

Enabling Globalization in ASP.NET Core

As usual, you need to add the required services to enable localization:

builder.Services.AddLocalization(options =>
{
    options.ResourcesPath = "Resources";
});
builder.Services.AddControllersWithViews()
    .AddViewLocalization(LanguageViewLocationExpanderFormat.Suffix)
    .AddDataAnnotationsLocalization();

The first lines add general localization to be used in the C# code, like controllers, etc. Setting the ResourcesPath in the options is optional and just added to the snippet to show you that you can change the path where the resources are stored.

After that, the view localization, as well as the data annotations localization, is added to the service collection. The LanguageViewLocationExpanderFormat tells the view localizer that, in the case of localized views, the culture is added as a suffix to the file name instead of being part of the folder structure.

After adding the needed services to the service collection the required middleware needs to be added as well:

app.UseRequestLocalization(options =>
{
    var cultures = new List<CultureInfo> {
        new CultureInfo("en-US"),
        new CultureInfo("fr-FR"),
        new CultureInfo("de-DE")
    };
    options.DefaultRequestCulture = new RequestCulture("en-US");
    options.SupportedCultures = cultures;
    options.SupportedUICultures = cultures;
});

This middleware uses the pre-configured RequestCultureProviders to set the culture and the UI culture for the current request. With these cultures set, the localizers can select the right resource files or the right localized views.

That's it for enabling Localization and Globalization. With this information, you should already be able to create multi-language applications.

Culture vs. UI Culture

Setting the culture sets the application to a specific language and, optionally, a region. If you also set the UI culture, you can make a distinction between translating texts and formatting numbers, dates, and currencies: the UI culture is used to load resources from a corresponding resource file, while the culture controls how numbers, dates, and currencies are formatted and displayed.

In some cases, it makes sense to handle the two separately. If you only want to translate your page without changing number and date formats, you should only change the UI culture.
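Here is a minimal console sketch of the difference, using plain .NET APIs (the values in the comments assume the standard de-DE formatting rules):

using System;
using System.Globalization;

CultureInfo.CurrentCulture = new CultureInfo("de-DE");   // controls formatting
CultureInfo.CurrentUICulture = new CultureInfo("en-US"); // controls resource lookup

var price = 1234.56m;
var date = new DateTime(2022, 11, 8);

Console.WriteLine(price.ToString("C")); // "1.234,56 €" - German currency format
Console.WriteLine(date.ToString("d"));  // "08.11.2022" - German date format
// A localizer would still pick English resources, because CurrentUICulture is en-US.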

Localize ViewModels

While enabling view localization, we also enabled data annotations localization. This helps you to translate labels for form fields in case you use the @Html.LabelFor() method. You don't need to specify the ResourceType anymore. Since there is no longer an autogenerated C# file, there is also no ResourceType to specify. Inside the ViewModel, you just need to add the DisplayAttribute:

using System.ComponentModel.DataAnnotations;

public class EmployeeViewModel
{
    [Display(Name = "Number")]
    public int Number { get; set; }

    [Display(Name = "Firstname")]
    public string? Firstname { get; set; }

    [Display(Name = "Lastname")]
    public string? Lastname { get; set; }

    [Display(Name = "Department")]
    public string? Department { get; set; }

    [Display(Name = "Phone")]
    public string? Phone { get; set; }

    [Display(Name = "Email")]
    public string? Email { get; set; }

    [Display(Name = "Date of birth")]
    public DateTime DateOfBirth { get; set; }

    [Display(Name = "Size")]
    public decimal Size { get; set; }

    [Display(Name = "Salary")]
    public decimal Salary { get; set; }
}

The DataAnnotationsLocalizer will automatically use the string that is set in the Name property as a key to search for the relevant resource. This also works for the Description and the ShortName properties.

The resource file that is used to translate the display names has to be placed inside subfolders called ViewModels/ControllerName. Example: /Resources/ViewModels/Home/EmployeeModel.de-DE.resx

Creating a custom RequestCultureProvider

RequestCultureProviders

As mentioned, RequestCultureProviders retrieve the culture from somewhere and prepare it so the rest of the request can work with it. A RequestCultureProvider returns a ProviderCultureResult with the culture and the UI culture set. Both cultures can differ if needed; in most cases, they will be the same.

There are three preconfigured RequestCultureProviders:

  • QueryStringRequestCultureProvider This provider extracts the culture and UI culture from query string values if there are any. This means you can switch the language by just setting the query strings: ?culture=de-DE&ui-culture=de-DE

  • CookieRequestCultureProvider This provider extracts the culture information from a specific cookie. The cookie-name is .AspNetCore.Culture and the value of the cookie might look like this: c=es-MX|uic=es-MX (c is the culture and uic is the ui-culture)

  • AcceptLanguageHeaderRequestCultureProvider This provider extracts the language information from the Accept-Language header that gets sent by the browser. Every browser has preferred languages configured and sends those languages to the server. With this information, you can localize your application specifically to the user's language.

As you have seen in the previous section, not every language sent by the Accept-Language header, cookie, or query string gets accepted by your application. You need to define a list of supported cultures and a default request culture that is used if the language sent by the client isn't supported by your application.

The custom RequestCultureProvider

UPDATE:

Actually, there is an existing RequestCultureProvider in ASP.NET Core that can change the culture depending on route values. Since it isn't in the default list of registered RequestCultureProviders, I assumed there was none. That was wrong.

Since there is one already, just see the following section as an example about how to create a custom RequestCultureProvider.

What I was missing in the list of RequestCultureProviders is a RouteValueRequestCultureProvider: a provider that gets the culture information from a route value in case it is part of the route, like this: /en-US/Home/Index/

Let's assume we have a route configured like this:

app.MapControllerRoute(
    name: "default",
    pattern: "{culture=en-us}/{controller=Home}/{action=Index}/{id?}");

This adds the culture as part of the route.

Actually, I built a RouteValueRequestCultureProvider that handles the route values:

using Microsoft.AspNetCore.Localization;

namespace Internationalization.Providers;

/// <summary>
/// Determines the culture information for a request via values in the route values.
/// </summary>
public class RouteValueRequestCultureProvider : RequestCultureProvider
{
    /// <summary>
    /// The key that contains the culture name.
    /// Defaults to "culture".
    /// </summary>
    public string RouteValueKey { get; set; } = "culture";

    /// <summary>
    /// The key that contains the UI culture name. If not specified or no value is found,
    /// <see cref="RouteValueKey"/> will be used.
    /// Defaults to "ui-culture".
    /// </summary>
    public string UIRouteValueKey { get; set; } = "ui-culture";

    public override Task<ProviderCultureResult?> DetermineProviderCultureResult(HttpContext httpContext)
    {
        if (httpContext == null)
        {
            throw new ArgumentNullException(nameof(httpContext));
        }

        var request = httpContext.Request;
        if (!request.RouteValues.Any())
        {
            return NullProviderCultureResult;
        }

        string? queryCulture = null;
        string? queryUICulture = null;

        if (!string.IsNullOrWhiteSpace(RouteValueKey))
        {
            queryCulture = request.RouteValues[RouteValueKey]?.ToString();
        }

        if (!string.IsNullOrWhiteSpace(UIRouteValueKey))
        {
            queryUICulture = request.RouteValues[UIRouteValueKey]?.ToString();
        }

        if (queryCulture == null && queryUICulture == null)
        {
            // No values specified 
            return NullProviderCultureResult;
        }

        if (queryCulture != null && queryUICulture == null)
        {
            // Value for culture but not for UI culture so default to culture value for both
            queryUICulture = queryCulture;
        }
        else if (queryCulture == null && queryUICulture != null)
        {
            // Value for UI culture but not for culture so default to UI culture value for both
            queryCulture = queryUICulture;
        }

        var providerResultCulture = new ProviderCultureResult(queryCulture, queryUICulture);

        return Task.FromResult<ProviderCultureResult?>(providerResultCulture);
    }
}

This RouteValueRequestCultureProvider reads the culture and the ui-culture out of the route values and returns a ProviderCultureResult that will be used by the Localizers.
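To make the localization middleware actually pick up this provider, it has to be added to the RequestCultureProviders collection, ideally at the first position so that route values win over the query string and cookie providers. A small sketch, extending the middleware configuration shown earlier:

app.UseRequestLocalization(options =>
{
    var cultures = new List<CultureInfo> {
        new CultureInfo("en-US"),
        new CultureInfo("fr-FR"),
        new CultureInfo("de-DE")
    };
    options.DefaultRequestCulture = new RequestCulture("en-US");
    options.SupportedCultures = cultures;
    options.SupportedUICultures = cultures;

    // Register the custom provider first so it takes precedence.
    options.RequestCultureProviders.Insert(0, new RouteValueRequestCultureProvider());
});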

The route engine handles the generation of the route URLs for us if we use the MVC mechanisms to create links and tags. We'll now have the selected language and region everywhere in the routes.

To create a language changer, we just need to change the culture in the route values like this:

<ul class="navbar-nav flex-grow-1 justify-content-end">
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" 
           asp-controller="@Context.GetRouteValue("Controller")" 
           asp-action="@Context.GetRouteValue("Action")" 
           asp-route-culture="en-US">EN</a>
    </li>
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" 
           asp-controller="@Context.GetRouteValue("Controller")" 
           asp-action="@Context.GetRouteValue("Action")" 
           asp-route-culture="de-DE">DE</a>
    </li>
    <li class="nav-item">
        <a class="nav-link text-dark" asp-area="" 
           asp-controller="@Context.GetRouteValue("Controller")" 
           asp-action="@Context.GetRouteValue("Action")" 
           asp-route-culture="fr-FR">FR</a>
    </li>
</ul>

Changing the culture and the UI culture also changes the way dates, numbers, and currencies are displayed. This means the language changer also changes the region and will, for example, display the currency in Euro in case you switch to a region that uses the Euro as its local currency. You need to keep this in mind when working with financial data, because just changing the currency symbol doesn't make sense if you don't convert the actual numbers to the local currency as well. If you don't want to change the currency, you should hard-code the way currency is formatted and displayed. Fixing the culture to one region while only changing the UI culture would, on the other hand, set numbers and dates to a fixed format, which is not what we want either.

This is the start page of the sample project in French.

French localized UI

(I apologize for any wrong translations. Unfortunately, it is more than 25 years since I learned French in school.)

Sample application and Conclusion

This is actually working, and I created a small application to demonstrate it. The sample includes all the topics of this post. You will find the sample project on GitHub.

Microsoft reduced the complexity a lot. On the other hand, if you were used to working with more complex resource handling in the past, you will stumble upon small things you won't expect, like I did. However, adding Globalization and Localization in .NET 7 is easy, and I like the way it works.

Holger Schwichtenberg: Renamed again: 18 months is now "Standard Support" for .NET

Microsoft is dropping the "Short-term Support" designation it had introduced for .NET in the meantime and now speaks of "Standard Support".

Code-Inside Blog: Azure DevOps & Azure Service Connection

Today I needed to set up a new release pipeline on our Azure DevOps Server installation to deploy some stuff automatically to Azure. The UI (at least on Azure DevOps Server 2020 (!)) is not really clear about how to connect those two worlds, and that’s why I’m writing this short blog post.

First - under project settings - add a new service connection and use the Azure Resource Manager service type. Now you should see something like this:

(Screenshot: the new Azure Resource Manager service connection dialog)

Be aware: You will need to register an app inside your Azure AD and need permissions to set this up. If you are not able to follow these instructions, you might need to talk to your Azure subscription owner.

Subscription id:

Copy the id of your subscription here. It can be found in the subscription details:

(Screenshot: the Azure subscription details showing the subscription id)

Keep this tab open, because we need it later!

Service principal id/key & tenant id:

Now, this wording about “Service principal” is technically correct, but really confusing if you are not familiar with Azure AD. A “Service principal” is like a “service user”/“app” that you need to register in order to use it. The easiest route is to create an app via the Bash Azure CLI:

az ad sp create-for-rbac --name DevOpsPipeline

If this command succeeds you should see something like this:

{
  "appId": "[...GUID..]",
  "displayName": "DevOpsPipeline",
  "password": "[...PASSWORD...]",
  "tenant": "[...Tenant GUID...]"
}

This creates a “Service principal” with a random password inside your Azure AD. The next step is to give this “Service principal” a role on your subscription, because it currently has no permissions to do anything (e.g. deploy a service etc.).

Go to the subscription details page and then to Access control (IAM). There you can add your “DevOpsPipeline”-App as “Contributor” (Be aware that this is a “powerful role”!).

After that use the "appId": "[...GUID..]" from the command as Service Principal Id. Use the "password": "[...PASSWORD...]" as Service principal key and the "tenant": "[...Tenant GUID...]" for the tenant id.

Now you should be able to “Verify” this connection and it should work.

Links: This blogpost helped me a lot. Here you can find the official documentation.

Hope this helps!

Jürgen Gutsch: ASP.NET Core 7 updates

Release candidate 1 of ASP.NET Core 7 has been out for around two weeks, and the release date isn't far away. The beginning of November is usually the time when Microsoft releases the new version of .NET. Please find the announcement post here: ASP.NET Core updates in .NET 7 Release Candidate 1. I will not repeat this post but pick some personal highlights to write about.

ASP.NET Core Roadmap for .NET 7

First of all, a look at the ASP.NET Core roadmap for .NET 7 shows us that there are only a few issues open and planned for the upcoming release. That means the release is complete and almost only bugfixes will be pushed to it. Many other open issues are already struck through and probably assigned to a later release. I guess we'll have a published roadmap for ASP.NET Core on .NET 8 soon, at the latest at the beginning of November.

What are the updates of this RC 1?

A lot of Blazor

This release, too, is full of Blazor improvements. Those who work a lot with Blazor will be happy about the improved JavaScript interop, debugging improvements, handling of location-changing events, and dynamic authentication requests coming with this release.

However, there are some quite interesting improvements within this release that might be great for almost every ASP.NET Core developer:

Faster HTTP/2 uploads and HTTP/3 performance improvements

The team increased the default upload connection window size of HTTP/2, resulting in much faster upload times. Stream handling is always tricky and needs a lot of fine-tuning to find the right balance. Improving the upload speed by more than five times is awesome and really helpful when uploading bigger files. HTTP/3 performance was also improved by reducing allocations. Feature parity of HTTP/3 with HTTP/1 and HTTP/2, such as Server Name Indication (SNI) when configuring connection certificates, is just as useful.
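If you ever need to tune these windows yourself, Kestrel exposes them via its HTTP/2 limits. A hedged sketch (the concrete byte values are placeholders, not recommendations):

var builder = WebApplication.CreateBuilder(args);

builder.WebHost.ConfigureKestrel(kestrel =>
{
    // Connection- and stream-level flow-control windows used for HTTP/2 uploads.
    // The defaults were raised in .NET 7; only override them if measurements justify it.
    kestrel.Limits.Http2.InitialConnectionWindowSize = 1024 * 1024; // 1 MB
    kestrel.Limits.Http2.InitialStreamWindowSize = 768 * 1024;      // 768 KB
});

var app = builder.Build();
app.Run();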

Rate limiting middleware improvements

The rate-limiting middleware got some small improvements to make it easier and more flexible to configure. You can now add attributes to controller actions to enable or disable rate limiting on specific endpoints. To do the same on Minimal API endpoints and endpoint groups you can use methods to enable or disable rate limiting. This way you can enable rate-limiting for an endpoint group, but disable it for a specific one inside this group.

You can specify the rate-limiting policy on the attributes as well as on the endpoint and endpoint-group methods. Unlike the attributes, which only support named policies, the Minimal API methods can also take an instance of a policy.
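A hedged sketch of how this could look; the policy name "api" and the endpoints are made up for illustration:

using System.Threading.RateLimiting;
using Microsoft.AspNetCore.RateLimiting;

var builder = WebApplication.CreateBuilder(args);

// Register a named fixed-window policy.
builder.Services.AddRateLimiter(limiter =>
    limiter.AddFixedWindowLimiter("api", options =>
    {
        options.PermitLimit = 10;
        options.Window = TimeSpan.FromSeconds(10);
    }));

var app = builder.Build();
app.UseRateLimiter();

// Enable the policy for a whole endpoint group ...
var api = app.MapGroup("/api").RequireRateLimiting("api");
api.MapGet("/orders", () => "rate limited");

// ... but disable it for a single endpoint inside that group.
api.MapGet("/health", () => "not rate limited").DisableRateLimiting();

app.Run();

// On controllers and actions the same can be done with attributes:
// [EnableRateLimiting("api")] and [DisableRateLimiting]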

Experimental stuff added to this release

WebTransport is a new draft specification for HTTP/3 that works similarly to WebSockets but supports multiple streams per connection. Support for WebTransport has now been added as an experimental feature in RC1.

One of the new features in .NET 7 is gRPC JSON transcoding, which turns gRPC APIs into RESTful APIs. Any RESTful API should have OpenAPI documentation, and so should gRPC JSON transcoding. This release now contains experimental support for adding Swashbuckle Swagger to gRPC to render OpenAPI documentation.
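A rough sketch of what the registration side of this could look like; the package and method names reflect my understanding of the preview bits and GreeterService is just a placeholder, so treat the details as assumptions:

var builder = WebApplication.CreateBuilder(args);

// gRPC JSON transcoding turns annotated gRPC services into RESTful HTTP endpoints.
builder.Services.AddGrpc().AddJsonTranscoding();
// Experimental Swashbuckle integration that documents the transcoded endpoints.
builder.Services.AddGrpcSwagger();
builder.Services.AddSwaggerGen();

var app = builder.Build();
app.UseSwagger();
app.UseSwaggerUI();
app.MapGrpcService<GreeterService>(); // GreeterService: a transcoded gRPC service of your own
app.Run();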

Conclusion

ASP.NET Core on .NET 7 seems to be complete now, and I'm really looking forward to the .NET Conf 2022 at the beginning of November, which will be the launch event for .NET 7.

And exactly this reminds me to start thinking about the next edition of my book "Customizing ASP.NET Core", which needs to be updated to .NET 8 and extended by probably three more chapters next year.

Stefan Henneken: IEC 61131-3: SOLID – The Interface Segregation Principle

The basic idea of the Interface Segregation Principle (ISP) is very similar to that of the Single Responsibility Principle (SRP): modules with too many responsibilities can negatively affect the maintenance and maintainability of a software system. The Interface Segregation Principle (ISP) puts the focus on the interface of the module: a module should only implement the interfaces that are needed for its task. The following shows how this design principle can be implemented.

Starting situation

In the last post (IEC 61131-3: SOLID – The Liskov Substitution Principle), the example was extended by another lamp type (FB_LampSetDirectDALI). The special thing about this lamp type is the scaling of the output value. While the other lamp types output 0-100 %, the new lamp type outputs a value from 0 to 254.

Like all other lamp types, the new lamp type (the DALI lamp) also has an adapter (FB_LampSetDirectDALIAdapter). The adapters were introduced when implementing the Single Responsibility Principle (SRP) and ensure that the function blocks of the individual lamp types are only responsible for a single concern (see IEC 61131-3: SOLID – The Single Responsibility Principle).

The sample program was last adapted so that the output value of the new lamp type (FB_LampSetDirectDALI) is scaled from 0-254 to 0-100 % inside the adapter. As a result, the DALI lamp behaves exactly like the other lamp types without violating the Liskov Substitution Principle (LSP).

This sample program will serve as the starting point for explaining the Interface Segregation Principle (ISP).

Extension of the implementation

This time, too, the application is to be extended. However, instead of defining a new lamp type, an existing lamp type is extended by a new feature. The DALI lamp should be able to count its operating hours. For this purpose, the function block FB_LampSetDirectDALI is extended by the property nOperatingTime.

PROPERTY PUBLIC nOperatingTime : DINT

The setter can be used to set the operating hours counter to any value, while the getter returns the current state of the operating hours counter.

Since FB_Controller represents the individual lamp types, this function block is also extended by nOperatingTime.

The operating hours are recorded in the function block FB_LampSetDirectDALI. If the output value is > 0, the operating hours counter is incremented by 1 every second:

IF (nLightLevel > 0) THEN
  tonDelay(IN := TRUE, PT := T#1S);
  IF (tonDelay.Q) THEN
    tonDelay(IN := FALSE);
    _nOperatingTime := _nOperatingTime + 1;
  END_IF
ELSE
  tonDelay(IN := FALSE);
END_IF

The variable _nOperatingTime is the backing variable for the new property nOperatingTime and is declared in the function block.

What options are there to transfer the value of nOperatingTime from FB_LampSetDirectDALI to the property nOperatingTime of FB_Controller? Again, there are several approaches to integrating the required extension into the given software structure.

Approach 1: Extending I_Lamp

The property for the new feature is integrated into the interface I_Lamp. As a result, the abstract function block FB_Lamp also receives the property nOperatingTime. Since all adapters inherit from FB_Lamp, the adapters of all lamp types receive this property, regardless of whether the lamp type supports an operating hours counter or not.

The getter and the setter of nOperatingTime in FB_Controller can thus directly access nOperatingTime of the individual lamp type adapters. The getter of FB_Lamp (the abstract function block from which all adapters inherit) returns the value -1. This makes it possible to detect the absence of the operating hours counter.

IF (fbController.nOperatingTime >= 0) THEN
  nOperatingTime := fbController.nOperatingTime;
ELSE
  // service not supported
END_IF

Since FB_LampSetDirectDALI supports the operating hours counter, the adapter (FB_LampSetDirectDALIAdapter) overrides the property nOperatingTime. The getter and the setter of the adapter access nOperatingTime of FB_LampSetDirectDALI. The value of the operating hours counter is thus passed on up to FB_Controller.

(abstract elements are displayed in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub

This approach implements the feature as desired. None of the SOLID principles shown so far are violated either.

However, the central interface I_Lamp is extended just to add one more feature to a single lamp type. All other lamp type adapters, including those that do not support the new feature, also receive the property nOperatingTime via the abstract base FB FB_Lamp.

With every feature that is added this way, the interface I_Lamp grows and so does the abstract base FB FB_Lamp.

Approach 2: An additional interface

In this approach, the interface I_Lamp is not extended; instead, a new interface (I_OperatingTime) is added for the desired functionality. I_OperatingTime contains only the property that is necessary to provide the operating hours counter:

PROPERTY PUBLIC nOperatingTime : DINT

This interface is implemented by the adapter FB_LampSetDirectDALIAdapter.

FUNCTION_BLOCK PUBLIC FB_LampSetDirectDALIAdapter EXTENDS FB_Lamp IMPLEMENTS I_OperatingTime

Thus, FB_LampSetDirectDALIAdapter receives the property nOperatingTime not via FB_Lamp or I_Lamp, but via the new interface I_OperatingTime.

When FB_Controller accesses the active lamp type in the getter of nOperatingTime, it first checks whether the selected lamp type implements the interface I_OperatingTime. If this is the case, the property is accessed via I_OperatingTime. If the lamp type does not implement the interface, -1 is returned.

VAR
  ipOperatingTime  : I_OperatingTime;
END_VAR
IF (__ISVALIDREF(_refActiveLamp)) THEN
  IF (__QUERYINTERFACE(_refActiveLamp, ipOperatingTime)) THEN
    nOperatingTime := ipOperatingTime.nOperatingTime;
  ELSE
    nOperatingTime := -1; // service not supported
  END_IF
END_IF

The setter of nOperatingTime is structured similarly. After successfully checking whether I_OperatingTime is implemented by the active lamp, the property is accessed via the interface.

VAR
  ipOperatingTime  : I_OperatingTime;
END_VAR
IF (__ISVALIDREF(_refActiveLamp)) THEN
  IF (__QUERYINTERFACE(_refActiveLamp, ipOperatingTime)) THEN
    ipOperatingTime.nOperatingTime := nOperatingTime;
  END_IF
END_IF
(abstract elements are displayed in italics)

Sample 2 (TwinCAT 3.1.4024) on GitHub

Analysis of the optimization

Using a separate interface for the additional feature corresponds to the 'optionality' approach from IEC 61131-3: SOLID – The Liskov Substitution Principle. In the example above, it can be checked at runtime (with __QUERYINTERFACE()) whether a particular interface is implemented and thus whether the respective feature is supported. Additional properties, such as bIsDALIDevice from the 'optionality' example, are not necessary with this approach.

If a separate interface is provided per feature or functionality, other lamp types can implement it as well in order to offer the desired feature. If FB_LampSetDirect is also to get an operating hours counter, FB_LampSetDirect must be extended by the property nOperatingTime and FB_LampSetDirectAdapter must implement the interface I_OperatingTime. All other function blocks, including FB_Controller, remain unchanged.

If the way the operating hours counter works changes and I_OperatingTime receives additional methods, only the function blocks that actually support this feature need to be adapted.

Examples of the Interface Segregation Principle (ISP) can also be found in .NET. For instance, .NET has the interface IList. This interface contains methods and properties for creating, modifying and reading collections. Depending on the use case, however, it may be sufficient for the caller to only read a collection. Passing a collection as IList would in this case also offer methods to modify the collection. For such use cases, there is the interface IReadOnlyList. With this interface, a collection can only be read; accidentally modifying the data is not possible.
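A small C# sketch of this idea (the types and method names are made up for illustration):

using System;
using System.Collections.Generic;

public static class ReportPrinter
{
    // The method only needs read access, so it asks for the narrow interface IReadOnlyList<T>.
    // Taking IList<T> instead would also expose Add/Remove, which this code never uses.
    public static void Print(IReadOnlyList<string> lines)
    {
        foreach (var line in lines)
        {
            Console.WriteLine(line);
        }
    }

    public static void Main()
    {
        var lines = new List<string> { "line 1", "line 2" };
        Print(lines); // List<T> implements both IList<T> and IReadOnlyList<T>.
    }
}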

Splitting concerns into separate interfaces thus not only increases the maintainability but also the safety of a software system.

The definition of the Interface Segregation Principle

This brings us to the definition of the Interface Segregation Principle (ISP):

A module that uses an interface should only be presented with the methods it really needs.

Or, put slightly differently:

Clients should not be forced to depend on methods they do not need.

A common argument against the Interface Segregation Principle (ISP) is the increased number of interfaces. A software design can be adjusted at any time during its development cycles. So if you have the feeling that an interface contains too many functionalities, check whether it can be split. Of course, over-engineering should always be avoided; a certain amount of experience is helpful here.

Abstract function blocks also represent a kind of interface (see FB_Lamp). An abstract function block can contain basic functions that the user only supplements with the necessary details. It is not necessary to implement all methods or properties yourself. But here, too, it is important not to burden the user with concerns that are not necessary for his tasks. The set of abstract methods and properties should be as small as possible.

Observing the Interface Segregation Principle (ISP) keeps the interfaces between function blocks as small as possible, which reduces the coupling between the individual function blocks.

Summary

If a software system has to cover additional features, reflect on the new requirements and do not rashly extend existing interfaces. Check whether separate interfaces are not the better choice. As a reward, you get a software system that is easier to maintain, better to test and simpler to extend.

In the last, still outstanding part, the Open Closed Principle (OCP) will be explained in more detail.

Stefan Henneken: IEC 61131-3: SOLID – The Liskov Substitution Principle

„The Liskov Substitution Principle (LSP) requires that derived function blocks (FBs) are always compatible to their base FB. Derived FBs must behave like their respective base FB. A derived FB may extend the base FB, but not restrict it.” This is the core statement of the Liskov Substitution Principle (LSP), which Barbara Liskov formulated already in the late 1980s. Although the Liskov Substitution Principle (LSP) is one of the simpler SOLID principles, its violation is very common. The following example shows why the Liskov Substitution Principle (LSP) is important.

Starting situation

I use once again the example, which was already developed and optimized in the two previous posts. The core of the example are three lamp types, which are mapped by the function blocks FB_LampOnOff, FB_LampSetDirect and FB_LampUpDown. The interface I_Lamp and the abstract function block FB_Lamp secure a clear decoupling between the respective lamp types and the higher-level controller FB_Controller.

FB_Controller no longer accesses specific instances, but only a reference of the abstract function block FB_Lamp. The IEC 61131-3: SOLID – The Dependency Inversion Principle is used to break the fixed coupling.

To realize the required functionality, each lamp type provides its own methods. For this reason, each lamp type also has a corresponding adapter function block (FB_LampOnOffAdapter, FB_LampSetDirectAdapter and FB_LampUpDownAdapter), which is responsible for mapping between the abstract lamp (FB_Lamp) and the concrete lamp types (FB_LampOnOff, FB_LampSetDirect and FB_LampUpDown). This optimization is supported by the IEC 61131-3: SOLID – The Single Responsibility Principle.

Extension of the implementation

The three required lamp types can be mapped well by the existing software design. Nevertheless, it can happen that extensions, which seem simple at first sight, lead to difficulties later. The new lamp type FB_LampSetDirectDALI will serve as an example here.

DALI stands for Digital Addressable Lighting Interface and is a protocol for controlling lighting devices. Basically, the new function block behaves like FB_LampSetDirect, but with DALI the output value is not given in 0-100 % but in 0-254.

Optimization and analysis of the extensions

Which approaches are available to implement this extension? The different approaches will also be analyzed in more detail.

Approach 1: Quick & Dirty

High time pressure can tempt to realize the Quick & Dirty implementation. Since FB_LampSetDirect behaves similarly to the new DALI lamp type, FB_LampSetDirectDALI inherits from FB_LampSetDirect. To enable the value range of 0-254, the SetLightLevel() method of FB_LampSetDirectDALI is overwritten.

METHOD PUBLIC SetLightLevel
VAR_INPUT
  nNewLightLevel : BYTE(0..254);
END_VAR
nLightLevel := nNewLightLevel;

The new adapter function block (FB_LampSetDirectDALIAdapter) is also adapted so that the methods regard the value range 0-254.

As an example, the methods DimUp() and On() are shown here:

METHOD PUBLIC DimUp
IF (fbLampSetDirectDALI.nLightLevel <= 249) THEN
  fbLampSetDirectDALI.SetLightLevel(fbLampSetDirectDALI.nLightLevel + 5);
END_IF
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(fbLampSetDirectDALI.nLightLevel);
END_IF
METHOD PUBLIC On
fbLampSetDirectDALI.SetLightLevel(254);
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(fbLampSetDirectDALI.nLightLevel);
END_IF

The simplified UML diagram shows the integration of the function blocks for the DALI lamp into the existing software design:

(abstract elements are displayed in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub

This approach implements the requirements quickly and easily through a pragmatic strategy. But it also adds some peculiarities that complicate the use of the blocks in an application.

For example, how should a user interface behave when it connects to an instance of FB_Controller and FB_AnalogValue outputs a value of 100? Does 100 mean that the current lamp is at 100 % or does the new DALI lamp output a value of 100, which would be well below 100 %?

The user of FB_Controller must always know the active lamp type in order to interpret the current output value correctly. FB_LampSetDirectDALI inherits from FB_LampSetDirect, but changes its behavior. In this example, the behavior is changed by overwriting the SetLightLevel() method. The derived FB (FB_LampSetDirectDALI) behaves differently to the base FB (FB_LampSetDirect). FB_LampSetDirect can no longer be replaced (substituted) by FB_LampSetDirectDALI. The Liskov Substitution Principle (LSP) is violated.

Approach 2: Optionality

In this approach, each lamp type contains a property that returns information about the exact function of the function block.

In .NET, for example, this approach is used in the abstract class System.IO.Stream. The Stream class serves as the base class for specialized streams (e.g., FileStream and NetworkStream) and specifies the most important methods and properties. This includes the methods Write(), Read() and Seek(). Since not every stream can provide all functions, the properties CanRead, CanWrite and CanSeek provide information about whether the corresponding method is supported by the respective stream. For example, NetworkStream can check at runtime whether writing to the stream is possible or whether it is a read-only stream.
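A short C# illustration of that pattern with the standard Stream API (the helper class is made up for demonstration):

using System;
using System.IO;

public static class StreamInfo
{
    // The capability properties tell the caller at runtime which operations a
    // concrete stream actually supports, without breaking substitutability.
    public static void Describe(Stream stream)
    {
        Console.WriteLine($"CanRead:  {stream.CanRead}");
        Console.WriteLine($"CanWrite: {stream.CanWrite}");
        Console.WriteLine($"CanSeek:  {stream.CanSeek}");

        if (stream.CanSeek)
        {
            stream.Seek(0, SeekOrigin.Begin); // only call Seek() if the stream supports it
        }
    }

    public static void Main()
    {
        using var memory = new MemoryStream(); // readable, writable and seekable
        Describe(memory);
    }
}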

In our example, I_Lamp is extended by the property bIsDALIDevice.

This means that FB_Lamp and therefore every adapter function block also receives this property. Since the functionality of bIsDALIDevice is the same in all adapter function blocks, bIsDALIDevice is not declared as abstract in FB_Lamp. This means that it is not necessary for all adapter function blocks to implement this property themselves. The functionality of bIsDALIDevice is inherited by FB_Lamp to all adapter function blocks.

For FB_LampSetDirectDALIAdapter, the backing variable of the property bIsDALIDevice is set to TRUE in the method FB_init().

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains  : BOOL;
  bInCopyCode   : BOOL;
END_VAR
SUPER^._bIsDALIDevice := TRUE;

For all other adapter function blocks, _bIsDALIDevice retains its initialization value (FALSE). The use of the FB_init() method is not necessary for these adapter function blocks.

The user of FB_Controller (MAIN block) can now query at program runtime whether the current lamp is a DALI lamp or not. If this is the case, the output value is scaled accordingly to 0-100 %.

IF (__ISVALIDREF(fbController.refActiveLamp) AND_THEN fbController.refActiveLamp.bIsDALIDevice) THEN
  nLightLevel := TO_BYTE(fbController.fbActualValue.nValue * 100.0 / 254.0);
ELSE
  nLightLevel := fbController.fbActualValue.nValue;
END_IF

Note: It is important to use the AND_THEN operator instead of AND. This means that the expression to the right of AND_THEN is only evaluated if the first operand (to the left of AND_THEN) is TRUE. This is important here because otherwise the expression fbController.refActiveLamp.bIsDALIDevice would terminate the execution of the program in case of an invalid reference to the active lamp (refActiveLamp).

The UML diagram shows how FB_Lamp receives the property bIsDALIDevice via the interface I_Lamp and is thus inherited by all adapter function blocks:

(abstract elements are displayed in italics)

Sample 2 (TwinCAT 3.1.4024) on GitHub

This approach still violates the Liskov Substitution Principle (LSP). FB_LampSetDirectDALI still behaves differently from FB_LampSetDirect. The user has to take this difference into account (querying bIsDALIDevice) and correct it (scaling to 0-100 %). This is easy to overlook or to implement incorrectly.

Approach 3: Harmonization

In order not to violate the Liskov Substitution Principle (LSP) any further, the inheritance between FB_LampSetDirect and FB_LampSetDirectDALI is removed. Even if both function blocks appear very similar at first glance, inheritance should be avoided at this point.

The adapter function blocks ensure that all lamp types can be controlled using the same methods. However, there are still differences in the representation of the output value.

In FB_Controller, the output value of the active lamp is represented by an instance of FB_AnalogValue. A new output value is transmitted by the Update() method. To ensure that the output value is displayed uniformly, it is scaled to 0-100 % before the Update() method is called. The necessary adjustments are made exclusively in the methods DimDown(), DimUp(), Off() and On() of FB_LampSetDirectDALIAdapter.

The On() method is shown here as an example:

METHOD PUBLIC On
fbLampSetDirectDALI.SetLightLevel(254);
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(TO_BYTE(fbLampSetDirectDALI.nLightLevel * 100.0 / 254.0));
END_IF

The adapter function block contains all the necessary instructions, which make the DALI lamp behave to the outside as expected. FB_LampSetDirectDALI remains unchanged with this solution approach.

(abstract elements are displayed in italics)

Sample 3 (TwinCAT 3.1.4024) on GitHub

Optimization analysis

Through various techniques, it is possible for us to implement the desired extension without violating the Liskov Substitution Principle (LSP). Inheritance is a precondition to violate the LSP. If the LSP is violated, this may be an indication of a bad inheritance hierarchy within the software design.

Why is it important to follow the Liskov Substitution Principle (LSP)? Function blocks can also be passed as parameters. If a POU would expect a parameter of the type FB_LampSetDirect, then FB_LampSetDirectDALI could also be passed when using inheritance. However, the operation of the SetLightLevel() method is different for the two function blocks. Such differences can lead to undesirable behavior within a system.

The definition of the Liskov Substitution Principle

Let q(x) be a property provable about objects x of type T. Then q(y) should be true for objects y of type S where S is a subtype of T.

This is the more formal definition of the Liskov Substitution Principle (LSP) by Barbara Liskov. As mentioned above, this principle was already defined at the end of the 1980s. The complete elaboration was published under the title Data Abstraction and Hierarchy.

Barbara Liskov was one of the first women to earn a doctorate in computer science in 1968. In 2008, she was also one of the first women to receive the Turing Award. Early on, she became involved with object-oriented programming and thus also with the inheritance of classes (function blocks).

Inheritance places two function blocks in a specific relationship to each other. Inheritance here describes an is-a relationship. If FB_LampSetDirectDALI inherits from FB_LampSetDirect, the DALI lamp is a (normal) lamp extended by special (additional) functions. Wherever FB_LampSetDirect is used, FB_LampSetDirectDALI could also be used. FB_LampSetDirect can be substituted by FB_LampSetDirectDALI. If this is not ensured, the inheritance should be questioned at this point.

Robert C. Martin has included this principle in the SOLID principles. In the book (Amazon advertising link *) Clean Architecture: A Craftsman’s Guide to Software Structure and Design, this principle is explained further and extended to the field of software architecture.

Summary

By extending the above example, you have learned about the Liskov Substitution Principle (LSP). Complex inheritance hierarchies in particular are prone to violating this principle. Although the formal definition of the Liskov Substitution Principle (LSP) sounds complicated, the key message of this principle is simple to understand.

In the next post, our example will be extended again. The Interface Segregation Principle (ISP) will play a central role in it.

Norbert Eder: Signing Git commits – No secret key

By signing a commit, you personally sign off on it and confirm that the submitted code comes from you. Only someone who has the private key can do that, and usually that is exclusively you. Someone could still create and push a commit using your name and email address and pretend to be you (given access to the repository), but they could not sign it with your signature.

After configuring this on Windows, however, the following error frequently occurs:

gpg: signing failed: No secret key

In this case you are probably just missing one Git setting: Git cannot find the GPG application.

git config --global gpg.program [GPG-Pfad]

Simply replace [GPG-Pfad] with the direct path to gpg.exe and signing works immediately.

How to sign Git commits?

In case the general question comes up of how to sign Git commits at all, here are a few short words about it. If you use GPG, for example, you can create a new key pair (if you don't already have one). With

gpg --list-keys

you can list your key pairs. Copy the key id of the desired key pair and store it in the Git configuration as the default signing key. To do this, simply replace [KEYID] below with the key id.

git config --global user.signingkey [KEYID]

Now, on Windows, also set the path to gpg.exe as shown above, and commits can be signed with the additional -S option of the git commit command. Make sure to pay attention to capitalization here. Example:

git commit -S -am "Test commit"

This is how easily you add your personal signature to a commit. I personally recommend this approach.

The post Git Commit signieren – No secret key first appeared on Norbert Eder.

Norbert Eder: Keeping dependencies and vulnerabilities under control

The bigger a development project is, the more dependencies it has. Keeping track of all dependencies is a challenge in itself, not to mention the all-too-common practice of adding them indiscriminately without checking licenses, vulnerabilities, etc. beforehand. But how do you get all of these topics under control?

Checking for vulnerabilities

Most package managers now offer corresponding features. In the .NET world, for example, dotnet list package --vulnerable lists all vulnerable packages. With npm audit, a similar list can be produced for NPM.

What these approaches have in common, however, is that they reflect the status at the time of the call. No more and no less. And you may want a bit more:

  • Tracking of dependencies across versions of your own software
  • An overview of all licenses of the dependencies
  • A list and risk assessment of all vulnerabilities per version of your own software
  • The ability to audit vulnerabilities and document decisions
  • Automatic updates/evaluation through integration into the build system

Dependency-Track by OWASP

Many people may already know the Open Web Application Security Project (OWASP for short), as the foundation regularly publishes the Top 10 Web Application Security Risks. In web development, these should definitely be kept in mind alongside the Secure Coding Practices [PDF] and the Web Security Testing Guide.

With Dependency-Track, OWASP provides a tool into which lists of dependencies can be imported via a CycloneDX BOM (Bill of Materials) and checked against vulnerability databases. VulnDB, GitHub Advisories and numerous other sources are available for this.

Numerous tools for different development platforms are available for generating the required BOM, so integrating it into the build environment is straightforward.

Installing Dependency-Track is very simple, as it is shipped, among other options, as a Docker container.

Below are some screenshots from the vendor.

Dependency-Track: overview of the components
Dependency-Track: audit of the detected vulnerabilities

In addition, a clear dashboard provides an overview of the entire software infrastructure and an assessment of the current risk.

Dependency-Track: Dashboard

Dependency-Track is available on GitHub and enriches the development environment free of charge.

Active dependency and vulnerability management is necessary

Merely using this tool does not improve the situation. Rather, there must be a clear owner who, on the one hand, manages the dependencies (limiting sprawl, keeping an overview, checking licenses) and, on the other hand, audits the identified risks and initiates their remediation (updating the dependency, replacing it, etc.).

The more centrally this topic is treated in the development process, the better and faster you can react to vulnerabilities.

The post Abhängigkeiten und Vulnerabilities im Griff first appeared on Norbert Eder.

Holger Schwichtenberg: Is a big naming chaos looming for the OR mapper Entity Framework Core?

With or without Core in the name? That is the question that arises when looking at the history of Entity Framework (Core) 7.

Code-Inside Blog: 'error MSB8011: Failed to register output.' & UTF8-BOM files

Be aware: I’m not a C++ developer and this might be an “obvious” problem, but it took me a while to resolve this issue.

In our product we have very few C++ projects. We use these projects for very special Microsoft Office COM stuff and because of COM we need to register some components during the build. Everything worked as expected, but we renamed a few files and our build broke with:

C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(2302,5): warning MSB3075: The command "regsvr32 /s "C:/BuildAgentV3_1/_work/67/s\_Artifacts\_ReleaseParts\XXX.Client.Addin.x64-Shims\Common\XXX.Common.Shim.dll"" exited with code 5. Please verify that you have sufficient rights to run this command. [C:\BuildAgentV3_1\_work\67\s\XXX.Common.Shim\XXX.Common.Shim.vcxproj]
C:\Program Files\Microsoft Visual Studio\2022\Enterprise\MSBuild\Microsoft\VC\v170\Microsoft.CppCommon.targets(2314,5): error MSB8011: Failed to register output. Please try enabling Per-user Redirection or register the component from a command prompt with elevated permissions. [C:\BuildAgentV3_1\_work\67\s\XXX.Common.Shim\XXX.Common.Shim.vcxproj]

(xxx = redacted)

The crazy part was: Using an older version of our project just worked as expected, but all changes were “fine” from my point of view.

After many, many attempts I remembered that our diff tool doesn’t show us everything - so I checked the file encodings: UTF8-BOM

Somehow, if you have a UTF8-BOM encoded file that your C++ project uses to register COM stuff, it will fail. I changed the encoding to UTF8 and everything worked as expected.

What a day… lessons learned: Be aware of your file encodings.

Hope this helps!

Code-Inside Blog: Which .NET Framework Version is installed on my machine?

If you need to know which .NET Framework Version (the “legacy” .NET Framework) is installed on your machine try this handy oneliner:

Get-ItemProperty "HKLM:SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full"

Result:

CBS           : 1
Install       : 1
InstallPath   : C:\Windows\Microsoft.NET\Framework64\v4.0.30319\
Release       : 528372
Servicing     : 0
TargetVersion : 4.0.0
Version       : 4.8.04084
PSPath        : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework
                Setup\NDP\v4\Full
PSParentPath  : Microsoft.PowerShell.Core\Registry::HKEY_LOCAL_MACHINE\SOFTWARE\Microsoft\NET Framework Setup\NDP\v4
PSChildName   : Full
PSDrive       : HKLM
PSProvider    : Microsoft.PowerShell.Core\Registry

The version should give you more than enough information.
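If you prefer to read that information from code, here is a hedged C# sketch that reads the documented Release DWORD from the same registry key (the 528040 threshold for .NET Framework 4.8 is taken from Microsoft's version table; treat the mapping as an assumption):

using System;
using Microsoft.Win32; // built into .NET Framework; on .NET (Core) add the Microsoft.Win32.Registry package

public static class NetFrameworkVersion
{
    public static void Main()
    {
        const string subkey = @"SOFTWARE\Microsoft\NET Framework Setup\NDP\v4\Full";
        using var key = Registry.LocalMachine.OpenSubKey(subkey);

        if (key?.GetValue("Release") is int release)
        {
            // 528040 or higher indicates .NET Framework 4.8 or later (per Microsoft's docs).
            Console.WriteLine($"Release: {release}, at least 4.8: {release >= 528040}");
        }
        else
        {
            Console.WriteLine(".NET Framework 4.5 or later is not installed.");
        }
    }
}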

Hope this helps!

Christian Dennig [MS]: ASP.NET Custom Metrics with OpenTelemetry Collector & Prometheus/Grafana

Every now and then, I am asked which tracing, logging, or monitoring solution you should use in a modern application, as the number of options seems to grow every month, or at least it can feel that way. To stay as flexible as possible and rely on open standards, a closer look at OpenTelemetry is recommended. It is becoming more and more popular because it offers a vendor-agnostic solution for working with telemetry data in your services and sending it to the backend(s) of your choice (Prometheus, Jaeger, etc.). Let's have a look at how you can use OpenTelemetry custom metrics in an ASP.NET service in combination with probably the most popular monitoring stack in the cloud-native space: Prometheus/Grafana.

TL;DR

You can find the demo project on GitHub. It uses a local Kubernetes cluster (kind) to setup the environment and deploys a demo application that generates some sample metrics. Those metrics are sent to an OTEL collector which serves as a Prometheus metrics endpoint. In the end, the metrics are scraped by Prometheus and displayed in a Grafana dashboard/chart.

Demo Setup

OpenTelemetry – What is it and why should you care?

OpenTelemetry

OpenTelemetry (OTEL) is an open-source CNCF project that aims to provide a vendor-agnostic solution for generating, collecting and handling telemetry data of your infrastructure and services. It is able to receive, process, and export traces, logs, and metrics to different backends like Prometheus, Jaeger, or commercial SaaS offerings without the need for your application to have a dependency on those solutions. While OTEL itself doesn't provide a backend or even analytics capabilities, it serves as the "central monitoring component" and knows how to send the received data to different backends by using so-called "exporters".

So why should you even care? In today’s world of distributed systems and microservices architectures where developers can release software and services faster and more independently, observability becomes one of the most important features in your environment. Visibility into systems is crucial for the success of your application as it helps you in scaling components, finding bugs and misconfigurations etc.

If you haven’t decided what monitoring or tracing solution you are going to use for your next application, have a look at OpenTelemetry. It gives you the freedom to try out different monitoring solutions or even replace your preferred one later in production.

OpenTelemetry Components

OpenTelemetry currently consists of several components like the cross-language specification (APIs/SDKs and the OpenTelemetry Protocol OTLP) for instrumentation and tools to receive, process/transform and export telemetry data. The SDKs are available in several popular languages like Java, C++, C#, Go etc. You can find the complete list of supported languages here.

Additionally, there is a component called the “OpenTelemetry Collector” which is a vendor-agnostic proxy that receives telemetry data from different sources and can transform that data before sending it to the desired backend solution.

Let’s have a closer look at the components of the collector…receivers, processors and exporters:

  • Receivers – A receiver in OpenTelemetry is the component that is responsible for getting data into a collector. It can be used in a push- or pull-based approach. It can support the OTLP protocol or even scrape a Prometheus /metrics endpoint.
  • Processor – Processors are components that let you batch-process, sample, transform and/or enrich your telemetry data that is being received by the collector before handing it over to an exporter. You can add or remove attributes, like for example “personally identifiable information” (PII) or filter data based on regular expressions. A processor is an optional component in a collector pipeline.
  • Exporter – An exporter is responsible for sending data to a backend solution like Prometheus, Azure Monitor, DataDog, Splunk etc.

In the end, it comes down to configuring the collector service with receivers, (optionally) processors and exporters to form a fully functional collector pipeline – official documentation can be found here. The configuration for the demo here is as follows:

receivers:
  otlp:
    protocols:
      http:
      grpc:
processors:
  batch:
exporters:
  logging:
    loglevel: debug
  prometheus:
    endpoint: "0.0.0.0:8889"
service:
  pipelines:
    metrics:
      receivers: [otlp]
      processors: [batch]
      exporters: [logging, prometheus]

The configuration consists of:

  • one OpenTelemetry Protocol (OTLP) receiver, enabled for http and gRPC communication
  • one processor that batches the telemetry data with default values (e.g. a timeout of 200 ms)
  • two exporters piping the data to the console (logging) and exposing a Prometheus /metrics endpoint on 0.0.0.0:8889 (remote-write is also possible)

ASP.NET OpenTelemetry

To demonstrate how to send custom metrics from an ASP.NET application to Prometheus via OpenTelemetry, we first need a service that is exposing those metrics. In this demo, we simply create two custom metrics called otel.demo.metric.gauge1 and otel.demo.metric.gauge2 that will be sent to the console (AddConsoleExporter()) and via the OTLP protocol to a collector service (AddOtlpExporter()) that we’ll introduce later on. The application uses the ASP.NET Minimal API and the code is more or less self-explanatory:

using System.Diagnostics.Metrics;
using OpenTelemetry.Resources;
using OpenTelemetry.Metrics;

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOpenTelemetryMetrics(metricsProvider =>
{
    metricsProvider
        .AddConsoleExporter()
        .AddOtlpExporter()
        .AddMeter("otel.demo.metric")
        .SetResourceBuilder(ResourceBuilder.CreateDefault()
            .AddService(serviceName: "otel.demo", serviceVersion: "0.0.1")
        );
});

var app = builder.Build();
var otel_metric = new Meter("otel.demo.metric", "0.0.1");
var randomNum = new Random();
// Create two metrics
var obs_gauge1 = otel_metric.CreateObservableGauge<int>("otel.demo.metric.gauge1", () =>
{
    return randomNum.Next(10, 80);
});
var obs_gauge2 = otel_metric.CreateObservableGauge<double>("otel.demo.metric.gauge2", () =>
{
    return randomNum.NextDouble();
});

app.MapGet("/otelmetric", () =>
{
    return "Hello, Otel-Metric!";
});

app.Run();

We are currently dealing with custom metrics. Of course, ASP.NET also provides out-of-the-box metrics that you can utilize. Just use the ASP.NET instrumentation feature by adding AddAspNetCoreInstrumentation() when configuring the metrics provider – more on that here.
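For reference, a minimal sketch of what that could look like in this demo’s setup (assuming the OpenTelemetry.Instrumentation.AspNetCore package is referenced; the call is simply added to the existing provider configuration):

builder.Services.AddOpenTelemetryMetrics(metricsProvider =>
{
    metricsProvider
        .AddAspNetCoreInstrumentation() // built-in HTTP server metrics, e.g. request duration
        .AddConsoleExporter()
        .AddOtlpExporter()
        .AddMeter("otel.demo.metric")
        .SetResourceBuilder(ResourceBuilder.CreateDefault()
            .AddService(serviceName: "otel.demo", serviceVersion: "0.0.1")
        );
});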

Demo

Time to connect the dots. First, let’s create a Kubernetes cluster using kind where we can publish the demo service, spin up the OTEL collector instance and run a Prometheus/Grafana environment. If you want to follow along with the tutorial, clone the repo from https://github.com/cdennig/otel-demo and switch to the otel-demo directory.

Create a local Kubernetes Cluster

To create a kind cluster that is able to host a Prometheus environment, execute:

$ kind create cluster --name demo-cluster \
        --config ./kind/kind-cluster.yaml

The YAML configuration file (./kind/kind-cluster.yaml) adjusts some settings of the Kubernetes control plane so that Prometheus is able to scrape the endpoints of the controller services. Next, create the OpenTelemetry Collector instance.

OTEL Collector

In the manifests directory, you’ll find two Kubernetes manifests. One contains the configuration for the collector (otel-collector.yaml). It includes the ConfigMap for the collector configuration (which will be mounted as a volume into the collector container), the deployment of the collector itself and a service exposing the ports for data ingestion (4318 for http and 4317 for gRPC) and the metrics endpoint (8889) that will be scraped later on by Prometheus. It looks as follows:

apiVersion: v1
kind: ConfigMap
metadata:
  name: otel-collector-config
data:
  otel-collector-config: |-
    receivers:
      otlp:
        protocols:
          http:
          grpc:
    exporters:
      logging:
        loglevel: debug
      prometheus:
        endpoint: "0.0.0.0:8889"
    processors:
      batch:
    service:
      pipelines:
        metrics:
          receivers: [otlp]
          processors: [batch]
          exporters: [logging, prometheus]
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: otel-collector
  labels:
    app: otel-collector
spec:
  replicas: 1
  selector:
    matchLabels:
      app: otel-collector
  template:
    metadata:
      labels:
        app: otel-collector
    spec:
      containers:
        - name: collector
          image: otel/opentelemetry-collector:latest
          args:
            - --config=/etc/otelconf/otel-collector-config.yaml
          ports:
            - name: otel-http
              containerPort: 4318
            - name: otel-grpc
              containerPort: 4317
            - name: prom-metrics
              containerPort: 8889
          volumeMounts:
            - name: otel-config
              mountPath: /etc/otelconf
      volumes:
        - name: otel-config
          configMap:
            name: otel-collector-config
            items:
              - key: otel-collector-config
                path: otel-collector-config.yaml
---
apiVersion: v1
kind: Service
metadata:
  name: otel-collector
  labels:
    app: otel-collector
spec:
  type: ClusterIP
  ports:
    - name: otel-http
      port: 4318
      protocol: TCP
      targetPort: 4318
    - name: otel-grpc
      port: 4317
      protocol: TCP
      targetPort: 4317
    - name: prom-metrics
      port: 8889
      protocol: TCP
      targetPort: prom-metrics
  selector:
    app: otel-collector

Let’s apply the manifest.

$ kubectl apply -f ./manifests/otel-collector.yaml

configmap/otel-collector-config created
deployment.apps/otel-collector created
service/otel-collector created

Check that everything runs as expected:

$ kubectl get pods,deployments,services,endpoints

NAME                                  READY   STATUS    RESTARTS   AGE
pod/otel-collector-5cd54c49b4-gdk9f   1/1     Running   0          5m13s

NAME                             READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/otel-collector   1/1     1            1           5m13s

NAME                     TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)                      AGE
service/kubernetes       ClusterIP   10.96.0.1      <none>        443/TCP                      22m
service/otel-collector   ClusterIP   10.96.194.28   <none>        4318/TCP,4317/TCP,8889/TCP   5m13s

NAME                       ENDPOINTS                                         AGE
endpoints/kubernetes       172.19.0.9:6443                                   22m
endpoints/otel-collector   10.244.1.2:8889,10.244.1.2:4318,10.244.1.2:4317   5m13s

Now that the OpenTelemetry infrastructure is in place, let’s add the workload exposing the custom metrics.

ASP.NET Workload

The demo application has been containerized and published to the GitHub container registry for your convenience. So to add the workload to your cluster, simply apply the ./manifests/otel-demo-workload.yaml that contains the Deployment manifest and adds two environment variables to configure the OTEL collector endpoint and the OTLP protocol to use – in this case gRPC.

Here’s the relevant part:

spec:
  containers:
  - image: ghcr.io/cdennig/otel-demo:1.0
    name: otel-demo
    env:
    - name: OTEL_EXPORTER_OTLP_ENDPOINT
      value: "http://otel-collector.default.svc.cluster.local:4317"
    - name: OTEL_EXPORTER_OTLP_PROTOCOL
      value: "grpc"

Apply the manifest now.

$ kubectl apply -f ./manifests/otel-demo-workload.yaml

Remember that the application also logs to the console. Let’s query the logs of the ASP.NET service (note that the pod name will differ in your environment).

$ kubectl logs po/otel-workload-69cc89d456-9zfs7

info: Microsoft.Hosting.Lifetime[14]
      Now listening on: http://[::]:80
info: Microsoft.Hosting.Lifetime[0]
      Application started. Press Ctrl+C to shut down.
info: Microsoft.Hosting.Lifetime[0]
      Hosting environment: Production
info: Microsoft.Hosting.Lifetime[0]
      Content root path: /app/
Resource associated with Metric:
    service.name: otel.demo
    service.version: 0.0.1
    service.instance.id: b84c78be-49df-42fa-bd09-0ad13481d826

Export otel.demo.metric.gauge1, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4260064Z, 2022-08-20T11:40:51.3451557Z] LongGauge
Value: 10

Export otel.demo.metric.gauge2, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4274763Z, 2022-08-20T11:40:51.3451863Z] DoubleGauge
Value: 0.8778815716262417

Export otel.demo.metric.gauge1, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4260064Z, 2022-08-20T11:41:01.3387999Z] LongGauge
Value: 19

Export otel.demo.metric.gauge2, Meter: otel.demo.metric/0.0.1
(2022-08-20T11:40:41.4274763Z, 2022-08-20T11:41:01.3388003Z] DoubleGauge
Value: 0.35409627617124295

Also, let’s check if the data will be sent to the collector. Remember it exposes its /metrics endpoint on 0.0.0.0:8889/metrics. Let’s query it by port-forwarding the service to our local machine.

$ kubectl port-forward svc/otel-collector 8889:8889

Forwarding from 127.0.0.1:8889 -> 8889
Forwarding from [::1]:8889 -> 8889

# in a different session, curl the endpoint
$  curl http://localhost:8889/metrics

# HELP otel_demo_metric_gauge1
# TYPE otel_demo_metric_gauge1 gauge
otel_demo_metric_gauge1{instance="b84c78be-49df-42fa-bd09-0ad13481d826",job="otel.demo"} 37
# HELP otel_demo_metric_gauge2
# TYPE otel_demo_metric_gauge2 gauge
otel_demo_metric_gauge2{instance="b84c78be-49df-42fa-bd09-0ad13481d826",job="otel.demo"} 0.45433988869946285

Great, both components – the metric producer and the collector – are working as expected. Now, let’s spin up the Prometheus/Grafana environment, add the service monitor to scrape the /metrics endpoint and create the Grafana dashboard for it.

Add Kube-Prometheus-Stack

The easiest way to add the Prometheus/Grafana stack to your Kubernetes cluster is to use the kube-prometheus-stack Helm chart. We will use a custom values.yaml file to automatically add the static Prometheus target for the OTEL collector called demo/otel-collector (the kubeEtcd config is only needed in the kind environment):

kubeEtcd:
  service:
    targetPort: 2381
prometheus:
  prometheusSpec:
    additionalScrapeConfigs:
    - job_name: "demo/otel-collector"
      static_configs:
      - targets: ["otel-collector.default.svc.cluster.local:8889"]

Now, add the helm chart to your cluster by executing:

$ helm upgrade --install --wait --timeout 15m \
  --namespace monitoring --create-namespace \
  --repo https://prometheus-community.github.io/helm-charts \
  kube-prometheus-stack kube-prometheus-stack --values ./prom-grafana/values.yaml

Release "kube-prometheus-stack" does not exist. Installing it now.
NAME: kube-prometheus-stack
LAST DEPLOYED: Mon Aug 22 13:53:58 2022
NAMESPACE: monitoring
STATUS: deployed
REVISION: 1
NOTES:
kube-prometheus-stack has been installed. Check its status by running:
  kubectl --namespace monitoring get pods -l "release=kube-prometheus-stack"

Let’s have a look at the Prometheus targets to see whether Prometheus can scrape the OTEL collector endpoint – again, port-forward the service to your local machine and open a browser at http://localhost:9090/targets.

$ kubectl port-forward -n monitoring svc/kube-prometheus-stack-prometheus 9090:9090
Prometheus targets

That looks as expected. Now, let’s head over to Grafana and create a dashboard to display the custom metrics. As done before, port-forward the Grafana service to your local machine and open a browser at http://localhost:3000. Because you need a username/password combination to log in to Grafana, we first need to grab that information from a Kubernetes secret:

# Grafana admin username
$ kubectl get secret -n monitoring kube-prometheus-stack-grafana -o jsonpath='{.data.admin-user}' | base64 --decode

# Grafana password
$ kubectl get secret -n monitoring kube-prometheus-stack-grafana -o jsonpath='{.data.admin-password}' | base64 --decode

# port-forward Grafana service
$ kubectl port-forward -n monitoring svc/kube-prometheus-stack-grafana 3000:80

Forwarding from 127.0.0.1:3000 -> 3000
Forwarding from [::1]:3000 -> 3000

After opening a browser at http://localhost:3000 and a successful login, you should be greeted by the Grafana welcome page.

Grafana Welcome Page

Add a Dashboard for the Custom Metrics

Head to http://localhost:3000/dashboard/import and upload the pre-created dashboard from ./prom-grafana/dashboard.json (or simply paste its content into the textbox). After importing the definition, you should be redirected to the dashboard and see our custom metrics being displayed.

Add preconfigured dashboard
OTEL metrics gauge1 and gauge2

Wrap-Up

This demo showed how to use OpenTelemetry custom metrics in an ASP.NET service, sending telemetry data to an OTEL collector instance that is being scraped by a Prometheus instance. To close the loop, those custom metrics are eventually displayed in a Grafana dashboard. The advantage of this approach is that you use a common, vendor-agnostic layer like OpenTelemetry to generate and collect metrics. Which service the data is finally sent to and which solution is used to analyze it can easily be exchanged via the OTEL exporter configuration – if you don’t want to use Prometheus, you simply adapt the OTEL pipeline and export the telemetry data to e.g. Azure Monitor, DataDog, Splunk etc.

I hope the demo has given you a good introduction to the world of OpenTelemetry. Happy coding! 🙂

Jürgen Gutsch: ASP.NET Core on .NET 7.0 - Output caching

Finally, Microsoft added output caching in ASP.NET Core 7.0 Preview 6.

Output caching is a middleware that caches the entire output of an endpoint instead of executing the endpoint every time it gets requested. This will make your endpoints a lot faster.

This kind of caching is useful for APIs that provide data that doesn't change a lot or that is accessed pretty frequently. It is also useful for more or less static pages, e.g. CMS output. Different caching options help you fine-tune your output cache or vary the cache based on headers or query parameters.

For more dynamic pages or APIs that serve data that changes a lot, it would make sense to cache more specifically at the data level instead of the entire output.
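For illustration, here is a minimal sketch of such data-level caching with IMemoryCache (not part of the output caching feature; LoadProductsAsync() is just a hypothetical stand-in for a real database or API call):

using Microsoft.Extensions.Caching.Memory;

var builder = WebApplication.CreateBuilder(args);
builder.Services.AddMemoryCache(); // cache the data, not the rendered output

var app = builder.Build();

app.MapGet("/products", async (IMemoryCache cache) =>
    await cache.GetOrCreateAsync("products", async entry =>
    {
        entry.AbsoluteExpirationRelativeToNow = TimeSpan.FromSeconds(10);
        return await LoadProductsAsync(); // hypothetical data access call
    }));

app.Run();

// Hypothetical stand-in for a database or API call.
static Task<string[]> LoadProductsAsync() =>
    Task.FromResult(new[] { "Product A", "Product B" });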

Trying output caching

To try output caching I created a new empty web app using the .NET CLI:

dotnet new web -n OutputCaching -o OutputCaching
cd OutputCaching
code .

This creates the new project and opens it in VSCode.

In the Program.cs you now need to add output caching to the ServiceCollection as well as using the middleware on the app:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddOutputCache();

var app = builder.Build();

app.UseOutputCache();

app.MapGet("/", () => "Hello World!");

app.Run();

This enables output caching in your application.

Let's use output caching with the classic example that displays the current date and time.

app.MapGet("/time", () => DateTime.Now.ToString());

This creates a new endpoint that displays the current date and time. Every time you refresh the result in the browser, you get a new time displayed. No magic here. Now we are going to add some caching magic to another endpoint:

app.MapGet("/time_cached", () => DateTime.Now.ToString())
	.CacheOutput();

If you access this endpoint and refresh it in the browser, the time will not change. The initial output gets cached and you'll receive the cached output every time you refresh the browser.

This is good for more or less static outputs that don't change a lot. But what if you have a frequently used API that just needs a short cache to reduce the calculation effort or the database access? You can reduce the caching time to, let's say, 10 seconds:

 builder.Services.AddOutputCache(options =>
 {
     options.DefaultExpirationTimeSpan = TimeSpan.FromSeconds(10);
 });

This reduces the default cache expiration timespan to 10 seconds.

If you now keep refreshing the endpoint we created previously, you'll get a new time every 10 seconds. This means the cache gets invalidated every 10 seconds. Using the options, you can also define the maximum size of a cached body or the overall cache size.
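A minimal sketch of those options (based on the OutputCacheOptions properties in the .NET 7 previews; the values are just examples):

builder.Services.AddOutputCache(options =>
{
    options.DefaultExpirationTimeSpan = TimeSpan.FromSeconds(10);
    options.MaximumBodySize = 64 * 1024 * 1024; // largest response body that may be cached
    options.SizeLimit = 100 * 1024 * 1024;      // upper bound for the whole output cache store
});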

If you provide a more dynamic API that receives parameters via query strings, you can vary the cache by the query string:

app.MapGet("/time_refreshable", () => DateTime.Now.ToString())
    .CacheOutput(p => p.VaryByQuery("time"));

This adds another endpoint that varies the cache by the query string argument called "time". This means the query string ?time=now caches a different result than the query string ?time=later or ?time=before.

The VaryByQuery function allows you to add more than one query string:

app.MapGet("/time_refreshable", () => DateTime.Now.ToString())
    .CacheOutput(p => p.VaryByQuery("time", "culture", "format"));

In case you'd like to vary the cache by HTTP headers, you can do this the same way using the VaryByHeader function:

app.MapGet("/time_cached", () => DateTime.Now.ToString())
    .CacheOutput(p => p.VaryByHeader("content-type"));

Further reading

If you'd like to explore more complex examples of output caching, it makes sense to have a look at the samples project:

https://github.com/dotnet/aspnetcore/blob/main/src/Middleware/OutputCaching/samples/OutputCachingSample/Startup.cs

Code-Inside Blog: How to run an Azure App Service WebJob with parameters

We are using WebJobs in our Azure App Service deployment and they are pretty “easy” for the most part. Just register a WebJob or deploy your .exe/.bat/.ps1/... under the \site\wwwroot\app_data\Jobs\triggered folder and it should execute as described in the settings.job.

If you put any executable in this WebJob folder, it will be executed as planned.

Problem: Parameters

If you have a my-job.exe, then this will be invoked from the runtime. But what if you need to invoke it with a parameter like my-job.exe -param "test"?

Solution: run.cmd

The WebJob environment is “greedy”: it searches for a run.cmd (or run.exe), and if one is found, it will be executed, no matter which other .exe files are there. So stick to a run.cmd and use it to invoke your actual executable like this:

echo "Invoke my-job.exe with parameters - Start"

..\MyJob\my-job.exe -param "test"

echo "Invoke my-job.exe with parameters - Done"

Be aware that the path must “match”. We use this run.cmd approach in combination with the is_in_place option (see here) and are happy with the results.

A more detailed explanation can be found here.

Hope this helps!

Sebastian Seidel: Combining Lottie Animations with Gestures and Scrolling

Matt Goldman revived #XamarinUIJuly and renamed it to #MAUIUIJuly, where each day in July someone from the .NET MAUI community publishes a blog post or video showing some incredible UI magic in MAUI. In this contribution I will show you how to combine Lottie animations with gestures and scrollable containers to spice up your .NET MAUI App UI!

Christina Hirth: DDD Europe 2022 Watch-List

I have been attending conferences and open spaces for more than 15 years, but I can’t remember ever being keener to go to a conference than DDD EU this year. Still, I hadn’t imagined that my “watch later” list would end up almost as long as the list of talks – including the ones from the DDD Foundations (2 pre-conference days).

I was so full of expectations because I would be speaking at an international conference for the first time, and because of the opportunity to meet all those wonderful people who have become friends over the last two years! (I won’t even try to list the names because I would surely miss a few). The most often repeated sentence on those five days wasn’t “Can you see my screen?” anymore but “Do you know that we never met before IRL?!” 🤗

This was only one of those great evenings meeting old friends and making new ones 🙂 (After two years of collaboration, the Virtual DDD organizers have finally met too!)

But now back to the lists:

Talks I haven’t seen but I should:

  1. DDD Foundations with clever people and interesting talks which should/could land in our ddd-crew repositories. (In general, the sessions are not too long; I will probably browse through all of them.)
  2. Main Conference

Talks to revisit

This list is not a list of “good talks”; I can’t remember attending any talk I wished I hadn’t. But these here need to be watched and listened to more than once (at least I do).

Domain-Driven Design in ProductLand – Alberto Brandolini

Alberto speaking the truth about product development is exactly my kind of radical candour.

Independent Service Heuristics: a rapid, business-friendly approach to flow-oriented boundaries – Matthew Skelton and Nick Tune

The tweet tells it all: an essential new method in our toolbox

The Fractal Geometry of Software Design – Vladik Khononov

Mind-blowing. I will probably have to re-watch this video a couple of times until I get my brain around all of the facets Vladik touches on in his talk.

Sociotechnical Systems Design for the “Digital Coal Mines” – Trond Hjorteland

This talk is not something I haven’t understood – I understand it completely. I will still re-watch it because it contains historical and current arguments and requirements for employers on how they have to rethink their organizational models.

This is the longest list of videos I have ever bookmarked (and published as a suggestion for you all). Still, it is how it is: DDD EU 2022 was, in my opinion, the most mature conference I have ever participated in.

At the same time, there is always time for jokes when Mathias Verraes and Nick Tune are around (and we are around them, of course) 😃

Stefan Henneken: IEC 61131-3: SOLID – The Liskov Substitution Principle

“The Liskov Substitution Principle (LSP) demands that derived FBs are always compatible with their base FB. Derived FBs must behave like their respective base FB. A derived FB may extend its base FB, but not restrict it.” This is the core statement of the Liskov Substitution Principle (LSP), which Barbara Liskov formulated as early as the late 1980s. Although the Liskov Substitution Principle (LSP) is one of the simpler SOLID principles, it is violated very frequently. The following example shows why the Liskov Substitution Principle (LSP) is important.

Initial situation

Once again, the example that was developed and optimized in the two previous posts is used. The core of the example is three lamp types, which are represented by the function blocks FB_LampOnOff, FB_LampSetDirect and FB_LampUpDown. The interface I_Lamp and the abstract function block FB_Lamp ensure a clean decoupling between the respective lamp types and the higher-level controller FB_Controller.

FB_Controller no longer accesses concrete instances, but only a reference to the abstract function block FB_Lamp. To resolve the tight coupling, IEC 61131-3: SOLID – The Dependency Inversion Principle (DIP) is applied.

To implement the required behavior, each lamp type provides its own methods. For this reason, each lamp type has a corresponding adapter function block (FB_LampOnOffAdapter, FB_LampSetDirectAdapter and FB_LampUpDownAdapter), which is responsible for the mapping between the abstract lamp (FB_Lamp) and the concrete lamp types (FB_LampOnOff, FB_LampSetDirect and FB_LampUpDown). This optimization is supported by IEC 61131-3: SOLID – The Single Responsibility Principle (SRP).

Extending the implementation

The three required lamp types can be represented well by the current software design. Nevertheless, it can happen that extensions which look simple at first glance lead to difficulties later on. The new lamp type FB_LampSetDirectDALI serves as an example here.

DALI stands for Digital Addressable Lighting Interface and is a protocol for controlling lighting equipment. In principle, the new block behaves like FB_LampSetDirect, but with DALI the output value is not specified as 0-100 % but as 0-254.

Optimization and analysis of the extensions

Which approaches are available to implement this extension? The different approaches will also be analyzed in more detail.

Approach 1: Quick & Dirty

High time pressure can tempt you to implement the extension quick & dirty. Since FB_LampSetDirect behaves similarly to the new DALI lamp type, FB_LampSetDirectDALI inherits from FB_LampSetDirect. To enable the value range of 0-254, the method SetLightLevel() of FB_LampSetDirectDALI is overridden.

METHOD PUBLIC SetLightLevel
VAR_INPUT
  nNewLightLevel    : BYTE(0..254);
END_VAR
nLightLevel := nNewLightLevel;

The new adapter function block (FB_LampSetDirectDALIAdapter) is also adjusted so that its methods take the value range of 0-254 into account.

As an example, the methods DimUp() and On() are shown here:

METHOD PUBLIC DimUp
IF (fbLampSetDirectDALI.nLightLevel <= 249) THEN
  fbLampSetDirectDALI.SetLightLevel(fbLampSetDirectDALI.nLightLevel + 5);
END_IF
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(fbLampSetDirectDALI.nLightLevel);
END_IF
METHOD PUBLIC On
fbLampSetDirectDALI.SetLightLevel(254);
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(fbLampSetDirectDALI.nLightLevel);
END_IF

The simplified UML diagram shows the integration of the function blocks for the DALI lamp into the existing software design:

(abstract elements are shown in italics)

Sample 1 (TwinCAT 3.1.4024) on GitHub

This approach implements the requirements quickly and easily in a pragmatic way. However, it also introduces some peculiarities which make it harder to use the blocks in an application.

For example, how should a user interface behave if it connects to an instance of FB_Controller and FB_AnalogValue outputs a value of 100? Does 100 mean that the current lamp is at 100 %, or is the new DALI lamp outputting a value of 100, which would be well below 100 %?

The user of FB_Controller must always know the active lamp type in order to interpret the current output value correctly. FB_LampSetDirectDALI inherits from FB_LampSetDirect, but changes its behavior, in this example by overriding the method SetLightLevel(). The derived FB (FB_LampSetDirectDALI) behaves differently from the base FB (FB_LampSetDirect). FB_LampSetDirect can no longer be replaced (substituted) by FB_LampSetDirectDALI. The Liskov Substitution Principle (LSP) is violated.

Approach 2: Optionality

With this approach, each lamp type contains a property that provides information about the exact behavior of the function block.

In .NET, for example, this approach is used in the abstract class System.IO.Stream. The Stream class serves as the base class for specialized streams (e.g. FileStream and NetworkStream) and defines the most important methods and properties. These include the methods Write(), Read() and Seek(). Since not every stream can provide all functions, the properties CanRead, CanWrite and CanSeek indicate whether the corresponding method is supported by the respective stream. With NetworkStream, for example, it can be checked at runtime whether writing to the stream is possible or whether it is a read-only stream.
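A small C# sketch of this capability pattern (checking the Can* properties before calling the corresponding methods):

using System;
using System.IO;

class StreamCapabilities
{
    static void CopyIfPossible(Stream source, Stream target)
    {
        // Not every Stream supports every operation, so check the
        // capability properties before calling Read()/Write()/Seek().
        if (!source.CanRead)
            throw new NotSupportedException("Source stream cannot be read.");
        if (!target.CanWrite)
            throw new NotSupportedException("Target stream is read-only.");

        source.CopyTo(target);

        // Seek() is only allowed when CanSeek is true (e.g. FileStream, but not NetworkStream).
        if (target.CanSeek)
            target.Seek(0, SeekOrigin.Begin);
    }

    static void Main()
    {
        using var source = new MemoryStream(new byte[] { 1, 2, 3 });
        using var target = new MemoryStream();
        CopyIfPossible(source, target);
        Console.WriteLine($"Copied {target.Length} bytes.");
    }
}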

In our example, I_Lamp is extended by the property bIsDALIDevice.

As a result, FB_Lamp, and thus every adapter function block, also receives this property. Since the functionality of bIsDALIDevice is the same in all adapter function blocks, bIsDALIDevice is not declared as abstract in FB_Lamp. This means that the adapter function blocks do not have to implement this property themselves. The functionality of bIsDALIDevice is inherited from FB_Lamp by all adapter function blocks.

For FB_LampSetDirectDALIAdapter, the backing variable of the property bIsDALIDevice is set to TRUE in the method FB_init().

METHOD FB_init : BOOL
VAR_INPUT
  bInitRetains  : BOOL;
  bInCopyCode   : BOOL;
END_VAR
SUPER^._bIsDALIDevice := TRUE;

In all other adapter function blocks, _bIsDALIDevice keeps its initialization value (FALSE). Using the method FB_init() is not necessary in these adapter function blocks.

The user of FB_Controller (block MAIN) can now query at runtime whether the current lamp is a DALI lamp or not. If this is the case, the output value is scaled to 0-100 % accordingly.

IF (__ISVALIDREF(fbController.refActiveLamp) AND_THEN fbController.refActiveLamp.bIsDALIDevice) THEN
  nLightLevel := TO_BYTE(fbController.fbActualValue.nValue * 100.0 / 254.0);
ELSE
  nLightLevel := fbController.fbActualValue.nValue;
END_IF

Note: It is important to use the operator AND_THEN instead of AND here. This way, the expression to the right of AND_THEN is only evaluated if the first operand (to the left of AND_THEN) is TRUE. This matters here because otherwise, with an invalid reference to the active lamp (refActiveLamp), the expression fbController.refActiveLamp.bIsDALIDevice would terminate the execution of the program.

The UML diagram shows how FB_Lamp receives the property bIsDALIDevice via the interface I_Lamp, and how it is thus inherited by all adapter function blocks:

(abstract elements are shown in italics)

Sample 2 (TwinCAT 3.1.4024) on GitHub

With this approach, too, the Liskov Substitution Principle (LSP) is still violated. FB_LampSetDirectDALI still behaves differently from FB_LampSetDirect. This difference must be taken into account by the user (querying bIsDALIDevice) and corrected (scaling to 0-100 %). This is easily overlooked or implemented incorrectly.

Approach 3: Harmonization

To stop violating the Liskov Substitution Principle (LSP), the inheritance between FB_LampSetDirect and FB_LampSetDirectDALI is removed. Even though both function blocks look very similar at first glance, inheritance should not be used here.

The adapter function blocks ensure that all lamp types can be controlled with the same methods. However, there are still differences in how the output value is represented.

In FB_Controller, the output value of the active lamp is represented by an instance of FB_AnalogValue. A new output value is transmitted via the method Update(). So that the output value is also represented uniformly, it is scaled to 0-100 % before the method Update() is called. The necessary adjustments are made exclusively in the methods DimDown(), DimUp(), Off() and On() of FB_LampSetDirectDALIAdapter.

As an example, the method On() is shown here:

METHOD PUBLIC On
fbLampSetDirectDALI.SetLightLevel(254);
IF (_ipObserver <> 0) THEN
  _ipObserver.Update(TO_BYTE(fbLampSetDirectDALI.nLightLevel * 100.0 / 254.0));
END_IF

The adapter function block contains all the necessary statements, so that the DALI lamp behaves as expected from the outside. FB_LampSetDirectDALI remains unchanged with this approach.

(abstract elements are shown in italics)

Sample 3 (TwinCAT 3.1.4024) on GitHub

Analysis of the optimization

Various techniques allow us to implement the desired extension without violating the Liskov Substitution Principle (LSP). A prerequisite for violating the LSP is inheritance. If the LSP is violated, this may be an indication of a poor inheritance hierarchy within the software design.

Why is it important to comply with the Liskov Substitution Principle (LSP)? Function blocks can also be passed as parameters. If a POU expects a parameter of type FB_LampSetDirect, then, when inheritance is used, FB_LampSetDirectDALI could also be passed. However, the method SetLightLevel() works differently in the two function blocks. Such differences can lead to undesired behavior within a plant.

The definition of the Liskov Substitution Principle

Let q(x) be a provable property of objects x of type T. Then q(y) should be true for objects y of type S, where S is a subtype of T.

This is, expressed a little more formally, the definition of the Liskov Substitution Principle (LSP) by Barbara Liskov. As mentioned above, this principle was defined as early as the late 1980s. The complete paper was published under the title Data Abstraction and Hierarchy.

Barbara Liskov received her doctorate in computer science in 1968, as one of the first women to do so. In 2008 she received the Turing Award, also as one of the first women. Early on, she worked on object-oriented programming and thus also on the inheritance of classes (function blocks).

Inheritance puts two function blocks into a specific relationship with each other. Inheritance describes an is-a relationship. If FB_LampSetDirectDALI inherits from FB_LampSetDirect, then the DALI lamp is a (normal) lamp, extended by special (additional) functions. Wherever FB_LampSetDirect is used, FB_LampSetDirectDALI could also be used; FB_LampSetDirect can be substituted by FB_LampSetDirectDALI. If this is not guaranteed, the use of inheritance should be questioned at this point.

Robert C. Martin included this principle in the SOLID principles. In the book (Amazon affiliate link *) Clean Architecture: Das Praxis-Handbuch für professionelles Softwaredesign, this principle is explained further and extended to the field of software architecture.

Summary

By extending the example above, you have become acquainted with the Liskov Substitution Principle (LSP). Complex inheritance hierarchies in particular are prone to violating this principle. Although the formal definition of the Liskov Substitution Principle (LSP) sounds complicated, the core statement of this principle is easy to understand.

In the next post, our example will be extended once again. The Interface Segregation Principle (ISP) will play a central role there.

Daniel Schädler: Quick tip: Creating Azure resources reliably with Terraform

In this article I show how you can provision and tear down resources in Azure with Terraform at a cadence of 5 minutes. This scenario can easily occur with BDD tests, where an ad-hoc test infrastructure has to be spun up.

Prerequisites

Initial situation

During provisioning in the pipeline, it can happen again and again that a resource still exists when it has to be recreated quickly. Then you see the following error message:

    │ Error: waiting for creation/update of Server: (Name "sqldbserver" / Resource Group "rg-switzerland"): Code="NameAlreadyExists" Message="The name 'sqldbserver.database.windows.net' already exists. Choose a different name."
    │
    │   with azurerm_mssql_server.sqlsrv,
    │   on main.tf line 52, in resource "azurerm_mssql_server" "sqlsrv":
    │   52: resource "azurerm_mssql_server" "sqlsrv" {

Annoying, if you want to rely on the test environment always being set up in the same way.

The solution

To get this problem under control, it is enough to give the resources a random name suffix. The Terraform random_integer resource provides a simple remedy for this. All you need in the Terraform script is the following resource:

    resource "random_integer" "salt"{
        min = 1
        max = 99
    }

Because random numbers between 1 and 99 are generated, the probability that a resource with the same name already exists and causes an error is reduced.

Creating a resource group with a random name suffix would then look like this:

# Create a resource group
    resource "azurerm_resource_group" "rg" {
      name     = "rg-swiss-${random_integer.salt.result}"
      location = "switzerlandnorth"  
    }

A test script that ran five times, with a realistic pause of 5 minutes between executions, did not produce any errors.

    $totalseconds = 0;
    $stopwatch = New-Object -TypeName System.Diagnostics.Stopwatch
    
    for ($index = 0; $index -lt 5; $index++) {
        $stopwatch.Reset()
        $stopwatch.Start()
    
        $executable = "terraform.exe"
            
        $initargument = "init"
        $plangargument = "plan -out testplan"
        $applyargument = "apply -auto-approve testplan"
        $destroyargeument = "destroy -auto-approve"
        
        Start-Process $executable -ArgumentList $initargument -Wait -NoNewWindow
        Start-Process $executable -ArgumentList $plangargument -Wait -NoNewWindow
        Start-Process $executable -ArgumentList $applyargument -Wait -NoNewWindow
        Start-Process $executable -ArgumentList $destroyargeument -Wait -NoNewWindow
    
        $stopwatch.Stop()
        $totalseconds += $stopwatch.Elapsed.TotalSeconds
        Start-Sleep -Seconds 480
    }
    
    Write-Host "Verstrichene Zeit $totalseconds"

Of course, the duration can depend on the current availability of the Azure services. When assigning names for the resources, it is always advisable to consult the Azure API documentation of the individual resources so as not to run into errors with the length of the name. In contrast to earlier times, when names were still important, in times of DevOps practices resources are no longer intended for a long existence anyway.

Conclusion

This solution ensures that you do not have to build workarounds that only work for a short period of time. I hope you enjoyed the article.

Daniel Schädler: Quickstart: Deploying a static website on Azure

In this article I want to show the steps for publishing a static website, for example a “landing page”, with Terraform and Azure.

Prerequisites

  • An Azure account has been set up.
  • The Azure CLI tools must be installed for the respective target system.
  • Terraform is installed and configured for access to Azure.

Procedure

The following steps are carried out with Terraform so that a static website can be published on Azure.

Creating the StorageAccount and static web app

In the first step, a StorageAccount is created.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}

  subscription_id = "YOUR SUBSCRIPTION"
  client_id       = "YOU APPLICATION ID"
  client_secret   = "YOUR APPLICATION SECRET"
  tenant_id       = "YOUR TENANT ID"
}

resource "azurerm_resource_group" "rg" {
  name = "terrfaform-playground"
  # West Europe, because static websites are not yet
  # available in Switzerland.
  location = "westeurope"
}

resource "azurerm_storage_account" "storage" {
  account_tier = "Standard"
  account_kind = "StorageV2"
  account_replication_type = "LRS"
  location = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  name = "schaedldstorage"  
  allow_nested_items_to_be_public = true
  static_website {
    index_document = "index.html"
  }
}

The commands terraform init, terraform plan -out sampleplan, terraform apply sampleplan and terraform destroy (be careful with the latter in production) are then executed. These are used repeatedly throughout the whole example.

Terraform init


terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/azurerm versions matching "3.0.0"...
- Installing hashicorp/azurerm v3.0.0...
- Installed hashicorp/azurerm v3.0.0 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.

Terraform plan

     terraform plan -out simpleplan

Terraform used the selected providers to generate the following execution plan. Resource actions are    
indicated with the following symbols:
  + create

Terraform will perform the following actions:

  # azurerm_resource_group.rg will be created
  + resource "azurerm_resource_group" "rg" {
      + id       = (known after apply)
      + location = "westeurope"
      + name     = "terrfaform-playground"
    }

  # azurerm_static_site.website will be created
  + resource "azurerm_static_site" "website" {
      + api_key             = (known after apply)
      + default_host_name   = (known after apply)
      + id                  = (known after apply)
      + location            = "westeurope"
      + name                = "sample-web-app"
      + resource_group_name = "terrfaform-playground"
      + sku_size            = "Free"
      + sku_tier            = "Free"
    }

  # azurerm_storage_account.storage will be created
  + resource "azurerm_storage_account" "storage" {
      + access_tier                       = (known after apply)
      + account_kind                      = "StorageV2"
      + account_replication_type          = "LRS"
      + account_tier                      = "Standard"
      + allow_nested_items_to_be_public   = true
      + enable_https_traffic_only         = true
      + id                                = (known after apply)
      + infrastructure_encryption_enabled = false
      + is_hns_enabled                    = false
      + large_file_share_enabled          = (known after apply)
      + location                          = "westeurope"
      + min_tls_version                   = "TLS1_2"
      + name                              = "schaedldstorage"
      + nfsv3_enabled                     = false
      + primary_access_key                = (sensitive value)
      + primary_blob_connection_string    = (sensitive value)
      + primary_blob_endpoint             = (known after apply)
      + primary_blob_host                 = (known after apply)
      + primary_connection_string         = (sensitive value)
      + primary_dfs_endpoint              = (known after apply)
      + primary_dfs_host                  = (known after apply)
      + primary_file_endpoint             = (known after apply)
      + primary_file_host                 = (known after apply)
      + primary_location                  = (known after apply)
      + primary_queue_endpoint            = (known after apply)
      + primary_queue_host                = (known after apply)
      + primary_table_endpoint            = (known after apply)
      + primary_table_host                = (known after apply)
      + primary_web_endpoint              = (known after apply)
      + primary_web_host                  = (known after apply)
      + queue_encryption_key_type         = "Service"
      + resource_group_name               = "terrfaform-playground"
      + secondary_access_key              = (sensitive value)
      + secondary_blob_connection_string  = (sensitive value)
      + secondary_blob_endpoint           = (known after apply)
      + secondary_blob_host               = (known after apply)
      + secondary_connection_string       = (sensitive value)
      + secondary_dfs_endpoint            = (known after apply)
      + secondary_dfs_host                = (known after apply)
      + secondary_file_endpoint           = (known after apply)
      + secondary_file_host               = (known after apply)
      + secondary_location                = (known after apply)
      + secondary_queue_endpoint          = (known after apply)
      + secondary_queue_host              = (known after apply)
      + secondary_table_endpoint          = (known after apply)
      + secondary_table_host              = (known after apply)
      + secondary_web_endpoint            = (known after apply)
      + secondary_web_host                = (known after apply)
      + shared_access_key_enabled         = true
      + table_encryption_key_type         = "Service"

      + blob_properties {
          + change_feed_enabled      = (known after apply)
          + default_service_version  = (known after apply)
          + last_access_time_enabled = (known after apply)
          + versioning_enabled       = (known after apply)

          + container_delete_retention_policy {
              + days = (known after apply)
            }

          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + delete_retention_policy {
              + days = (known after apply)
            }
        }

      + network_rules {
          + bypass                     = (known after apply)
          + default_action             = (known after apply)
          + ip_rules                   = (known after apply)
          + virtual_network_subnet_ids = (known after apply)

          + private_link_access {
              + endpoint_resource_id = (known after apply)
              + endpoint_tenant_id   = (known after apply)
            }
        }

      + queue_properties {
          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + hour_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }

          + logging {
              + delete                = (known after apply)
              + read                  = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
              + write                 = (known after apply)
            }

          + minute_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }
        }

      + routing {
          + choice                      = (known after apply)
          + publish_internet_endpoints  = (known after apply)
          + publish_microsoft_endpoints = (known after apply)
        }

      + share_properties {
          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + retention_policy {
              + days = (known after apply)
            }

          + smb {
              + authentication_types            = (known after apply)
              + channel_encryption_type         = (known after apply)
              + kerberos_ticket_encryption_type = (known after apply)
              + versions                        = (known after apply)
            }
        }
    }

Plan: 3 to add, 0 to change, 0 to destroy.

Terraform apply


terraform apply sampleplan    
azurerm_resource_group.rg: Creating...
azurerm_resource_group.rg: Creation complete after 0s [id=/subscriptions/YOUR SUBSCRIPTION/resourceGroups/terrfaform-playground]
azurerm_storage_account.storage: Creating...
azurerm_storage_account.storage: Still creating... [11s elapsed]
azurerm_storage_account.storage: Still creating... [21s elapsed]
azurerm_storage_account.storage: Creation complete after 22s [id=/subscriptions/YOUR SUBSCRIPTION/resourceGroups/terrfaform-playground/providers/Microsoft.Storage/storageAccounts/schaedldstorage]
azurerm_static_site.website: Creating...
azurerm_static_site.website: Creation complete after 3s [id=/subscriptions/YOUR SUBSCRIPTION/resourceGroups/terrfaform-playground/providers/Microsoft.Web/staticSites/sample-web-app]        

Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

The StorageAccount has been created in Azure.

Azure: StorageAccount created.

Now the remaining elements can be added. Following the guide for hosting static websites, it is also checked whether the Terraform configuration matches the documented one.

Azure: static website enabled.

Uploading the landing page

Azure does not offer the possibility to create an object in the StorageAccount directly with Terraform, so another way of publishing has to be chosen. One of the three documented options can be used:

  • Via the portal (little automation potential)
  • Via the Azure CLI tools
  • Via PowerShell

The PowerShell script is quickly explained:

$storageAccount = Get-AzStorageAccount -Name "schaedldstorage" -ResourceGroupName "terrfaform-playground"
$storageAccountContext = $storageAccount.Context
Set-AzStorageBlobContent -Context $storageAccountContext -Blob "index.html" -File "..\content\index.html" -Container `$web -Properties @{ ContentType="text/html; charset=utf-8;"}

When the StorageAccount is created, a container named $web is created automatically. This container can then be used for hosting (the script copies the file into this container).

Azure: web container

Conclusion

With very little effort, a first point of contact for a new company can be provided on Azure. This is just an example and does not yet have any security features enabled (cf. Hosting a static website in Azure Storage). However, doing all of this with Terraform alone is less simple than it is on AWS.

Daniel Schädler: Quickstart with Azure and Terraform

What do you need for this?

The following prerequisites must be met:

  1. An app is created in the Azure Portal.
  2. The keys are copied.
  3. Access is verified with the Azure CLI tools.
  4. Terraform must be installed.

Creating an app in the Azure Portal

To be able to create resources on Azure in an automated way, an app must first be created in Active Directory.

  1. In the Azure Portal, open Active Directory and select App registrations.
Azure: app registration
  2. Now add a new registration.
Azure: add app registration.
  3. Then, in the Certificates & secrets menu, create a new client secret.
Azure: create client secret.
  4. Now it is important to note down both keys.
  5. The client ID must also be written down. You can find it in the app registration overview.
Azure: create keys.

Once you have logged in with the Azure CLI tools, these values can be found with the following command:

az account list
  6. Now a role must be assigned to the created app via the subscription's access control, so that the app actually works. In this example the role "Contributor" was used (which role is actually assigned depends on the use case in your company). This is done via "Add role assignment".
Azure: add role assignment.
  7. Then the previously created app must be added. It can be searched for and added using the search field.
Azure: add application.

Verifying access with the Azure CLI tools

First of all, the Azure CLI tools must already be installed.

As soon as the Azure CLI tools are installed, you can try to log in with the service principal.

PS C:\Git-Repos\blogposts> az login --service-principal -u "YOUR APP ID"  -p "APP ID SECRET" --tenant "YOUR TENANT ID"
[
  {
    "cloudName": "AzureCloud",
    "homeTenantId": "YOUR TENANT",
    "id": "YOUR SUBSCRIPTION",
    "isDefault": true,
    "managedByTenants": [],
    "name": "Pay-As-You-Go",
    "state": "Enabled",
    "tenantId": "YOUR TENANT",
    "user": {
      "name": "YOUR APP ID",
      "type": "servicePrincipal"
    }
  }
]
PS C:\Git-Repos\blogposts> 

Terraform setup

For Terraform to work, the previously noted keys must be entered. This configuration then looks as follows:

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}

  subscription_id = "ABONNEMENT ID"
  client_id       = "ZUVOR ERSTELLTE APPLIKATIONS ID"
  client_secret   = "ZUVOR ERSTELLTES GEHEIMNIS IN DER APP"
  tenant_id       = "TENANT ID"
}

To check here as well whether the connection to Azure works, a StorageAccount can also be created with the following Terraform resources.

terraform {
  required_providers {
    azurerm = {
      source  = "hashicorp/azurerm"
      version = "=3.0.0"
    }
  }
}

# Configure the Microsoft Azure Provider
provider "azurerm" {
  features {}

  subscription_id = "YOURSUBSCRIPTION"
  client_id       = "APP ID"
  client_secret   = "SECRET ID"
  tenant_id       = "YOUR TENANT"
}

resource "azurerm_resource_group" "rg" {
  name = "terrfaform-playground"
  location = "switzerlandnorth"
}

resource "azurerm_storage_account" "storage" {
  account_tier = "Standard"
  account_replication_type = "LRS"
  location = azurerm_resource_group.rg.location
  resource_group_name = azurerm_resource_group.rg.name
  name = "schaedldstorage"  
}

Terraform is then run step by step, starting with:

terraform init                

Initializing the backend...

Initializing provider plugins...
- Reusing previous version of hashicorp/azurerm from the dependency lock file
- Using previously-installed hashicorp/azurerm v3.0.0

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
terraform plan -out sampleplan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  + create

Terraform will perform the following actions:

  # azurerm_resource_group.rg will be created
  + resource "azurerm_resource_group" "rg" {
      + id       = (known after apply)
      + location = "switzerlandnorth"
      + name     = "terrfaform-playground"
    }

  # azurerm_storage_account.storage will be created
  + resource "azurerm_storage_account" "storage" {
      + access_tier                       = (known after apply)
      + account_kind                      = "StorageV2"
      + account_replication_type          = "LRS"
      + account_tier                      = "Standard"
      + allow_nested_items_to_be_public   = true
      + enable_https_traffic_only         = true
      + id                                = (known after apply)
      + infrastructure_encryption_enabled = false
      + is_hns_enabled                    = false
      + large_file_share_enabled          = (known after apply)
      + location                          = "switzerlandnorth"
      + min_tls_version                   = "TLS1_2"
      + name                              = "schaedldstorage"
      + nfsv3_enabled                     = false
      + primary_access_key                = (sensitive value)
      + primary_blob_connection_string    = (sensitive value)
      + primary_blob_endpoint             = (known after apply)
      + primary_blob_host                 = (known after apply)
      + primary_connection_string         = (sensitive value)
      + primary_dfs_endpoint              = (known after apply)
      + primary_dfs_host                  = (known after apply)
      + primary_file_endpoint             = (known after apply)
      + primary_file_host                 = (known after apply)
      + primary_location                  = (known after apply)
      + primary_queue_endpoint            = (known after apply)
      + primary_queue_host                = (known after apply)
      + primary_table_endpoint            = (known after apply)
      + primary_table_host                = (known after apply)
      + primary_web_endpoint              = (known after apply)
      + primary_web_host                  = (known after apply)
      + queue_encryption_key_type         = "Service"
      + resource_group_name               = "terrfaform-playground"
      + secondary_access_key              = (sensitive value)
      + secondary_blob_connection_string  = (sensitive value)
      + secondary_blob_endpoint           = (known after apply)
      + secondary_blob_host               = (known after apply)
      + secondary_connection_string       = (sensitive value)
      + secondary_dfs_endpoint            = (known after apply)
      + secondary_dfs_host                = (known after apply)
      + secondary_file_endpoint           = (known after apply)
      + secondary_file_host               = (known after apply)
      + secondary_location                = (known after apply)
      + secondary_queue_endpoint          = (known after apply)
      + secondary_queue_host              = (known after apply)
      + secondary_table_endpoint          = (known after apply)
      + secondary_table_host              = (known after apply)
      + secondary_web_endpoint            = (known after apply)
      + secondary_web_host                = (known after apply)
      + shared_access_key_enabled         = true
      + table_encryption_key_type         = "Service"

      + blob_properties {
          + change_feed_enabled      = (known after apply)
          + default_service_version  = (known after apply)
          + last_access_time_enabled = (known after apply)
          + versioning_enabled       = (known after apply)

          + container_delete_retention_policy {
              + days = (known after apply)
            }

          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + delete_retention_policy {
              + days = (known after apply)
            }
        }

      + network_rules {
          + bypass                     = (known after apply)
          + default_action             = (known after apply)
          + ip_rules                   = (known after apply)
          + virtual_network_subnet_ids = (known after apply)

          + private_link_access {
              + endpoint_resource_id = (known after apply)
              + endpoint_tenant_id   = (known after apply)
            }
        }

      + queue_properties {
          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + hour_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }

          + logging {
              + delete                = (known after apply)
              + read                  = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
              + write                 = (known after apply)
            }

          + minute_metrics {
              + enabled               = (known after apply)
              + include_apis          = (known after apply)
              + retention_policy_days = (known after apply)
              + version               = (known after apply)
            }
        }

      + routing {
          + choice                      = (known after apply)
          + publish_internet_endpoints  = (known after apply)
          + publish_microsoft_endpoints = (known after apply)
        }

      + share_properties {
          + cors_rule {
              + allowed_headers    = (known after apply)
              + allowed_methods    = (known after apply)
              + allowed_origins    = (known after apply)
              + exposed_headers    = (known after apply)
              + max_age_in_seconds = (known after apply)
            }

          + retention_policy {
              + days = (known after apply)
            }

          + smb {
              + authentication_types            = (known after apply)
              + channel_encryption_type         = (known after apply)
              + kerberos_ticket_encryption_type = (known after apply)
              + versions                        = (known after apply)
            }
        }
    }

Plan: 2 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 

Saved the plan to: sampleplan

To perform exactly these actions, run the following command to apply:
    terraform apply "sampleplan"
terraform apply sampleplan    
azurerm_resource_group.rg: Creating...
azurerm_resource_group.rg: Creation complete after 1s [id=/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground]
azurerm_storage_account.storage: Creating...
azurerm_storage_account.storage: Still creating... [10s elapsed]
azurerm_storage_account.storage: Still creating... [20s elapsed]
azurerm_storage_account.storage: Creation complete after 21s [id=/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground/providers/Microsoft.Storage/storageAccounts/schaedldstorage]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

Checking in the Azure portal shows the result.

Azure: resource group.

The resource group has been created, and if you select it you can see the storage account created inside it.

Azure: storage account.

Tearing the resources down again can then be done as follows:

terraform destroy
azurerm_resource_group.rg: Refreshing state... [id=/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground]
azurerm_storage_account.storage: Refreshing state... [id=/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground/providers/Microsoft.Storage/storageAccounts/schaedldstorage]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following
symbols:
  - destroy

Terraform will perform the following actions:

  # azurerm_resource_group.rg will be destroyed
  - resource "azurerm_resource_group" "rg" {
      - id       = "/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground" -> null
      - location = "switzerlandnorth" -> null
      - name     = "terrfaform-playground" -> null
      - tags     = {} -> null
    }

  # azurerm_storage_account.storage will be destroyed
  - resource "azurerm_storage_account" "storage" {
      - access_tier                       = "Hot" -> null
      - account_kind                      = "StorageV2" -> null
      - account_replication_type          = "LRS" -> null
      - account_tier                      = "Standard" -> null
      - allow_nested_items_to_be_public   = true -> null
      - enable_https_traffic_only         = true -> null
      - id                                = "/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground/providers/Microsoft.Storage/storageAccounts/schaedldstorage" -> null
      - infrastructure_encryption_enabled = false -> null
      - is_hns_enabled                    = false -> null
      - location                          = "switzerlandnorth" -> null
      - min_tls_version                   = "TLS1_2" -> null
      - name                              = "schaedldstorage" -> null
      - nfsv3_enabled                     = false -> null
      - primary_access_key                = (sensitive value)
      - primary_blob_connection_string    = (sensitive value)
      - primary_blob_endpoint             = "https://schaedldstorage.blob.core.windows.net/" -> null
      - primary_blob_host                 = "schaedldstorage.blob.core.windows.net" -> null
      - primary_connection_string         = (sensitive value)
      - primary_dfs_endpoint              = "https://schaedldstorage.dfs.core.windows.net/" -> null
      - primary_dfs_host                  = "schaedldstorage.dfs.core.windows.net" -> null
      - primary_file_endpoint             = "https://schaedldstorage.file.core.windows.net/" -> null
      - primary_file_host                 = "schaedldstorage.file.core.windows.net" -> null
      - primary_location                  = "switzerlandnorth" -> null
      - primary_queue_endpoint            = "https://schaedldstorage.queue.core.windows.net/" -> null
      - primary_queue_host                = "schaedldstorage.queue.core.windows.net" -> null
      - primary_table_endpoint            = "https://schaedldstorage.table.core.windows.net/" -> null
      - primary_table_host                = "schaedldstorage.table.core.windows.net" -> null
      - primary_web_endpoint              = "https://schaedldstorage.z1.web.core.windows.net/" -> null
      - primary_web_host                  = "schaedldstorage.z1.web.core.windows.net" -> null
      - queue_encryption_key_type         = "Service" -> null
      - resource_group_name               = "terrfaform-playground" -> null
      - secondary_access_key              = (sensitive value)
      - secondary_connection_string       = (sensitive value)
      - shared_access_key_enabled         = true -> null
      - table_encryption_key_type         = "Service" -> null
      - tags                              = {} -> null

      - blob_properties {
          - change_feed_enabled      = false -> null
          - last_access_time_enabled = false -> null
          - versioning_enabled       = false -> null
        }

      - network_rules {
          - bypass                     = [
              - "AzureServices",
            ] -> null
          - default_action             = "Allow" -> null
          - ip_rules                   = [] -> null
          - virtual_network_subnet_ids = [] -> null
        }

      - queue_properties {

          - hour_metrics {
              - enabled               = true -> null
              - include_apis          = true -> null
              - retention_policy_days = 7 -> null
              - version               = "1.0" -> null
            }

          - logging {
              - delete                = false -> null
              - read                  = false -> null
              - retention_policy_days = 0 -> null
              - version               = "1.0" -> null
              - write                 = false -> null
            }

          - minute_metrics {
              - enabled               = false -> null
              - include_apis          = false -> null
              - retention_policy_days = 0 -> null
              - version               = "1.0" -> null
            }
        }

      - share_properties {

          - retention_policy {
              - days = 7 -> null
            }
        }
    }

Plan: 0 to add, 0 to change, 2 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

azurerm_storage_account.storage: Destroying... [id=/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground/providers/Microsoft.Storage/storageAccounts/schaedldstorage]
azurerm_storage_account.storage: Destruction complete after 2s
azurerm_resource_group.rg: Destroying... [id=/subscriptions/YOURSUBSCRIPTION/resourceGroups/terrfaform-playground]
azurerm_resource_group.rg: Still destroying... [id=/subscriptions/YOURSUBSCRIPTION-...4/resourceGroups/terrfaform-playground, 10s elapsed]
azurerm_resource_group.rg: Destruction complete after 16s

Destroy complete! Resources: 2 destroyed.

After running this, nothing is left to see in the Azure portal either.

Azure: resource group removed.

Conclusion

A simple way to create infrastructure in Azure as well, without constantly signing in, and with the option of running Terraform automatically in a CI/CD pipeline.

Daniel Schädler: Quickstart: Publishing a static website on AWS

In this article I want to show the steps for publishing a static website, for example a landing page, with Terraform and AWS.

Prerequisites

  • An AWS account has been set up.
  • Terraform is installed and configured for access to AWS.

Procedure

The following steps are carried out with Terraform so that a static website can be published on AWS.

Creating the bucket

In the first step, a bucket is created.

    terraform {
      required_providers {
        aws = {
          source  = "hashicorp/aws"
          version = "~&gt; 3.0"
        }
      }
    }
    
    # Configure the AWS Provider
    provider "aws" {
      region = "eu-central-1"
      access_key = "DEIN SCHLÜSSEL"
      secret_key = "DEIN SCHLÜSSEL"
    }
    
    resource "aws_s3_bucket" "webapp" {
      bucket = "schaedld-webapp"
      object_lock_enabled = false   
    }

The commands terraform init, terraform plan -out sampleplan, terraform apply sampleplan and terraform destroy (use the latter with care in production) are then executed. They are used over and over again throughout this whole example.
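
For reference, this is the command sequence in the order it is used throughout the example (sampleplan is just the plan file name chosen here):

terraform init
terraform plan -out sampleplan
terraform apply sampleplan
terraform destroy    # be careful with this one in production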

Terraform init


terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 3.0"...
- Installing hashicorp/aws v3.75.1...
- Installed hashicorp/aws v3.75.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

Terraform plan

    terraform plan -out sampleplan
    
    Terraform used the selected providers to generate the following execution plan. Resource actions are indicated with the following symbols:
      + create
    
    Terraform will perform the following actions:
    
      # aws_s3_bucket.webapp will be created
      + resource "aws_s3_bucket" "webapp" {
          + acceleration_status         = (known after apply)
          + acl                         = "private"
          + arn                         = (known after apply)
          + bucket                      = "schaedld-webapp"
          + bucket_domain_name          = (known after apply)
          + bucket_regional_domain_name = (known after apply)
          + force_destroy               = false
          + hosted_zone_id              = (known after apply)
          + id                          = (known after apply)
          + object_lock_enabled         = false
          + region                      = (known after apply)
          + request_payer               = (known after apply)
          + tags_all                    = (known after apply)
          + website_domain              = (known after apply)
          + website_endpoint            = (known after apply)
    
          + object_lock_configuration {
              + object_lock_enabled = (known after apply)
    
              + rule {
                  + default_retention {
                      + days  = (known after apply)
                      + mode  = (known after apply)
                      + years = (known after apply)
                    }
                }
            }
    
          + versioning {
              + enabled    = (known after apply)
              + mfa_delete = (known after apply)
            }
        }
    
    Plan: 1 to add, 0 to change, 0 to destroy.
    
    ─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────── 
    
    Saved the plan to: sampleplan
    
    To perform exactly these actions, run the following command to apply:
        terraform apply "sampleplan"

Terraform apply


    terraform apply sampleplan    
    aws_s3_bucket.webapp: Creating...
    aws_s3_bucket.webapp: Creation complete after 2s [id=schaedld-webapp]
    
    Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

The bucket has been created in AWS.

AWS: bucket created successfully.

Now the remaining elements can be added.

Creating the website configuration

So that the bucket also works as a website for serving static content, a website configuration element has to be added in Terraform. (For reasons of space, I have omitted the previous steps.)

resource "aws_s3_bucket_website_configuration" "webappcfg" {
  bucket = aws_s3_bucket.webapp.bucket
  
  index_document {
    suffix = "index.html"    
  }  
}

If terraform apply sampleplan is now executed, you can see that 2 resources have been created.

terraform apply sampleplan
aws_s3_bucket.webapp: Creating...
aws_s3_bucket.webapp: Creation complete after 1s [id=schaedld-webapp]
aws_s3_bucket_website_configuration.webappcfg: Creating...
aws_s3_bucket_website_configuration.webappcfg: Creation complete after 0s [id=schaedld-webapp]

Apply complete! Resources: 2 added, 0 changed, 0 destroyed.

If you click on the bucket object in the portal, you get to the object's management page.

AWS: bucket object

If you select the Properties option, you get to the bucket's settings. Scrolling to the bottom of the page, the following item can be seen:

Hosting a static website.

Here you can see that this option is enabled. If you now click the provided link, you get to a page that denies access, because there is no object with public read access yet.

AWS: Access Denied.
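
The same check can be reproduced from the command line, for example with curl. The endpoint below is only derived from the bucket name and region used in this example; the exact URL is shown under "Static website hosting" in the bucket properties:

curl -I http://schaedld-webapp.s3-website.eu-central-1.amazonaws.com   # expect a 403 until a publicly readable object exists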

Now we can continue with the next step: creating an object for the bucket.

Creating an object in the bucket

As the last piece of the puzzle, the object for the bucket has to be added. In this example it is a simple index.html that could serve as a landing page if you have just acquired a domain and are building up the website.

resource "aws_s3_bucket_object" "index" {
  bucket = aws_s3_bucket.webapp.bucket
  content_type = "text/html"
  key = "index.html"
  content = <<EOF
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js"></script>
  </head>
  <body>
    <div class="container py-4">
      <div class="p-5 mb-4 bg-light rounded-3">
        <div class="container-fluid py-5">
          <h1 class="display-5 fw-bold">Custom jumbotron</h1>
          <p class="col-md-8 fs-4">Using a series of utilities, you can create this jumbotron, just like the one in previous versions of Bootstrap. Check out the examples below for how you can remix and restyle it to your liking.</p>
          Example button
        </div>
      </div>
      <footer class="pt-3 mt-4 text-muted border-top">
        © 2021
      </footer>
    </div>
  </body>
</html>
  EOF
  acl = "public-read"
}

In this example, the website that is created is embedded as multi-line content using a Terraform heredoc string.

Running terraform apply shows that a third resource has been created.

    terraform apply sampleplan
    aws_s3_bucket.webapp: Creating...
    aws_s3_bucket.webapp: Creation complete after 2s [id=schaedld-webapp]
    aws_s3_bucket_website_configuration.webappcfg: Creating...
    aws_s3_bucket_object.index: Creating...
    aws_s3_bucket_object.index: Creation complete after 0s [id=index.html]
    aws_s3_bucket_website_configuration.webappcfg: Creation complete after 0s [id=schaedld-webapp]
    
    Apply complete! Resources: 3 added, 0 changed, 0 destroyed.

Checking the bucket in the AWS portal reveals that this file has been created.

AWS: bucket object created.

If the link in the bucket properties under "Hosting a static website" is clicked, the Access Denied message is no longer shown; instead the provided website appears.

AWS: landing page created.

The complete Terraform configuration then looks like this:

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "eu-central-1"
  access_key = "AKIAXEP7TRWQSFUAJI65"
  secret_key = "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"
}

resource "aws_s3_bucket" "webapp" {
  bucket = "schaedld-webapp"
  object_lock_enabled = false   
}


resource "aws_s3_bucket_website_configuration" "webappcfg" {
  bucket = aws_s3_bucket.webapp.bucket
  
  index_document {
    suffix = "index.html"    
  }  
}

resource "aws_s3_bucket_object" "index" {
  bucket = aws_s3_bucket.webapp.bucket
  content_type = "text/html"
  key = "index.html"
  content = <<EOT
<!DOCTYPE html>
<html>
  <head>
    <script src="https://cdn.jsdelivr.net/npm/bootstrap@5.1.3/dist/js/bootstrap.bundle.min.js"></script>
  </head>
  <body>
    <div class="container py-4">
      <div class="p-5 mb-4 bg-light rounded-3">
        <div class="container-fluid py-5">
          <h1 class="display-5 fw-bold">Willkommen</h1>
          <p class="col-md-8 fs-4">Willkommen auf der Landing Page der Firma Software Sorglos.</p>
        </div>
      </div>
      <footer class="pt-3 mt-4 text-muted border-top">
        © 2021
      </footer>
    </div>
  </body>
</html>
  EOT
  acl = "public-read"
}

Conclusion

With only a little effort, a first point of contact for a new company can be provided on AWS. This is just an example and does not yet have any security features enabled (cf. Hosting a static website using Amazon S3).

Holger Schwichtenberg: Magdeburger Developer Days from May 16 to 18, 2022

Tickets for this community event for developers are available starting at 40 euros.

Sebastian Seidel: Creating a .NET MAUI Maps Control

I am currently working on porting a Xamarin.Forms app to .NET MAUI. The app also uses maps from Apple or Google Maps to display locations. Even though there was no official support in MAUI until the release of .NET 7, I want to show you a way to display maps via a custom handler.

Daniel Schädler: Quickstart with AWS and Terraform

In this article I explain how to prepare yourself so that you can work with Terraform and AWS.

What do you need?

The following prerequisites must be met:

  1. Create a user in the AWS account that can then access the API.
  2. Download the AWS extensions for VS Code and configure them for access to AWS.
  3. Create access keys and download them as CSV.
  4. Verify access via the extension from Visual Studio Code.
  5. Terraform must be installed.

Creating an IAM user in AWS

To be able to create resources on AWS in an automated way, a so-called IAM user must be created first. You can proceed as follows (a CLI alternative is sketched after these steps).

  1. In the AWS console, navigate to Identity and Access Management (IAM). You should land on the right screen straight away.
Creating an IAM user in AWS.
  2. Pressing the "Add user" button takes you to the following view for parameterizing the user.
Add a new user.

It is important to tick the checkbox for the CLI tools (programmatic access) so that Terraform can access AWS later.

  3. Now the necessary permissions can be added.
Add permissions.

  4. Once the user has been created, copy the access key and the secret key or download them as CSV so that they can be used in the next steps.
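
As mentioned above, the same user can also be created from the command line with the AWS CLI. A minimal sketch; the user name is an assumption, and which policy you attach (here S3 full access, which is all this example needs) depends on your use case:

aws iam create-user --user-name terraform-user
aws iam attach-user-policy --user-name terraform-user \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
aws iam create-access-key --user-name terraform-user   # prints the access key and secret key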

AWS extension for Visual Studio Code

First, search the Visual Studio Code marketplace for the AWS extension so that it can be installed.

Add the AWS extension.

The AWS extensions are now installed and need to be configured. This can be done with the help of the AWS extension itself, which guides you through the setup process. It is important here that the access key and the secret key have been noted down.
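
If you prefer to configure the credentials outside the IDE, the same key pair can also be stored with the AWS CLI; a sketch of the interactive prompts:

aws configure
# AWS Access Key ID [None]:     <access key from the CSV>
# AWS Secret Access Key [None]: <secret key from the CSV>
# Default region name [None]:   eu-central-1
# Default output format [None]: json

This writes the values to ~/.aws/credentials, which both the VS Code extension and Terraform can read.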

To test whether a connection to AWS can be established with the tools, after configuring them it is enough to create a resource from the menu bar and see whether the connection worked.

I tested this with an S3 bucket and proceeded as follows.

  1. Select the bucket option in the extension.
Create a bucket with the AWS extension.
  2. Then you only need to enter a name, which must be globally unique.
Enter a bucket name in the AWS extension.

  3. If you are successful and have picked a unique name, the bucket is created and you get a success message.
Bucket created successfully with the AWS extension.

Now all the steps are done and the connection to AWS works.

Terraform setup

For Terraform to work, the previously downloaded keys must be entered. The configuration then looks like this:

provider "aws" {
  region = "eu-central-1"
  access_key = "DEINACCESSKEY"
  secret_key = "DEINSECRETKEY"
}
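
Hard-coding the keys in the configuration is handy for a quickstart, but the AWS provider also reads the standard environment variables, which keeps the secrets out of the file; a minimal sketch:

export AWS_ACCESS_KEY_ID="YOUR ACCESS KEY"
export AWS_SECRET_ACCESS_KEY="YOUR SECRET KEY"
export AWS_DEFAULT_REGION="eu-central-1"
# with these set, an empty provider "aws" {} block is sufficient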

To verify here as well that the connection to AWS works, a bucket can be created with the following Terraform resources.

terraform {
  required_providers {
    aws = {
      source  = "hashicorp/aws"
      version = "~&gt; 3.0"
    }
  }
}

# Configure the AWS Provider
provider "aws" {
  region = "eu-central-1"
  access_key = "DEINACCESSKEY"
  secret_key = "DEINSECRETKEY"
}

resource "aws_s3_bucket" "samplebucket" {
  bucket = "schaedlds-sample-bucket"
  object_lock_enabled = false   
}

Terraform is then run step by step, starting with:

terraform init

Initializing the backend...

Initializing provider plugins...
- Finding hashicorp/aws versions matching "~> 3.0"...
- Installing hashicorp/aws v3.75.1...
- Installed hashicorp/aws v3.75.1 (signed by HashiCorp)

Terraform has created a lock file .terraform.lock.hcl to record the provider
selections it made above. Include this file in your version control repository
so that Terraform can guarantee to make the same selections by default when
you run "terraform init" in the future.

Terraform has been successfully initialized!

You may now begin working with Terraform. Try running "terraform plan" to see
any changes that are required for your infrastructure. All Terraform commands
should now work.

If you ever set or change modules or backend configuration for Terraform,
rerun this command to reinitialize your working directory. If you forget, other
commands will detect it and remind you to do so if necessary.
terraform plan -out sampleplan

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated    
with the following symbols:
  + create

Terraform will perform the following actions:

  # aws_s3_bucket.samplebucket will be created
  + resource "aws_s3_bucket" "samplebucket" {
      + acceleration_status         = (known after apply)
      + acl                         = "private"
      + arn                         = (known after apply)
      + bucket                      = "schaedlds-sample-bucket"
      + bucket_domain_name          = (known after apply)
      + bucket_regional_domain_name = (known after apply)
      + force_destroy               = false
      + hosted_zone_id              = (known after apply)
      + id                          = (known after apply)
      + object_lock_enabled         = false
      + region                      = (known after apply)
      + request_payer               = (known after apply)
      + tags_all                    = (known after apply)
      + website_domain              = (known after apply)
      + website_endpoint            = (known after apply)

      + object_lock_configuration {
          + object_lock_enabled = (known after apply)

          + rule {
              + default_retention {
                  + days  = (known after apply)
                  + mode  = (known after apply)
                  + years = (known after apply)
                }
            }
        }

      + versioning {
          + enabled    = (known after apply)
          + mfa_delete = (known after apply)
        }
    }

Plan: 1 to add, 0 to change, 0 to destroy.

───────────────────────────────────────────────────────────────────────────────────────────────────────────────── 

Saved the plan to: sampleplan
terraform apply sampleplan

aws_s3_bucket.samplebucket: Creating...
aws_s3_bucket.samplebucket: Creation complete after 2s [id=schaedlds-sample-bucket]

Apply complete! Resources: 1 added, 0 changed, 0 destroyed.

Checking in the AWS dashboard shows the result.

AWS portal: bucket successfully created with Terraform.

Tearing the resource down again can then be done as follows:

terraform destroy 

aws_s3_bucket.samplebucket: Refreshing state... [id=schaedlds-sample-bucket]

Terraform used the selected providers to generate the following execution plan. Resource actions are indicated    
with the following symbols:
  - destroy

Terraform will perform the following actions:

  # aws_s3_bucket.samplebucket will be destroyed
  - resource "aws_s3_bucket" "samplebucket" {
      - acl                         = "private" -> null
      - arn                         = "arn:aws:s3:::schaedlds-sample-bucket" -> null
      - bucket                      = "schaedlds-sample-bucket" -> null
      - bucket_domain_name          = "schaedlds-sample-bucket.s3.amazonaws.com" -> null
      - bucket_regional_domain_name = "schaedlds-sample-bucket.s3.eu-central-1.amazonaws.com" -> null
      - force_destroy               = false -> null
      - hosted_zone_id              = "Z21DNDUVLTQW6Q" -> null
      - id                          = "schaedlds-sample-bucket" -> null
      - object_lock_enabled         = false -> null
      - region                      = "eu-central-1" -> null
      - request_payer               = "BucketOwner" -> null
      - tags                        = {} -> null
      - tags_all                    = {} -> null

      - versioning {
          - enabled    = false -> null
          - mfa_delete = false -> null
        }
    }

Plan: 0 to add, 0 to change, 1 to destroy.

Do you really want to destroy all resources?
  Terraform will destroy all your managed infrastructure, as shown above.
  There is no undo. Only 'yes' will be accepted to confirm.

  Enter a value: yes

aws_s3_bucket.samplebucket: Destroying... [id=schaedlds-sample-bucket]
aws_s3_bucket.samplebucket: Destruction complete after 0s

Destroy complete! Resources: 1 destroyed.

After running this, nothing is left to see in the AWS portal either.

AWS portal: resource torn down with Terraform.

Conclusion

A simple way to create and delete infrastructure in AWS as well, but with different concepts than in Azure.

Martin Richter: The constant helper in everyday (file) work: SpeedCommander

For more than 10 years now, SpeedCommander has been my daily companion on my private PCs and on my company PC. Somehow I thought this deserves a mention, even though I hardly blog anymore.

I simply want to list the features that I really use every single day.

  • Built-in packer/unpacker for umpteen formats (including some esoteric but useful ones).
  • Self-extracting files are (if desired) opened as archives.
  • Simple file filter to show only certain files.
  • Complex renaming of files with file patterns, regex filters and replacement functions.
  • Branch view (view of all files, including files in subfolders).
  • Simple comparison functions between two folders.
  • Complex synchronization functions between folders.
  • Quick view for an extremely large number of file formats (for me, always important for EXE/DLL files).
  • FTP/SFTP client (I only use FileZilla in exceptional cases).
  • Very good built-in editor.
  • Direct access to cloud storage (Dropbox/OneDrive etc.).

That is probably not even the tip of the iceberg of everything SpeedCommander can do. But these are the features I no longer want to be without in my everyday work.


Copyright © 2017 Martin Richter

Holger Schwichtenberg: A first look at ahead-of-time compilation in .NET 7.0

.NET 7 brings the long-planned AOT compiler. A comparison with the JIT compiler gives a first impression of the memory footprint and the limitations.

Holger Schwichtenberg: End of support for .NET Framework 4.5.2, 4.6 and 4.6.1 as well as .NET 5.0

Some .NET developers who are not using the latest versions will have to update their software soon.

Daniel Schädler: A short story about paths

A short story about paths

In this short post I explain how to handle paths safely in cross-platform development.

Prerequisites

The following prerequisites are given.

An appsettings.json file that looks like this:

    "PlantUmlSettings": {
      "PlantUmlTheme": "plain",
      "PlantUmlArgs": "-jar plantuml.jar {0}\*{1}*{2} -o {3} -{4}",
      "PlantUmlEndTag": "@enduml",
      "PlantUmlExe": "plantUml\bin\java.exe",
      "PlantUmlFileSuffix": ".plantuml",
      "PlantUmlStartTag": "@startuml",
      "PlantUmlThemeTag": "!theme",
      "PlantUmlWorkDir": "plantUml\bin"
    }

A .NET application that can be started both on Windows and interactively in the GitLab runner.

Unfortunately, I always got the following error message when the application was started in the GitLab runner:

/src/Sample.Cli\plantUml//bin//java.exe' with working directory '/builds/Sample/src/Sample.Cli\plantUml//bin'. No such file or directory

A closer look

To get to the bottom of this, I wrote a small sample application to observe the behavior on both systems.

        static void Main(string[] args)
        {
            var environmentVariable = Environment.ExpandEnvironmentVariables("tmp");
            var tempPath = Path.GetTempPath();

            Console.WriteLine($"Value for Environment Variable {environmentVariable}");
            Console.WriteLine($"Value for {nameof(Path.GetTempPath)} {tempPath}");

            Console.ReadKey();
        }

If you then run this on a Windows system with dotnet as follows, the result looks like this:

PS C:\Users\schae> dotnet run --project D:\_Development_Projects\Repos\ConsoleApp1\ConsoleApp1\ConsoleApp1.csproj
Value for Environment Variable tmp
Value for GetTempPath C:\Users\schae\AppData\Local\Temp\

The result is as expected. Now let's look at it on WSL2.

root@GAMER-001:~/.dotnet# ./dotnet run --project /mnt/d/_Development_Projects/Repos/ConsoleApp1/ConsoleApp1/ConsoleApp1.
csproj
Value for Environment Variable tmp
Value for GetTempPath /tmp/

Now let's check whether the path actually exists:

root@GAMER-001:~/.dotnet# cd /tmp/
root@GAMER-001:/tmp#

And that is the path we hoped for.

The solution

After a little research in the Microsoft documentation I came across the article on DirectorySeparatorChar. It wasn't so much the method that helped me, but rather this excerpt:

The following example displays Path field values on Windows and on Unix-based systems. Note that Windows supports either the forward slash (which is returned by the AltDirectorySeparatorChar field) or the backslash (which is returned by the DirectorySeparatorChar field) as path separator characters, while Unix-based systems support only the forward slash

So Windows also supports forward slashes. Some of you will certainly have known this already, but from now on I will only use forward slashes when working with paths, even on Windows.

Of course I also tested this, namely in PowerShell Core.

PS C:\Temp> cd C:/Users
PS C:\Users>

Interestingly, when the path is known and you press the Tab key, Windows automatically turns the separators into backslashes.

Now, everywhere paths are used, the backslashes have to be replaced by forward slashes. The appsettings.json then looks like this:

    "PlantUmlSettings": {
      "PlantUmlTheme": "plain",
      "PlantUmlArgs": "-jar plantuml.jar {0}/*{1}*{2} -o {3} -{4}",
      "PlantUmlEndTag": "@enduml",
      "PlantUmlExe": "plantUml/bin/java.exe",
      "PlantUmlFileSuffix": ".plantuml",
      "PlantUmlStartTag": "@startuml",
      "PlantUmlThemeTag": "!theme",
      "PlantUmlWorkDir": "plantUml\bin"
    }

Now there are no more error messages saying that the path cannot be found.

Another thing I will take away is that from now on everything will be written in lowercase, including variables on Windows.

Conclusion

With simple means you can sometimes avoid hours of debugging and troubleshooting. I hope you enjoyed this post.

Daniel Schädler: Using Certbot with Azure

In this article I want to show how to use Certbot to obtain a certificate and then install it on Azure.

Prerequisites

The following prerequisites must be met:

  • The Windows Subsystem for Linux with Ubuntu must be installed.
  • Certbot must be installed in the Windows Subsystem for Linux.
  • Your own domain and a website must already exist.

Preparing the website

So that requests for files without an extension reach the ASP.NET application, the following setting must be made:

The example shows the web host configuration that is used for serving static files.

app.UseStaticFiles(new StaticFileOptions
{
    ServeUnknownFileTypes = true, // serve extensionless files

    OnPrepareResponse = staticFileResponseContext =>
    {
        // cache header to optimize for page speed
        const int durationInSeconds = 60 * 60 * 24 * 365;
        staticFileResponseContext.Context.Response.Headers[HeaderNames.CacheControl] =
            "public,max-age=" + durationInSeconds;
    }
});

Procedure

The procedure can be broken down into the following steps:

  1. Connect to the Azure website with the Azure CLI tools
  2. Prepare Certbot locally
  3. Create the files and folders on the website
  4. Continue with Certbot
  5. Convert the certificate
  6. Install the certificate on Azure

Connecting to the Azure website

The connection to Azure and the website is established as follows.

az login --use-device-code

az subscription --set <%subscriptionId%>

az webapp ssh -n <%webseiten-name%> -g <%resourcegruppenname%>

After logging in, you see the following Azure welcome screen:

Last login: Wed Apr  6 18:38:26 2022 from 169.254.130.3
  _____                               
  /  _  \ __________ _________   ____  
 /  /_\  \___   /  |  \_  __ \_/ __ \ 
/    |    \/    /|  |  /|  | \/\  ___/ 
\____|__  /_____ \____/ |__|    \___  >
        \/      \/                  \/ 
A P P   S E R V I C E   O N   L I N U X

Documentation: http://aka.ms/webapp-linux
Dotnet quickstart: https://aka.ms/dotnet-qs
ASP .NETCore Version: 6.0.0
Note: Any data outside '/home' is not persisted
root@bcd2c665073e:~/site/wwwroot# ^C
root@bcd2c665073e:~/site/wwwroot# 

Now navigate to the following folder:

cd /home/wwwroot/wwwroot

Now continue with the next step.

Preparing Certbot locally

If Certbot is not yet installed, this can be done with the following command:
sudo apt-get install certbot

Once Certbot is installed, it can be started as described below:

sudo certbot certonly -d www.dnug-bern.ch -d dnug-bern.ch --manual

Certbot starts and you see the following messages:

Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator manual, Installer None
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for www.dnug-bern.ch

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
NOTE: The IP of this machine will be publicly logged as having requested this
certificate. If you're running certbot in manual mode on a machine that is not
your server, please ensure you're okay with that.

Are you OK with your IP being logged?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o:

Confirm with Y and you will be given instructions on how to proceed.

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Create a file containing just this data:

JxOqa2wKiSbAq5R_o66Gs5_sEE9xhuDwyPbv6pfEOJ8.RN326tu-yly1wbWDsnoT5mbba-NazH6fhba6WeEfA2s

And make it available on your web server at this URL:

http://www.dnug-bern.ch/.well-known/acme-challenge/JxOqa2wKiSbAq5R_o66Gs5_sEE9xhuDwyPbv6pfEOJ8

Caution: do not confirm here yet, otherwise the validation will fail.

Creating the files and folders on the website

Now we can continue in the Azure website session that is already open, in the folder we navigated to earlier.

Now the folder that Certbot requires is created. To do this, proceed as follows:

mkdir .well-known
mkdir .well-known/acme-challenge

Now the file has to be created. In our case the file must be named: JxOqa2wKiSbAq5R_o66Gs5_sEE9xhuDwyPbv6pfEOJ8

To achieve this, start vim and insert the following line: JxOqa2wKiSbAq5R_o66Gs5_sEE9xhuDwyPbv6pfEOJ8.RN326tu-yly1wbWDsnoT5mbba-NazH6fhba6WeEfA2s. Then save the file (with :w in vim) under the name JxOqa2wKiSbAq5R_o66Gs5_sEE9xhuDwyPbv6pfEOJ8.
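
If you would rather avoid vim, the same file can be created in a single line (token and key authorization taken from the Certbot output above):

cd .well-known/acme-challenge
echo 'JxOqa2wKiSbAq5R_o66Gs5_sEE9xhuDwyPbv6pfEOJ8.RN326tu-yly1wbWDsnoT5mbba-NazH6fhba6WeEfA2s' > JxOqa2wKiSbAq5R_o66Gs5_sEE9xhuDwyPbv6pfEOJ8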

You can then test this once by calling the URL that Certbot specified. If this succeeds, the file is served and you see the string you entered.

Continuing with Certbot

Since Certbot is still waiting for confirmation before running the validation, we now press ENTER in the still-open dialog. If everything is in order, you get a message saying the validation was successful.

Converting the certificate

The PEM file then has to be converted into a PFX file. This is done as follows:

openssl pkcs12 -inkey privkey.pem -in cert.pem -export -out dnug.bern.pfx

Installing the certificate on Azure

Finally, the certificate has to be uploaded to Azure by following the corresponding documentation.
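
If you prefer the command line over the portal for this step as well, the upload and the SNI binding can be sketched with the Azure CLI roughly like this (the web app and resource group placeholders are the same as above; the thumbprint is printed by the upload command):

az webapp config ssl upload --certificate-file dnug.bern.pfx --certificate-password "<PFX PASSWORD>" -n <%webseiten-name%> -g <%resourcegruppenname%>

az webapp config ssl bind --certificate-thumbprint "<THUMBPRINT>" --ssl-type SNI -n <%webseiten-name%> -g <%resourcegruppenname%>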

Conclusion

This way the certificate can be obtained with Certbot in a few simple steps, manually, and no additional costs are incurred. The disadvantage is that the renewal has to be repeated at short intervals. I hope you enjoyed this blog post.

Jürgen Gutsch: ASP.​NET Core on .NET 7.0 - File upload and streams using Minimal API

It seems the Minimal API that was introduced in ASP.NET Core 6.0 will now be completed in 7.0. One feature that was sorely missed in 6.0 was file upload, as well as the possibility to read the request body as a stream. Let's have a look at how this works.

The Minimal API

Creating endpoints using the Minimal API is great for beginners, for small endpoints as in microservice applications, or if your endpoints need to be super fast, without the overhead of binding routes to controllers and actions. In any case, endpoints created with the Minimal API are quite useful.

By adding the features mentioned above they become even more useful. And many more Minimal API improvements will come in ASP.NET Core 7.0.

To try this I created a new empty web app using the .NET CLI

dotnet new web -n MinimalApi -o MinimalApi
cd MinimalApi
code .

This will create the new project and open it in VSCode.

Inside VSCode, open the Program.cs, which should look like this:

var builder = WebApplication.CreateBuilder(args);
var app = builder.Build();

app.MapGet("/", () => "Hello World!");

app.Run();

Here we see a simple endpoint that sends a "Hello World!" on a GET request.

Uploading files using IFormFile and IFormFileCollection

To upload files, we should map an endpoint that listens for POST requests.

Inside the Program.cs, let's create two endpoints: one that receives an IFormFile and another one that receives an IFormFileCollection:

app.MapPost("/upload", async(IFormFile file) =>
{
    string tempfile = CreateTempfilePath();
    using var stream = File.OpenWrite(tempfile);
    await file.CopyToAsync(stream);

    // do more fancy stuff with the IFormFile
});

app.MapPost("/uploadmany", async (IFormFileCollection myFiles) => 
{
    foreach (var file in myFiles)
    {
        string tempfile = CreateTempfilePath();
        using var stream = File.OpenWrite(tempfile);
        await file.CopyToAsync(stream);

        // do more fancy stuff with the IFormFile
    }
});

The IFormFile is the regular Microsoft.AspNetCore.Http.IFormFile interface that contains all the useful information about the uploaded file, like FileName, ContentType, Length, etc.

The CreateTempfilePath that is used here is a small method I wrote to generate a temp file and a path to it. It also creates the folder in case it doesn't exist:

static string CreateTempfilePath()
{
    var filename = $"{Guid.NewGuid()}.tmp";
    var directoryPath = Path.Combine("temp", "uploads");
    if (!Directory.Exists(directoryPath)) Directory.CreateDirectory(directoryPath);

    return Path.Combine(directoryPath, filename);
}

The creation of a temporary filename like this is needed because the actual filename and extension should not be exposed to the filesystem, for security reasons.

Once the file is saved, you can do whatever you need to do with it.
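
To try the two endpoints without writing a client, you can post files with curl. The port is an assumption; use whatever your launch profile shows. For the IFormFile parameter the form field name should match the parameter name (file), and for the collection endpoint you can simply repeat the field:

# single file upload
curl -F "file=@./sample.jpg" http://localhost:5000/upload

# multiple files in one request
curl -F "myFiles=@./one.jpg" -F "myFiles=@./two.jpg" http://localhost:5000/uploadmany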

Important note: Currently the file upload doesn't work if there is a cookie header in the POST request or if authentication is enabled. This will be fixed in one of the next preview versions. For now, you should delete the cookies before sending the request.

iformfile

Read the request body as stream

This is cool: you can now read the body of a request as a stream and do whatever you like with it. To try it out, I created another endpoint in the Program.cs:

app.MapPost("v2/stream", async (Stream body) =>
{
    string tempfile = CreateTempfilePath();
    using var stream = File.OpenWrite(tempfile);
    await body.CopyToAsync(stream);
});

I'm going to use this endpoint to store a binary in the file system. By the way: this stream is read-only and not buffered, which means it can only be read once:

request body as stream

It works the same way when using a PipeReader instead of a Stream:

app.MapPost("v3/stream", async (PipeReader body) =>
{
    string tempfile = CreateTempfilePath();
    using var stream = File.OpenWrite(tempfile);
    await body.CopyToAsync(stream);
});
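
Both stream endpoints can be tested the same way by posting raw bytes, for example with curl (file name and port are assumptions):

curl --data-binary "@./sample.bin" -H "Content-Type: application/octet-stream" http://localhost:5000/v2/stream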

Conclusion

This feature makes the Minimal API much more useful. What do you think? Please drop a comment with your opinion.

These aren't the only new features coming in ASP.NET Core 7.0; many more will follow. I'm really looking forward to the route grouping that is announced in the roadmap.
